text
stringlengths
493
2.1k
inputs
dict
prediction
null
prediction_agent
null
annotation
stringclasses
2 values
annotation_agent
stringclasses
1 value
vectors
null
multi_label
bool
1 class
explanation
null
id
stringlengths
36
36
metadata
null
status
stringclasses
2 values
event_timestamp
stringlengths
26
26
metrics
dict
label
class label
2 classes
TITLE: Evaluating and Crafting Datasets Effective for Deep Learning With Data Maps ABSTRACT: Rapid development in deep learning model construction has prompted an increased need for appropriate training data. The popularity of large datasets - sometimes known as "big data" - has diverted attention from assessing their quality. Training on large datasets often requires excessive system resources and an infeasible amount of time. Furthermore, the supervised machine learning process has yet to be fully automated: for supervised learning, large datasets require more time for manually labeling samples. We propose a method of curating smaller datasets with comparable out-of-distribution model accuracy after an initial training session using an appropriate distribution of samples classified by how difficult it is for a model to learn from them.
{ "abstract": "Rapid development in deep learning model construction has prompted an\nincreased need for appropriate training data. The popularity of large datasets\n- sometimes known as \"big data\" - has diverted attention from assessing their\nquality. Training on large datasets often requires excessive system resources\nand an infeasible amount of time. Furthermore, the supervised machine learning\nprocess has yet to be fully automated: for supervised learning, large datasets\nrequire more time for manually labeling samples. We propose a method of\ncurating smaller datasets with comparable out-of-distribution model accuracy\nafter an initial training session using an appropriate distribution of samples\nclassified by how difficult it is for a model to learn from them.", "title": "Evaluating and Crafting Datasets Effective for Deep Learning With Data Maps", "url": "http://arxiv.org/abs/2208.10033v2" }
null
null
no_new_dataset
admin
null
false
null
35163b28-914d-472d-b705-15c31d09c376
null
Validated
2023-10-04 15:19:51.884649
{ "text_length": 866 }
1no_new_dataset
TITLE: ApacheJIT: A Large Dataset for Just-In-Time Defect Prediction ABSTRACT: In this paper, we present ApacheJIT, a large dataset for Just-In-Time defect prediction. ApacheJIT consists of clean and bug-inducing software changes in popular Apache projects. ApacheJIT has a total of 106,674 commits (28,239 bug-inducing and 78,435 clean commits). Having a large number of commits makes ApacheJIT a suitable dataset for machine learning models, especially deep learning models that require large training sets to effectively generalize the patterns present in the historical data to future data.
{ "abstract": "In this paper, we present ApacheJIT, a large dataset for Just-In-Time defect\nprediction. ApacheJIT consists of clean and bug-inducing software changes in\npopular Apache projects. ApacheJIT has a total of 106,674 commits (28,239\nbug-inducing and 78,435 clean commits). Having a large number of commits makes\nApacheJIT a suitable dataset for machine learning models, especially deep\nlearning models that require large training sets to effectively generalize the\npatterns present in the historical data to future data.", "title": "ApacheJIT: A Large Dataset for Just-In-Time Defect Prediction", "url": "http://arxiv.org/abs/2203.00101v2" }
null
null
new_dataset
admin
null
false
null
121a25ca-99a0-47c6-bce2-9de1da816757
null
Validated
2023-10-04 15:19:51.887974
{ "text_length": 611 }
0new_dataset
TITLE: Meta-Analysis and Systematic Review for Anomaly Network Intrusion Detection Systems: Detection Methods, Dataset, Validation Methodology, and Challenges ABSTRACT: Intrusion detection systems (IDSs) built on artificial intelligence (AI) are presented as latent mechanisms for actively detecting fresh attacks over a complex network. Although review papers are used the systematic review or simple methods to analyse and criticize the anomaly NIDS works, the current review uses a traditional way as a quantitative description to find current gaps by synthesizing and summarizing the data comparison without considering algorithms performance. This paper presents a systematic and meta-analysis study of AI for network intrusion detection systems (NIDS) focusing on deep learning (DL) and machine learning (ML) approaches in network security. Deep learning algorithms are explained in their structure, and data intrusion network is justified based on an infrastructure of networks and attack types. By conducting a meta-analysis and debating the validation of the DL and ML approach by effectiveness, used dataset, detected attacks, classification task, and time complexity, we offer a thorough benchmarking assessment of the current NIDS-based publications-based systematic approach. The proposed method is considered reviewing works for the anomaly-based network intrusion detection system (anomaly-NIDS) models. Furthermore, the effectiveness of proposed algorithms and selected datasets are discussed for the recent direction and improvements of ML and DL to the NIDS. The future trends for improving an anomaly-IDS for continuing detection in the evolution of cyberattacks are highlighted in several research studies.
{ "abstract": "Intrusion detection systems (IDSs) built on artificial intelligence (AI) are\npresented as latent mechanisms for actively detecting fresh attacks over a\ncomplex network. Although review papers are used the systematic review or\nsimple methods to analyse and criticize the anomaly NIDS works, the current\nreview uses a traditional way as a quantitative description to find current\ngaps by synthesizing and summarizing the data comparison without considering\nalgorithms performance. This paper presents a systematic and meta-analysis\nstudy of AI for network intrusion detection systems (NIDS) focusing on deep\nlearning (DL) and machine learning (ML) approaches in network security. Deep\nlearning algorithms are explained in their structure, and data intrusion\nnetwork is justified based on an infrastructure of networks and attack types.\nBy conducting a meta-analysis and debating the validation of the DL and ML\napproach by effectiveness, used dataset, detected attacks, classification task,\nand time complexity, we offer a thorough benchmarking assessment of the current\nNIDS-based publications-based systematic approach. The proposed method is\nconsidered reviewing works for the anomaly-based network intrusion detection\nsystem (anomaly-NIDS) models. Furthermore, the effectiveness of proposed\nalgorithms and selected datasets are discussed for the recent direction and\nimprovements of ML and DL to the NIDS. The future trends for improving an\nanomaly-IDS for continuing detection in the evolution of cyberattacks are\nhighlighted in several research studies.", "title": "Meta-Analysis and Systematic Review for Anomaly Network Intrusion Detection Systems: Detection Methods, Dataset, Validation Methodology, and Challenges", "url": "http://arxiv.org/abs/2308.02805v2" }
null
null
no_new_dataset
admin
null
false
null
2d8d4dba-2ff3-409d-8412-98cb5bda5335
null
Validated
2023-10-04 15:19:51.864449
{ "text_length": 1743 }
1no_new_dataset
TITLE: Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise ABSTRACT: Electromyography signals can be used as training data by machine learning models to classify various gestures. We seek to produce a model that can classify six different hand gestures with a limited number of samples that generalizes well to a wider audience while comparing the effect of our feature extraction results on model accuracy to other more conventional methods such as the use of AR parameters on a sliding window across the channels of a signal. We appeal to a set of more elementary methods such as the use of random bounds on a signal, but desire to show the power these methods can carry in an online setting where EMG classification is being conducted, as opposed to more complicated methods such as the use of the Fourier Transform. To augment our limited training data, we used a standard technique, known as jitter, where random noise is added to each observation in a channel wise manner. Once all datasets were produced using the above methods, we performed a grid search with Random Forest and XGBoost to ultimately create a high accuracy model. For human computer interface purposes, high accuracy classification of EMG signals is of particular importance to their functioning and given the difficulty and cost of amassing any sort of biomedical data in a high volume, it is valuable to have techniques that can work with a low amount of high-quality samples with less expensive feature extraction methods that can reliably be carried out in an online application.
{ "abstract": "Electromyography signals can be used as training data by machine learning\nmodels to classify various gestures. We seek to produce a model that can\nclassify six different hand gestures with a limited number of samples that\ngeneralizes well to a wider audience while comparing the effect of our feature\nextraction results on model accuracy to other more conventional methods such as\nthe use of AR parameters on a sliding window across the channels of a signal.\nWe appeal to a set of more elementary methods such as the use of random bounds\non a signal, but desire to show the power these methods can carry in an online\nsetting where EMG classification is being conducted, as opposed to more\ncomplicated methods such as the use of the Fourier Transform. To augment our\nlimited training data, we used a standard technique, known as jitter, where\nrandom noise is added to each observation in a channel wise manner. Once all\ndatasets were produced using the above methods, we performed a grid search with\nRandom Forest and XGBoost to ultimately create a high accuracy model. For human\ncomputer interface purposes, high accuracy classification of EMG signals is of\nparticular importance to their functioning and given the difficulty and cost of\namassing any sort of biomedical data in a high volume, it is valuable to have\ntechniques that can work with a low amount of high-quality samples with less\nexpensive feature extraction methods that can reliably be carried out in an\nonline application.", "title": "Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise", "url": "http://arxiv.org/abs/2206.14947v1" }
null
null
no_new_dataset
admin
null
false
null
a5d58f80-5c31-4293-a7aa-0b00cd4d7b74
null
Validated
2023-10-04 15:19:51.885486
{ "text_length": 1640 }
1no_new_dataset
TITLE: IMDB-WIKI-SbS: An Evaluation Dataset for Crowdsourced Pairwise Comparisons ABSTRACT: Today, comprehensive evaluation of large-scale machine learning models is possible thanks to the open datasets produced using crowdsourcing, such as SQuAD, MS COCO, ImageNet, SuperGLUE, etc. These datasets capture objective responses, assuming the single correct answer, which does not allow to capture the subjective human perception. In turn, pairwise comparison tasks, in which one has to choose between only two options, allow taking peoples' preferences into account for very challenging artificial intelligence tasks, such as information retrieval and recommender system evaluation. Unfortunately, the available datasets are either small or proprietary, slowing down progress in gathering better feedback from human users. In this paper, we present IMDB-WIKI-SbS, a new large-scale dataset for evaluating pairwise comparisons. It contains 9,150 images appearing in 250,249 pairs annotated on a crowdsourcing platform. Our dataset has balanced distributions of age and gender using the well-known IMDB-WIKI dataset as ground truth. We describe how our dataset is built and then compare several baseline methods, indicating its suitability for model evaluation.
{ "abstract": "Today, comprehensive evaluation of large-scale machine learning models is\npossible thanks to the open datasets produced using crowdsourcing, such as\nSQuAD, MS COCO, ImageNet, SuperGLUE, etc. These datasets capture objective\nresponses, assuming the single correct answer, which does not allow to capture\nthe subjective human perception. In turn, pairwise comparison tasks, in which\none has to choose between only two options, allow taking peoples' preferences\ninto account for very challenging artificial intelligence tasks, such as\ninformation retrieval and recommender system evaluation. Unfortunately, the\navailable datasets are either small or proprietary, slowing down progress in\ngathering better feedback from human users. In this paper, we present\nIMDB-WIKI-SbS, a new large-scale dataset for evaluating pairwise comparisons.\nIt contains 9,150 images appearing in 250,249 pairs annotated on a\ncrowdsourcing platform. Our dataset has balanced distributions of age and\ngender using the well-known IMDB-WIKI dataset as ground truth. We describe how\nour dataset is built and then compare several baseline methods, indicating its\nsuitability for model evaluation.", "title": "IMDB-WIKI-SbS: An Evaluation Dataset for Crowdsourced Pairwise Comparisons", "url": "http://arxiv.org/abs/2110.14990v2" }
null
null
new_dataset
admin
null
false
null
58afb829-e708-47a6-942f-49ee7c4e0f64
null
Validated
2023-10-04 15:19:51.890222
{ "text_length": 1274 }
0new_dataset
TITLE: AVIDa-hIL6: A Large-Scale VHH Dataset Produced from an Immunized Alpaca for Predicting Antigen-Antibody Interactions ABSTRACT: Antibodies have become an important class of therapeutic agents to treat human diseases. To accelerate therapeutic antibody discovery, computational methods, especially machine learning, have attracted considerable interest for predicting specific interactions between antibody candidates and target antigens such as viruses and bacteria. However, the publicly available datasets in existing works have notable limitations, such as small sizes and the lack of non-binding samples and exact amino acid sequences. To overcome these limitations, we have developed AVIDa-hIL6, a large-scale dataset for predicting antigen-antibody interactions in the variable domain of heavy chain of heavy chain antibodies (VHHs), produced from an alpaca immunized with the human interleukin-6 (IL-6) protein, as antigens. By leveraging the simple structure of VHHs, which facilitates identification of full-length amino acid sequences by DNA sequencing technology, AVIDa-hIL6 contains 573,891 antigen-VHH pairs with amino acid sequences. All the antigen-VHH pairs have reliable labels for binding or non-binding, as generated by a novel labeling method. Furthermore, via introduction of artificial mutations, AVIDa-hIL6 contains 30 different mutants in addition to wild-type IL-6 protein. This characteristic provides opportunities to develop machine learning models for predicting changes in antibody binding by antigen mutations. We report experimental benchmark results on AVIDa-hIL6 by using neural network-based baseline models. The results indicate that the existing models have potential, but further research is needed to generalize them to predict effective antibodies against unknown mutants. The dataset is available at https://avida-hil6.cognanous.com.
{ "abstract": "Antibodies have become an important class of therapeutic agents to treat\nhuman diseases. To accelerate therapeutic antibody discovery, computational\nmethods, especially machine learning, have attracted considerable interest for\npredicting specific interactions between antibody candidates and target\nantigens such as viruses and bacteria. However, the publicly available datasets\nin existing works have notable limitations, such as small sizes and the lack of\nnon-binding samples and exact amino acid sequences. To overcome these\nlimitations, we have developed AVIDa-hIL6, a large-scale dataset for predicting\nantigen-antibody interactions in the variable domain of heavy chain of heavy\nchain antibodies (VHHs), produced from an alpaca immunized with the human\ninterleukin-6 (IL-6) protein, as antigens. By leveraging the simple structure\nof VHHs, which facilitates identification of full-length amino acid sequences\nby DNA sequencing technology, AVIDa-hIL6 contains 573,891 antigen-VHH pairs\nwith amino acid sequences. All the antigen-VHH pairs have reliable labels for\nbinding or non-binding, as generated by a novel labeling method. Furthermore,\nvia introduction of artificial mutations, AVIDa-hIL6 contains 30 different\nmutants in addition to wild-type IL-6 protein. This characteristic provides\nopportunities to develop machine learning models for predicting changes in\nantibody binding by antigen mutations. We report experimental benchmark results\non AVIDa-hIL6 by using neural network-based baseline models. The results\nindicate that the existing models have potential, but further research is\nneeded to generalize them to predict effective antibodies against unknown\nmutants. The dataset is available at https://avida-hil6.cognanous.com.", "title": "AVIDa-hIL6: A Large-Scale VHH Dataset Produced from an Immunized Alpaca for Predicting Antigen-Antibody Interactions", "url": "http://arxiv.org/abs/2306.03329v1" }
null
null
new_dataset
admin
null
false
null
73cfb3fe-fa67-4807-b85d-58361edbceac
null
Validated
2023-10-04 15:19:51.874639
{ "text_length": 1897 }
0new_dataset
TITLE: HandCT: hands-on computational dataset for X-Ray Computed Tomography and Machine-Learning ABSTRACT: Machine-learning methods rely on sufficiently large dataset to learn data distributions. They are widely used in research in X-Ray Computed Tomography, from low-dose scan denoising to optimisation of the reconstruction process. The lack of datasets prevents the scalability of these methods to realistic 3D problems. We develop a 3D procedural dataset in order to produce samples for data-driven algorithms. It is made of a meshed model of a left hand and a script to randomly change its anatomic properties and pose whilst conserving realistic features. This open-source solution relies on the freeware Blender and its Python core. Blender handles the modelling, the mesh and the generation of the hand's pose, whilst Python processes file format conversion from obj file to matrix and functions to scale and center the volume for further processing. Dataset availability and quality drives research in machine-learning. We design a dataset that weighs few megabytes, provides truthful samples and proposes continuous enhancements using version control. We anticipate this work to be a starting point for anatomically accurate procedural datasets. For instance, by adding more internal features and fine tuning their X-Ray attenuation properties.
{ "abstract": "Machine-learning methods rely on sufficiently large dataset to learn data\ndistributions. They are widely used in research in X-Ray Computed Tomography,\nfrom low-dose scan denoising to optimisation of the reconstruction process. The\nlack of datasets prevents the scalability of these methods to realistic 3D\nproblems. We develop a 3D procedural dataset in order to produce samples for\ndata-driven algorithms. It is made of a meshed model of a left hand and a\nscript to randomly change its anatomic properties and pose whilst conserving\nrealistic features. This open-source solution relies on the freeware Blender\nand its Python core. Blender handles the modelling, the mesh and the generation\nof the hand's pose, whilst Python processes file format conversion from obj\nfile to matrix and functions to scale and center the volume for further\nprocessing. Dataset availability and quality drives research in\nmachine-learning. We design a dataset that weighs few megabytes, provides\ntruthful samples and proposes continuous enhancements using version control. We\nanticipate this work to be a starting point for anatomically accurate\nprocedural datasets. For instance, by adding more internal features and fine\ntuning their X-Ray attenuation properties.", "title": "HandCT: hands-on computational dataset for X-Ray Computed Tomography and Machine-Learning", "url": "http://arxiv.org/abs/2304.14412v1" }
null
null
new_dataset
admin
null
false
null
2ee4198f-4a93-495b-8668-61d2c2b1454f
null
Validated
2023-10-04 15:19:51.879852
{ "text_length": 1371 }
0new_dataset
TITLE: Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation ABSTRACT: Distribution shift is a major source of failure for machine learning models. However, evaluating model reliability under distribution shift can be challenging, especially since it may be difficult to acquire counterfactual examples that exhibit a specified shift. In this work, we introduce the notion of a dataset interface: a framework that, given an input dataset and a user-specified shift, returns instances from that input distribution that exhibit the desired shift. We study a number of natural implementations for such an interface, and find that they often introduce confounding shifts that complicate model evaluation. Motivated by this, we propose a dataset interface implementation that leverages Textual Inversion to tailor generation to the input distribution. We then demonstrate how applying this dataset interface to the ImageNet dataset enables studying model behavior across a diverse array of distribution shifts, including variations in background, lighting, and attributes of the objects. Code available at https://github.com/MadryLab/dataset-interfaces.
{ "abstract": "Distribution shift is a major source of failure for machine learning models.\nHowever, evaluating model reliability under distribution shift can be\nchallenging, especially since it may be difficult to acquire counterfactual\nexamples that exhibit a specified shift. In this work, we introduce the notion\nof a dataset interface: a framework that, given an input dataset and a\nuser-specified shift, returns instances from that input distribution that\nexhibit the desired shift. We study a number of natural implementations for\nsuch an interface, and find that they often introduce confounding shifts that\ncomplicate model evaluation. Motivated by this, we propose a dataset interface\nimplementation that leverages Textual Inversion to tailor generation to the\ninput distribution. We then demonstrate how applying this dataset interface to\nthe ImageNet dataset enables studying model behavior across a diverse array of\ndistribution shifts, including variations in background, lighting, and\nattributes of the objects. Code available at\nhttps://github.com/MadryLab/dataset-interfaces.", "title": "Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation", "url": "http://arxiv.org/abs/2302.07865v2" }
null
null
no_new_dataset
admin
null
false
null
4d85d106-63de-4ef5-a87d-554736f13dfc
null
Validated
2023-10-04 15:19:51.881171
{ "text_length": 1202 }
1no_new_dataset
TITLE: Ensembled sparse-input hierarchical networks for high-dimensional datasets ABSTRACT: Neural networks have seen limited use in prediction for high-dimensional data with small sample sizes, because they tend to overfit and require tuning many more hyperparameters than existing off-the-shelf machine learning methods. With small modifications to the network architecture and training procedure, we show that dense neural networks can be a practical data analysis tool in these settings. The proposed method, Ensemble by Averaging Sparse-Input Hierarchical networks (EASIER-net), appropriately prunes the network structure by tuning only two L1-penalty parameters, one that controls the input sparsity and another that controls the number of hidden layers and nodes. The method selects variables from the true support if the irrelevant covariates are only weakly correlated with the response; otherwise, it exhibits a grouping effect, where strongly correlated covariates are selected at similar rates. On a collection of real-world datasets with different sizes, EASIER-net selected network architectures in a data-adaptive manner and achieved higher prediction accuracy than off-the-shelf methods on average.
{ "abstract": "Neural networks have seen limited use in prediction for high-dimensional data\nwith small sample sizes, because they tend to overfit and require tuning many\nmore hyperparameters than existing off-the-shelf machine learning methods. With\nsmall modifications to the network architecture and training procedure, we show\nthat dense neural networks can be a practical data analysis tool in these\nsettings. The proposed method, Ensemble by Averaging Sparse-Input Hierarchical\nnetworks (EASIER-net), appropriately prunes the network structure by tuning\nonly two L1-penalty parameters, one that controls the input sparsity and\nanother that controls the number of hidden layers and nodes. The method selects\nvariables from the true support if the irrelevant covariates are only weakly\ncorrelated with the response; otherwise, it exhibits a grouping effect, where\nstrongly correlated covariates are selected at similar rates. On a collection\nof real-world datasets with different sizes, EASIER-net selected network\narchitectures in a data-adaptive manner and achieved higher prediction accuracy\nthan off-the-shelf methods on average.", "title": "Ensembled sparse-input hierarchical networks for high-dimensional datasets", "url": "http://arxiv.org/abs/2005.04834v1" }
null
null
no_new_dataset
admin
null
false
null
14704424-dea0-4393-ae4b-e37954c74da8
null
Validated
2023-10-04 15:19:51.900022
{ "text_length": 1231 }
1no_new_dataset
TITLE: HealthFC: A Dataset of Health Claims for Evidence-Based Medical Fact-Checking ABSTRACT: Seeking health-related advice on the internet has become a common practice in the digital era. Determining the trustworthiness of medical claims found online and finding appropriate evidence for this information is increasingly challenging. Fact-checking has emerged as an approach to assess the veracity of factual claims using evidence from credible knowledge sources. To help advance the automation of this task, in this paper, we introduce a novel dataset of 750 health-related claims, labeled for veracity by medical experts and backed with evidence from appropriate clinical studies. We provide an analysis of the dataset, highlighting its characteristics and challenges. The dataset can be used for Machine Learning tasks related to automated fact-checking such as evidence retrieval, veracity prediction, and explanation generation. For this purpose, we provide baseline models based on different approaches, examine their performance, and discuss the findings.
{ "abstract": "Seeking health-related advice on the internet has become a common practice in\nthe digital era. Determining the trustworthiness of medical claims found online\nand finding appropriate evidence for this information is increasingly\nchallenging. Fact-checking has emerged as an approach to assess the veracity of\nfactual claims using evidence from credible knowledge sources. To help advance\nthe automation of this task, in this paper, we introduce a novel dataset of 750\nhealth-related claims, labeled for veracity by medical experts and backed with\nevidence from appropriate clinical studies. We provide an analysis of the\ndataset, highlighting its characteristics and challenges. The dataset can be\nused for Machine Learning tasks related to automated fact-checking such as\nevidence retrieval, veracity prediction, and explanation generation. For this\npurpose, we provide baseline models based on different approaches, examine\ntheir performance, and discuss the findings.", "title": "HealthFC: A Dataset of Health Claims for Evidence-Based Medical Fact-Checking", "url": "http://arxiv.org/abs/2309.08503v1" }
null
null
new_dataset
admin
null
false
null
309ac8b2-3681-43f5-b36c-348a03e320ab
null
Validated
2023-10-04 15:19:51.863518
{ "text_length": 1081 }
0new_dataset
TITLE: Uncovering bias in the PlantVillage dataset ABSTRACT: We report our investigation on the use of the popular PlantVillage dataset for training deep learning based plant disease detection models. We trained a machine learning model using only 8 pixels from the PlantVillage image backgrounds. The model achieved 49.0% accuracy on the held-out test set, well above the random guessing accuracy of 2.6%. This result indicates that the PlantVillage dataset contains noise correlated with the labels and deep learning models can easily exploit this bias to make predictions. Possible approaches to alleviate this problem are discussed.
{ "abstract": "We report our investigation on the use of the popular PlantVillage dataset\nfor training deep learning based plant disease detection models. We trained a\nmachine learning model using only 8 pixels from the PlantVillage image\nbackgrounds. The model achieved 49.0% accuracy on the held-out test set, well\nabove the random guessing accuracy of 2.6%. This result indicates that the\nPlantVillage dataset contains noise correlated with the labels and deep\nlearning models can easily exploit this bias to make predictions. Possible\napproaches to alleviate this problem are discussed.", "title": "Uncovering bias in the PlantVillage dataset", "url": "http://arxiv.org/abs/2206.04374v1" }
null
null
no_new_dataset
admin
null
false
null
7c0f2734-edfa-4f75-b357-07725c13c681
null
Validated
2023-10-04 15:19:51.885867
{ "text_length": 653 }
1no_new_dataset
TITLE: Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension ABSTRACT: Question answering (QA) is a fundamental means to facilitate assessment and training of narrative comprehension skills for both machines and young children, yet there is scarcity of high-quality QA datasets carefully designed to serve this purpose. In particular, existing datasets rarely distinguish fine-grained reading skills, such as the understanding of varying narrative elements. Drawing on the reading education research, we introduce FairytaleQA, a dataset focusing on narrative comprehension of kindergarten to eighth-grade students. Generated by educational experts based on an evidence-based theoretical framework, FairytaleQA consists of 10,580 explicit and implicit questions derived from 278 children-friendly stories, covering seven types of narrative elements or relations. Our dataset is valuable in two folds: First, we ran existing QA models on our dataset and confirmed that this annotation helps assess models' fine-grained learning skills. Second, the dataset supports question generation (QG) task in the education domain. Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.
{ "abstract": "Question answering (QA) is a fundamental means to facilitate assessment and\ntraining of narrative comprehension skills for both machines and young\nchildren, yet there is scarcity of high-quality QA datasets carefully designed\nto serve this purpose. In particular, existing datasets rarely distinguish\nfine-grained reading skills, such as the understanding of varying narrative\nelements. Drawing on the reading education research, we introduce FairytaleQA,\na dataset focusing on narrative comprehension of kindergarten to eighth-grade\nstudents. Generated by educational experts based on an evidence-based\ntheoretical framework, FairytaleQA consists of 10,580 explicit and implicit\nquestions derived from 278 children-friendly stories, covering seven types of\nnarrative elements or relations. Our dataset is valuable in two folds: First,\nwe ran existing QA models on our dataset and confirmed that this annotation\nhelps assess models' fine-grained learning skills. Second, the dataset supports\nquestion generation (QG) task in the education domain. Through benchmarking\nwith QG models, we show that the QG model trained on FairytaleQA is capable of\nasking high-quality and more diverse questions.", "title": "Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension", "url": "http://arxiv.org/abs/2203.13947v1" }
null
null
new_dataset
admin
null
false
null
9100c8ff-d2e7-4e00-bf88-b1cb8fdbbbf7
null
Validated
2023-10-04 15:19:51.887452
{ "text_length": 1336 }
0new_dataset
TITLE: Evaluating Software User Feedback Classifiers on Unseen Apps, Datasets, and Metadata ABSTRACT: Listening to user's requirements is crucial to building and maintaining high quality software. Online software user feedback has been shown to contain large amounts of information useful to requirements engineering (RE). Previous studies have created machine learning classifiers for parsing this feedback for development insight. While these classifiers report generally good performance when evaluated on a test set, questions remain as to how well they extend to unseen data in various forms. This study evaluates machine learning classifiers performance on feedback for two common classification tasks (classifying bug reports and feature requests). Using seven datasets from prior research studies, we investigate the performance of classifiers when evaluated on feedback from different apps than those contained in the training set and when evaluated on completely different datasets (coming from different feedback platforms and/or labelled by different researchers). We also measure the difference in performance of using platform-specific metadata as a feature in classification. We demonstrate that classification performance is similar on feedback from unseen apps compared to seen apps in the majority of cases tested. However, the classifiers do not perform well on unseen datasets. We show that multi-dataset training or zero shot classification approaches can somewhat mitigate this performance decrease. Finally, we find that using metadata as features in classifying bug reports and feature requests does not lead to a statistically significant improvement in the majority of datasets tested. We discuss the implications of these results on developing user feedback classification models to analyse and extract software requirements.
{ "abstract": "Listening to user's requirements is crucial to building and maintaining high\nquality software. Online software user feedback has been shown to contain large\namounts of information useful to requirements engineering (RE). Previous\nstudies have created machine learning classifiers for parsing this feedback for\ndevelopment insight. While these classifiers report generally good performance\nwhen evaluated on a test set, questions remain as to how well they extend to\nunseen data in various forms.\n This study evaluates machine learning classifiers performance on feedback for\ntwo common classification tasks (classifying bug reports and feature requests).\nUsing seven datasets from prior research studies, we investigate the\nperformance of classifiers when evaluated on feedback from different apps than\nthose contained in the training set and when evaluated on completely different\ndatasets (coming from different feedback platforms and/or labelled by different\nresearchers). We also measure the difference in performance of using\nplatform-specific metadata as a feature in classification.\n We demonstrate that classification performance is similar on feedback from\nunseen apps compared to seen apps in the majority of cases tested. However, the\nclassifiers do not perform well on unseen datasets. We show that multi-dataset\ntraining or zero shot classification approaches can somewhat mitigate this\nperformance decrease. Finally, we find that using metadata as features in\nclassifying bug reports and feature requests does not lead to a statistically\nsignificant improvement in the majority of datasets tested. We discuss the\nimplications of these results on developing user feedback classification models\nto analyse and extract software requirements.", "title": "Evaluating Software User Feedback Classifiers on Unseen Apps, Datasets, and Metadata", "url": "http://arxiv.org/abs/2112.13497v1" }
null
null
no_new_dataset
admin
null
false
null
28f872b2-7a74-4550-a13c-257b6cde0dbf
null
Validated
2023-10-04 15:19:51.889094
{ "text_length": 1873 }
1no_new_dataset
TITLE: HAPI: A Large-scale Longitudinal Dataset of Commercial ML API Predictions ABSTRACT: Commercial ML APIs offered by providers such as Google, Amazon and Microsoft have dramatically simplified ML adoption in many applications. Numerous companies and academics pay to use ML APIs for tasks such as object detection, OCR and sentiment analysis. Different ML APIs tackling the same task can have very heterogeneous performance. Moreover, the ML models underlying the APIs also evolve over time. As ML APIs rapidly become a valuable marketplace and a widespread way to consume machine learning, it is critical to systematically study and compare different APIs with each other and to characterize how APIs change over time. However, this topic is currently underexplored due to the lack of data. In this paper, we present HAPI (History of APIs), a longitudinal dataset of 1,761,417 instances of commercial ML API applications (involving APIs from Amazon, Google, IBM, Microsoft and other providers) across diverse tasks including image tagging, speech recognition and text mining from 2020 to 2022. Each instance consists of a query input for an API (e.g., an image or text) along with the API's output prediction/annotation and confidence scores. HAPI is the first large-scale dataset of ML API usages and is a unique resource for studying ML-as-a-service (MLaaS). As examples of the types of analyses that HAPI enables, we show that ML APIs' performance change substantially over time--several APIs' accuracies dropped on specific benchmark datasets. Even when the API's aggregate performance stays steady, its error modes can shift across different subtypes of data between 2020 and 2022. Such changes can substantially impact the entire analytics pipelines that use some ML API as a component. We further use HAPI to study commercial APIs' performance disparities across demographic subgroups over time. HAPI can stimulate more research in the growing field of MLaaS.
{ "abstract": "Commercial ML APIs offered by providers such as Google, Amazon and Microsoft\nhave dramatically simplified ML adoption in many applications. Numerous\ncompanies and academics pay to use ML APIs for tasks such as object detection,\nOCR and sentiment analysis. Different ML APIs tackling the same task can have\nvery heterogeneous performance. Moreover, the ML models underlying the APIs\nalso evolve over time. As ML APIs rapidly become a valuable marketplace and a\nwidespread way to consume machine learning, it is critical to systematically\nstudy and compare different APIs with each other and to characterize how APIs\nchange over time. However, this topic is currently underexplored due to the\nlack of data. In this paper, we present HAPI (History of APIs), a longitudinal\ndataset of 1,761,417 instances of commercial ML API applications (involving\nAPIs from Amazon, Google, IBM, Microsoft and other providers) across diverse\ntasks including image tagging, speech recognition and text mining from 2020 to\n2022. Each instance consists of a query input for an API (e.g., an image or\ntext) along with the API's output prediction/annotation and confidence scores.\nHAPI is the first large-scale dataset of ML API usages and is a unique resource\nfor studying ML-as-a-service (MLaaS). As examples of the types of analyses that\nHAPI enables, we show that ML APIs' performance change substantially over\ntime--several APIs' accuracies dropped on specific benchmark datasets. Even\nwhen the API's aggregate performance stays steady, its error modes can shift\nacross different subtypes of data between 2020 and 2022. Such changes can\nsubstantially impact the entire analytics pipelines that use some ML API as a\ncomponent. We further use HAPI to study commercial APIs' performance\ndisparities across demographic subgroups over time. HAPI can stimulate more\nresearch in the growing field of MLaaS.", "title": "HAPI: A Large-scale Longitudinal Dataset of Commercial ML API Predictions", "url": "http://arxiv.org/abs/2209.08443v1" }
null
null
new_dataset
admin
null
false
null
dbb8c2fe-bb43-4e45-9fd7-15a622ece478
null
Validated
2023-10-04 15:19:51.883949
{ "text_length": 1988 }
0new_dataset
TITLE: I Wish I Would Have Loved This One, But I Didn't -- A Multilingual Dataset for Counterfactual Detection in Product Reviews ABSTRACT: Counterfactual statements describe events that did not or cannot take place. We consider the problem of counterfactual detection (CFD) in product reviews. For this purpose, we annotate a multilingual CFD dataset from Amazon product reviews covering counterfactual statements written in English, German, and Japanese languages. The dataset is unique as it contains counterfactuals in multiple languages, covers a new application area of e-commerce reviews, and provides high quality professional annotations. We train CFD models using different text representation methods and classifiers. We find that these models are robust against the selectional biases introduced due to cue phrase-based sentence selection. Moreover, our CFD dataset is compatible with prior datasets and can be merged to learn accurate CFD models. Applying machine translation on English counterfactual examples to create multilingual data performs poorly, demonstrating the language-specificity of this problem, which has been ignored so far.
{ "abstract": "Counterfactual statements describe events that did not or cannot take place.\nWe consider the problem of counterfactual detection (CFD) in product reviews.\nFor this purpose, we annotate a multilingual CFD dataset from Amazon product\nreviews covering counterfactual statements written in English, German, and\nJapanese languages. The dataset is unique as it contains counterfactuals in\nmultiple languages, covers a new application area of e-commerce reviews, and\nprovides high quality professional annotations. We train CFD models using\ndifferent text representation methods and classifiers. We find that these\nmodels are robust against the selectional biases introduced due to cue\nphrase-based sentence selection. Moreover, our CFD dataset is compatible with\nprior datasets and can be merged to learn accurate CFD models. Applying machine\ntranslation on English counterfactual examples to create multilingual data\nperforms poorly, demonstrating the language-specificity of this problem, which\nhas been ignored so far.", "title": "I Wish I Would Have Loved This One, But I Didn't -- A Multilingual Dataset for Counterfactual Detection in Product Reviews", "url": "http://arxiv.org/abs/2104.06893v2" }
null
null
new_dataset
admin
null
false
null
2a4bc531-8aab-43ce-b8b8-58e9ed751a5b
null
Validated
2023-10-04 15:19:51.895051
{ "text_length": 1172 }
0new_dataset
TITLE: NPU-BOLT: A Dataset for Bolt Object Detection in Natural Scene Images ABSTRACT: Bolt joints are very common and important in engineering structures. Due to extreme service environment and load factors, bolts often get loose or even disengaged. To real-time or timely detect the loosed or disengaged bolts is an urgent need in practical engineering, which is critical to keep structural safety and service life. In recent years, many bolt loosening detection methods using deep learning and machine learning techniques have been proposed and are attracting more and more attention. However, most of these studies use bolt images captured in laboratory for deep leaning model training. The images are obtained in a well-controlled light, distance, and view angle conditions. Also, the bolted structures are well designed experimental structures with brand new bolts and the bolts are exposed without any shelter nearby. It is noted that in practical engineering, the above well controlled lab conditions are not easy realized and the real bolt images often have blur edges, oblique perspective, partial occlusion and indistinguishable colors etc., which make the trained models obtained in laboratory conditions loss their accuracy or fails. Therefore, the aim of this study is to develop a dataset named NPU-BOLT for bolt object detection in natural scene images and open it to researchers for public use and further development. In the first version of the dataset, it contains 337 samples of bolt joints images mainly in the natural environment, with image data sizes ranging from 400*400 to 6000*4000, totaling approximately 1275 bolt targets. The bolt targets are annotated into four categories named blur bolt, bolt head, bolt nut and bolt side. The dataset is tested with advanced object detection models including yolov5, Faster-RCNN and CenterNet. The effectiveness of the dataset is validated.
{ "abstract": "Bolt joints are very common and important in engineering structures. Due to\nextreme service environment and load factors, bolts often get loose or even\ndisengaged. To real-time or timely detect the loosed or disengaged bolts is an\nurgent need in practical engineering, which is critical to keep structural\nsafety and service life. In recent years, many bolt loosening detection methods\nusing deep learning and machine learning techniques have been proposed and are\nattracting more and more attention. However, most of these studies use bolt\nimages captured in laboratory for deep leaning model training. The images are\nobtained in a well-controlled light, distance, and view angle conditions. Also,\nthe bolted structures are well designed experimental structures with brand new\nbolts and the bolts are exposed without any shelter nearby. It is noted that in\npractical engineering, the above well controlled lab conditions are not easy\nrealized and the real bolt images often have blur edges, oblique perspective,\npartial occlusion and indistinguishable colors etc., which make the trained\nmodels obtained in laboratory conditions loss their accuracy or fails.\nTherefore, the aim of this study is to develop a dataset named NPU-BOLT for\nbolt object detection in natural scene images and open it to researchers for\npublic use and further development. In the first version of the dataset, it\ncontains 337 samples of bolt joints images mainly in the natural environment,\nwith image data sizes ranging from 400*400 to 6000*4000, totaling approximately\n1275 bolt targets. The bolt targets are annotated into four categories named\nblur bolt, bolt head, bolt nut and bolt side. The dataset is tested with\nadvanced object detection models including yolov5, Faster-RCNN and CenterNet.\nThe effectiveness of the dataset is validated.", "title": "NPU-BOLT: A Dataset for Bolt Object Detection in Natural Scene Images", "url": "http://arxiv.org/abs/2205.11191v2" }
null
null
new_dataset
admin
null
false
null
963660df-1a9e-408b-a532-c6bc631dbfbd
null
Validated
2023-10-04 15:19:51.886302
{ "text_length": 1925 }
0new_dataset
TITLE: Local earthquakes detection: A benchmark dataset of 3-component seismograms built on a global scale ABSTRACT: Machine learning is becoming increasingly important in scientific and technological progress, due to its ability to create models that describe complex data and generalize well. The wealth of publicly-available seismic data nowadays requires automated, fast, and reliable tools to carry out a multitude of tasks, such as the detection of small, local earthquakes in areas characterized by sparsity of receivers. A similar application of machine learning, however, should be built on a large amount of labeled seismograms, which is neither immediate to obtain nor to compile. In this study we present a large dataset of seismograms recorded along the vertical, north, and east components of 1487 broad-band or very broad-band receivers distributed worldwide; this includes 629,095 3-component seismograms generated by 304,878 local earthquakes and labeled as EQ, and 615,847 ones labeled as noise (AN). Application of machine learning to this dataset shows that a simple Convolutional Neural Network of 67,939 parameters allows discriminating between earthquakes and noise single-station recordings, even if applied in regions not represented in the training set. Achieving an accuracy of 96.7, 95.3, and 93.2% on training, validation, and test set, respectively, we prove that the large variety of geological and tectonic settings covered by our data supports the generalization capabilities of the algorithm, and makes it applicable to real-time detection of local events. We make the database publicly available, intending to provide the seismological and broader scientific community with a benchmark for time-series to be used as a testing ground in signal processing.
{ "abstract": "Machine learning is becoming increasingly important in scientific and\ntechnological progress, due to its ability to create models that describe\ncomplex data and generalize well. The wealth of publicly-available seismic data\nnowadays requires automated, fast, and reliable tools to carry out a multitude\nof tasks, such as the detection of small, local earthquakes in areas\ncharacterized by sparsity of receivers. A similar application of machine\nlearning, however, should be built on a large amount of labeled seismograms,\nwhich is neither immediate to obtain nor to compile. In this study we present a\nlarge dataset of seismograms recorded along the vertical, north, and east\ncomponents of 1487 broad-band or very broad-band receivers distributed\nworldwide; this includes 629,095 3-component seismograms generated by 304,878\nlocal earthquakes and labeled as EQ, and 615,847 ones labeled as noise (AN).\nApplication of machine learning to this dataset shows that a simple\nConvolutional Neural Network of 67,939 parameters allows discriminating between\nearthquakes and noise single-station recordings, even if applied in regions not\nrepresented in the training set. Achieving an accuracy of 96.7, 95.3, and 93.2%\non training, validation, and test set, respectively, we prove that the large\nvariety of geological and tectonic settings covered by our data supports the\ngeneralization capabilities of the algorithm, and makes it applicable to\nreal-time detection of local events. We make the database publicly available,\nintending to provide the seismological and broader scientific community with a\nbenchmark for time-series to be used as a testing ground in signal processing.", "title": "Local earthquakes detection: A benchmark dataset of 3-component seismograms built on a global scale", "url": "http://arxiv.org/abs/2008.02903v1" }
null
null
new_dataset
admin
null
false
null
2f43b9d0-cd26-475f-a405-927384918acb
null
Validated
2023-10-04 15:19:51.898874
{ "text_length": 1806 }
0new_dataset
TITLE: Private Dataset Generation Using Privacy Preserving Collaborative Learning ABSTRACT: With increasing usage of deep learning algorithms in many application, new research questions related to privacy and adversarial attacks are emerging. However, the deep learning algorithm improvement needs more and more data to be shared within research community. Methodologies like federated learning, differential privacy, additive secret sharing provides a way to train machine learning models on edge without moving the data from the edge. However, it is very computationally intensive and prone to adversarial attacks. Therefore, this work introduces a privacy preserving FedCollabNN framework for training machine learning models at edge, which is computationally efficient and robust against adversarial attacks. The simulation results using MNIST dataset indicates the effectiveness of the framework.
{ "abstract": "With increasing usage of deep learning algorithms in many application, new\nresearch questions related to privacy and adversarial attacks are emerging.\nHowever, the deep learning algorithm improvement needs more and more data to be\nshared within research community. Methodologies like federated learning,\ndifferential privacy, additive secret sharing provides a way to train machine\nlearning models on edge without moving the data from the edge. However, it is\nvery computationally intensive and prone to adversarial attacks. Therefore,\nthis work introduces a privacy preserving FedCollabNN framework for training\nmachine learning models at edge, which is computationally efficient and robust\nagainst adversarial attacks. The simulation results using MNIST dataset\nindicates the effectiveness of the framework.", "title": "Private Dataset Generation Using Privacy Preserving Collaborative Learning", "url": "http://arxiv.org/abs/2004.13598v1" }
null
null
no_new_dataset
admin
null
false
null
456ca6aa-40ba-4b21-b1a7-f279ef3e4218
null
Validated
2023-10-04 15:19:51.900470
{ "text_length": 918 }
1no_new_dataset
TITLE: Mixing Deep Learning and Multiple Criteria Optimization: An Application to Distributed Learning with Multiple Datasets ABSTRACT: The training phase is the most important stage during the machine learning process. In the case of labeled data and supervised learning, machine training consists in minimizing the loss function subject to different constraints. In an abstract setting, it can be formulated as a multiple criteria optimization model in which each criterion measures the distance between the output associated with a specific input and its label. Therefore, the fitting term is a vector function and its minimization is intended in the Pareto sense. We provide stability results of the efficient solutions with respect to perturbations of input and output data. We then extend the same approach to the case of learning with multiple datasets. The multiple dataset environment is relevant when reducing the bias due to the choice of a specific training set. We propose a scalarization approach to implement this model and numerical experiments in digit classification using MNIST data.
{ "abstract": "The training phase is the most important stage during the machine learning\nprocess. In the case of labeled data and supervised learning, machine training\nconsists in minimizing the loss function subject to different constraints. In\nan abstract setting, it can be formulated as a multiple criteria optimization\nmodel in which each criterion measures the distance between the output\nassociated with a specific input and its label. Therefore, the fitting term is\na vector function and its minimization is intended in the Pareto sense. We\nprovide stability results of the efficient solutions with respect to\nperturbations of input and output data. We then extend the same approach to the\ncase of learning with multiple datasets. The multiple dataset environment is\nrelevant when reducing the bias due to the choice of a specific training set.\nWe propose a scalarization approach to implement this model and numerical\nexperiments in digit classification using MNIST data.", "title": "Mixing Deep Learning and Multiple Criteria Optimization: An Application to Distributed Learning with Multiple Datasets", "url": "http://arxiv.org/abs/2112.01358v1" }
null
null
no_new_dataset
admin
null
false
null
2cbfc423-d6a4-4b0d-942d-9564aa569993
null
Validated
2023-10-04 15:19:51.889443
{ "text_length": 1119 }
1no_new_dataset
TITLE: VISA: An Ambiguous Subtitles Dataset for Visual Scene-Aware Machine Translation ABSTRACT: Existing multimodal machine translation (MMT) datasets consist of images and video captions or general subtitles, which rarely contain linguistic ambiguity, making visual information not so effective to generate appropriate translations. We introduce VISA, a new dataset that consists of 40k Japanese-English parallel sentence pairs and corresponding video clips with the following key features: (1) the parallel sentences are subtitles from movies and TV episodes; (2) the source subtitles are ambiguous, which means they have multiple possible translations with different meanings; (3) we divide the dataset into Polysemy and Omission according to the cause of ambiguity. We show that VISA is challenging for the latest MMT system, and we hope that the dataset can facilitate MMT research. The VISA dataset is available at: https://github.com/ku-nlp/VISA.
{ "abstract": "Existing multimodal machine translation (MMT) datasets consist of images and\nvideo captions or general subtitles, which rarely contain linguistic ambiguity,\nmaking visual information not so effective to generate appropriate\ntranslations. We introduce VISA, a new dataset that consists of 40k\nJapanese-English parallel sentence pairs and corresponding video clips with the\nfollowing key features: (1) the parallel sentences are subtitles from movies\nand TV episodes; (2) the source subtitles are ambiguous, which means they have\nmultiple possible translations with different meanings; (3) we divide the\ndataset into Polysemy and Omission according to the cause of ambiguity. We show\nthat VISA is challenging for the latest MMT system, and we hope that the\ndataset can facilitate MMT research. The VISA dataset is available at:\nhttps://github.com/ku-nlp/VISA.", "title": "VISA: An Ambiguous Subtitles Dataset for Visual Scene-Aware Machine Translation", "url": "http://arxiv.org/abs/2201.08054v3" }
null
null
new_dataset
admin
null
false
null
77dc69c7-ec2f-49f1-90a7-46005b1e2e56
null
Validated
2023-10-04 15:19:51.888749
{ "text_length": 971 }
0new_dataset
TITLE: XNLI 2.0: Improving XNLI dataset and performance on Cross Lingual Understanding (XLU) ABSTRACT: Natural Language Processing systems are heavily dependent on the availability of annotated data to train practical models. Primarily, models are trained on English datasets. In recent times, significant advances have been made in multilingual understanding due to the steeply increasing necessity of working in different languages. One of the points that stands out is that since there are now so many pre-trained multilingual models, we can utilize them for cross-lingual understanding tasks. Using cross-lingual understanding and Natural Language Inference, it is possible to train models whose applications extend beyond the training language. We can leverage the power of machine translation to skip the tiresome part of translating datasets from one language to another. In this work, we focus on improving the original XNLI dataset by re-translating the MNLI dataset in all of the 14 different languages present in XNLI, including the test and dev sets of XNLI using Google Translate. We also perform experiments by training models in all 15 languages and analyzing their performance on the task of natural language inference. We then expand our boundary to investigate if we could improve performance in low-resource languages such as Swahili and Urdu by training models in languages other than English.
{ "abstract": "Natural Language Processing systems are heavily dependent on the availability\nof annotated data to train practical models. Primarily, models are trained on\nEnglish datasets. In recent times, significant advances have been made in\nmultilingual understanding due to the steeply increasing necessity of working\nin different languages. One of the points that stands out is that since there\nare now so many pre-trained multilingual models, we can utilize them for\ncross-lingual understanding tasks. Using cross-lingual understanding and\nNatural Language Inference, it is possible to train models whose applications\nextend beyond the training language. We can leverage the power of machine\ntranslation to skip the tiresome part of translating datasets from one language\nto another. In this work, we focus on improving the original XNLI dataset by\nre-translating the MNLI dataset in all of the 14 different languages present in\nXNLI, including the test and dev sets of XNLI using Google Translate. We also\nperform experiments by training models in all 15 languages and analyzing their\nperformance on the task of natural language inference. We then expand our\nboundary to investigate if we could improve performance in low-resource\nlanguages such as Swahili and Urdu by training models in languages other than\nEnglish.", "title": "XNLI 2.0: Improving XNLI dataset and performance on Cross Lingual Understanding (XLU)", "url": "http://arxiv.org/abs/2301.06527v1" }
null
null
new_dataset
admin
null
false
null
5d7c3099-3b08-404f-aa00-740731fa9df4
null
Validated
2023-10-04 15:19:51.881485
{ "text_length": 1430 }
0new_dataset
TITLE: Evaluating a Synthetic Image Dataset Generated with Stable Diffusion ABSTRACT: We generate synthetic images with the "Stable Diffusion" image generation model using the Wordnet taxonomy and the definitions of concepts it contains. This synthetic image database can be used as training data for data augmentation in machine learning applications, and it is used to investigate the capabilities of the Stable Diffusion model. Analyses show that Stable Diffusion can produce correct images for a large number of concepts, but also a large variety of different representations. The results show differences depending on the test concepts considered and problems with very specific concepts. These evaluations were performed using a vision transformer model for image classification.
{ "abstract": "We generate synthetic images with the \"Stable Diffusion\" image generation\nmodel using the Wordnet taxonomy and the definitions of concepts it contains.\nThis synthetic image database can be used as training data for data\naugmentation in machine learning applications, and it is used to investigate\nthe capabilities of the Stable Diffusion model.\n Analyses show that Stable Diffusion can produce correct images for a large\nnumber of concepts, but also a large variety of different representations. The\nresults show differences depending on the test concepts considered and problems\nwith very specific concepts. These evaluations were performed using a vision\ntransformer model for image classification.", "title": "Evaluating a Synthetic Image Dataset Generated with Stable Diffusion", "url": "http://arxiv.org/abs/2211.01777v2" }
null
null
new_dataset
admin
null
false
null
d51559a2-cac7-49a1-b6e1-bf7fae787356
null
Validated
2023-10-04 15:19:51.883161
{ "text_length": 804 }
0new_dataset
TITLE: Multilingual Argument Mining: Datasets and Analysis ABSTRACT: The growing interest in argument mining and computational argumentation brings with it a plethora of Natural Language Understanding (NLU) tasks and corresponding datasets. However, as with many other NLU tasks, the dominant language is English, with resources in other languages being few and far between. In this work, we explore the potential of transfer learning using the multilingual BERT model to address argument mining tasks in non-English languages, based on English datasets and the use of machine translation. We show that such methods are well suited for classifying the stance of arguments and detecting evidence, but less so for assessing the quality of arguments, presumably because quality is harder to preserve under translation. In addition, focusing on the translate-train approach, we show how the choice of languages for translation, and the relations among them, affect the accuracy of the resultant model. Finally, to facilitate evaluation of transfer learning on argument mining tasks, we provide a human-generated dataset with more than 10k arguments in multiple languages, as well as machine translation of the English datasets.
{ "abstract": "The growing interest in argument mining and computational argumentation\nbrings with it a plethora of Natural Language Understanding (NLU) tasks and\ncorresponding datasets. However, as with many other NLU tasks, the dominant\nlanguage is English, with resources in other languages being few and far\nbetween. In this work, we explore the potential of transfer learning using the\nmultilingual BERT model to address argument mining tasks in non-English\nlanguages, based on English datasets and the use of machine translation. We\nshow that such methods are well suited for classifying the stance of arguments\nand detecting evidence, but less so for assessing the quality of arguments,\npresumably because quality is harder to preserve under translation. In\naddition, focusing on the translate-train approach, we show how the choice of\nlanguages for translation, and the relations among them, affect the accuracy of\nthe resultant model. Finally, to facilitate evaluation of transfer learning on\nargument mining tasks, we provide a human-generated dataset with more than 10k\narguments in multiple languages, as well as machine translation of the English\ndatasets.", "title": "Multilingual Argument Mining: Datasets and Analysis", "url": "http://arxiv.org/abs/2010.06432v1" }
null
null
no_new_dataset
admin
null
false
null
0d734fad-f261-45b3-9025-790b907bbf20
null
Validated
2023-10-04 15:19:51.897691
{ "text_length": 1240 }
1no_new_dataset
TITLE: Network Report: A Structured Description for Network Datasets ABSTRACT: The rapid development of network science and technologies depends on shareable datasets. Currently, there is no standard practice for reporting and sharing network datasets. Some network dataset providers only share links, while others provide some contexts or basic statistics. As a result, critical information may be unintentionally dropped, and network dataset consumers may misunderstand or overlook critical aspects. Inappropriately using a network dataset can lead to severe consequences (e.g., discrimination) especially when machine learning models on networks are deployed in high-stake domains. Challenges arise as networks are often used across different domains (e.g., network science, physics, etc) and have complex structures. To facilitate the communication between network dataset providers and consumers, we propose network report. A network report is a structured description that summarizes and contextualizes a network dataset. Network report extends the idea of dataset reports (e.g., Datasheets for Datasets) from prior work with network-specific descriptions of the non-i.i.d. nature, demographic information, network characteristics, etc. We hope network reports encourage transparency and accountability in network research and development across different fields.
{ "abstract": "The rapid development of network science and technologies depends on\nshareable datasets. Currently, there is no standard practice for reporting and\nsharing network datasets. Some network dataset providers only share links,\nwhile others provide some contexts or basic statistics. As a result, critical\ninformation may be unintentionally dropped, and network dataset consumers may\nmisunderstand or overlook critical aspects. Inappropriately using a network\ndataset can lead to severe consequences (e.g., discrimination) especially when\nmachine learning models on networks are deployed in high-stake domains.\nChallenges arise as networks are often used across different domains (e.g.,\nnetwork science, physics, etc) and have complex structures. To facilitate the\ncommunication between network dataset providers and consumers, we propose\nnetwork report. A network report is a structured description that summarizes\nand contextualizes a network dataset. Network report extends the idea of\ndataset reports (e.g., Datasheets for Datasets) from prior work with\nnetwork-specific descriptions of the non-i.i.d. nature, demographic\ninformation, network characteristics, etc. We hope network reports encourage\ntransparency and accountability in network research and development across\ndifferent fields.", "title": "Network Report: A Structured Description for Network Datasets", "url": "http://arxiv.org/abs/2206.03635v1" }
null
null
no_new_dataset
admin
null
false
null
b1d23b1e-601e-4b89-9a19-1e985e3cfec4
null
Validated
2023-10-04 15:19:51.885914
{ "text_length": 1386 }
1no_new_dataset
TITLE: FENDA-FL: Personalized Federated Learning on Heterogeneous Clinical Datasets ABSTRACT: Federated learning (FL) is increasingly being recognized as a key approach to overcoming the data silos that so frequently obstruct the training and deployment of machine-learning models in clinical settings. This work contributes to a growing body of FL research specifically focused on clinical applications along three important directions. First, an extension of the FENDA method (Kim et al., 2016) to the FL setting is proposed. Experiments conducted on the FLamby benchmarks (du Terrail et al., 2022a) and GEMINI datasets (Verma et al., 2017) show that the approach is robust to heterogeneous clinical data and often outperforms existing global and personalized FL techniques. Further, the experimental results represent substantive improvements over the original FLamby benchmarks and expand such benchmarks to include evaluation of personalized FL methods. Finally, we advocate for a comprehensive checkpointing and evaluation framework for FL to better reflect practical settings and provide multiple baselines for comparison.
{ "abstract": "Federated learning (FL) is increasingly being recognized as a key approach to\novercoming the data silos that so frequently obstruct the training and\ndeployment of machine-learning models in clinical settings. This work\ncontributes to a growing body of FL research specifically focused on clinical\napplications along three important directions. First, an extension of the FENDA\nmethod (Kim et al., 2016) to the FL setting is proposed. Experiments conducted\non the FLamby benchmarks (du Terrail et al., 2022a) and GEMINI datasets (Verma\net al., 2017) show that the approach is robust to heterogeneous clinical data\nand often outperforms existing global and personalized FL techniques. Further,\nthe experimental results represent substantive improvements over the original\nFLamby benchmarks and expand such benchmarks to include evaluation of\npersonalized FL methods. Finally, we advocate for a comprehensive checkpointing\nand evaluation framework for FL to better reflect practical settings and\nprovide multiple baselines for comparison.", "title": "FENDA-FL: Personalized Federated Learning on Heterogeneous Clinical Datasets", "url": "http://arxiv.org/abs/2309.16825v1" }
null
null
no_new_dataset
admin
null
false
null
00f8bd6d-153c-435c-8fa5-64cd5db97395
null
Validated
2023-10-04 15:19:51.863174
{ "text_length": 1146 }
1no_new_dataset
TITLE: Improving Multilayer-Perceptron(MLP)-based Network Anomaly Detection with Birch Clustering on CICIDS-2017 Dataset ABSTRACT: Machine learning algorithms have been widely used in intrusion detection systems, including Multi-layer Perceptron (MLP). In this study, we proposed a two-stage model that combines the Birch clustering algorithm and MLP classifier to improve the performance of network anomaly multi-classification. In our proposed method, we first apply Birch or Kmeans as an unsupervised clustering algorithm to the CICIDS-2017 dataset to pre-group the data. The generated pseudo-label is then added as an additional feature to the training of the MLP-based classifier. The experimental results show that using Birch and K-Means clustering for data pre-grouping can improve intrusion detection system performance. Our method can achieve 99.73% accuracy in multi-classification using Birch clustering, which is better than similar researches using a stand-alone MLP model.
{ "abstract": "Machine learning algorithms have been widely used in intrusion detection\nsystems, including Multi-layer Perceptron (MLP). In this study, we proposed a\ntwo-stage model that combines the Birch clustering algorithm and MLP classifier\nto improve the performance of network anomaly multi-classification. In our\nproposed method, we first apply Birch or Kmeans as an unsupervised clustering\nalgorithm to the CICIDS-2017 dataset to pre-group the data. The generated\npseudo-label is then added as an additional feature to the training of the\nMLP-based classifier. The experimental results show that using Birch and\nK-Means clustering for data pre-grouping can improve intrusion detection system\nperformance. Our method can achieve 99.73% accuracy in multi-classification\nusing Birch clustering, which is better than similar researches using a\nstand-alone MLP model.", "title": "Improving Multilayer-Perceptron(MLP)-based Network Anomaly Detection with Birch Clustering on CICIDS-2017 Dataset", "url": "http://arxiv.org/abs/2208.09711v2" }
null
null
no_new_dataset
admin
null
false
null
60902729-ac8e-43f0-b661-6bdcc36c3f92
null
Validated
2023-10-04 15:19:51.884697
{ "text_length": 1004 }
1no_new_dataset
TITLE: CrowdWorkSheets: Accounting for Individual and Collective Identities Underlying Crowdsourced Dataset Annotation ABSTRACT: Human annotated data plays a crucial role in machine learning (ML) research and development. However, the ethical considerations around the processes and decisions that go into dataset annotation have not received nearly enough attention. In this paper, we survey an array of literature that provides insights into ethical considerations around crowdsourced dataset annotation. We synthesize these insights, and lay out the challenges in this space along two layers: (1) who the annotator is, and how the annotators' lived experiences can impact their annotations, and (2) the relationship between the annotators and the crowdsourcing platforms, and what that relationship affords them. Finally, we introduce a novel framework, CrowdWorkSheets, for dataset developers to facilitate transparent documentation of key decisions points at various stages of the data annotation pipeline: task formulation, selection of annotators, platform and infrastructure choices, dataset analysis and evaluation, and dataset release and maintenance.
{ "abstract": "Human annotated data plays a crucial role in machine learning (ML) research\nand development. However, the ethical considerations around the processes and\ndecisions that go into dataset annotation have not received nearly enough\nattention. In this paper, we survey an array of literature that provides\ninsights into ethical considerations around crowdsourced dataset annotation. We\nsynthesize these insights, and lay out the challenges in this space along two\nlayers: (1) who the annotator is, and how the annotators' lived experiences can\nimpact their annotations, and (2) the relationship between the annotators and\nthe crowdsourcing platforms, and what that relationship affords them. Finally,\nwe introduce a novel framework, CrowdWorkSheets, for dataset developers to\nfacilitate transparent documentation of key decisions points at various stages\nof the data annotation pipeline: task formulation, selection of annotators,\nplatform and infrastructure choices, dataset analysis and evaluation, and\ndataset release and maintenance.", "title": "CrowdWorkSheets: Accounting for Individual and Collective Identities Underlying Crowdsourced Dataset Annotation", "url": "http://arxiv.org/abs/2206.08931v1" }
null
null
no_new_dataset
admin
null
false
null
c3e8727e-7b35-4f6f-8561-a8f03d0211df
null
Validated
2023-10-04 15:19:51.885843
{ "text_length": 1178 }
1no_new_dataset
TITLE: Multimodal datasets: misogyny, pornography, and malignant stereotypes ABSTRACT: We have now entered the era of trillion parameter machine learning models trained on billion-sized datasets scraped from the internet. The rise of these gargantuan datasets has given rise to formidable bodies of critical work that has called for caution while generating these large datasets. These address concerns surrounding the dubious curation practices used to generate these datasets, the sordid quality of alt-text data available on the world wide web, the problematic content of the CommonCrawl dataset often used as a source for training large language models, and the entrenched biases in large-scale visio-linguistic models (such as OpenAI's CLIP model) trained on opaque datasets (WebImageText). In the backdrop of these specific calls of caution, we examine the recently released LAION-400M dataset, which is a CLIP-filtered dataset of Image-Alt-text pairs parsed from the Common-Crawl dataset. We found that the dataset contains, troublesome and explicit images and text pairs of rape, pornography, malign stereotypes, racist and ethnic slurs, and other extremely problematic content. We outline numerous implications, concerns and downstream harms regarding the current state of large scale datasets while raising open questions for various stakeholders including the AI community, regulators, policy makers and data subjects.
{ "abstract": "We have now entered the era of trillion parameter machine learning models\ntrained on billion-sized datasets scraped from the internet. The rise of these\ngargantuan datasets has given rise to formidable bodies of critical work that\nhas called for caution while generating these large datasets. These address\nconcerns surrounding the dubious curation practices used to generate these\ndatasets, the sordid quality of alt-text data available on the world wide web,\nthe problematic content of the CommonCrawl dataset often used as a source for\ntraining large language models, and the entrenched biases in large-scale\nvisio-linguistic models (such as OpenAI's CLIP model) trained on opaque\ndatasets (WebImageText). In the backdrop of these specific calls of caution, we\nexamine the recently released LAION-400M dataset, which is a CLIP-filtered\ndataset of Image-Alt-text pairs parsed from the Common-Crawl dataset. We found\nthat the dataset contains, troublesome and explicit images and text pairs of\nrape, pornography, malign stereotypes, racist and ethnic slurs, and other\nextremely problematic content. We outline numerous implications, concerns and\ndownstream harms regarding the current state of large scale datasets while\nraising open questions for various stakeholders including the AI community,\nregulators, policy makers and data subjects.", "title": "Multimodal datasets: misogyny, pornography, and malignant stereotypes", "url": "http://arxiv.org/abs/2110.01963v1" }
null
null
no_new_dataset
admin
null
false
null
49194018-becd-4eba-bb85-cf010c6adf43
null
Validated
2023-10-04 15:19:51.891535
{ "text_length": 1446 }
1no_new_dataset
TITLE: Beyond Static Datasets: A Deep Interaction Approach to LLM Evaluation ABSTRACT: Large Language Models (LLMs) have made progress in various real-world tasks, which stimulates requirements for the evaluation of LLMs. Existing LLM evaluation methods are mainly supervised signal-based which depends on static datasets and cannot evaluate the ability of LLMs in dynamic real-world scenarios where deep interaction widely exists. Other LLM evaluation methods are human-based which are costly and time-consuming and are incapable of large-scale evaluation of LLMs. To address the issues above, we propose a novel Deep Interaction-based LLM-evaluation framework. In our proposed framework, LLMs' performances in real-world domains can be evaluated from their deep interaction with other LLMs in elaborately designed evaluation tasks. Furthermore, our proposed framework is a general evaluation method that can be applied to a host of real-world tasks such as machine translation and code generation. We demonstrate the effectiveness of our proposed method through extensive experiments on four elaborately designed evaluation tasks.
{ "abstract": "Large Language Models (LLMs) have made progress in various real-world tasks,\nwhich stimulates requirements for the evaluation of LLMs. Existing LLM\nevaluation methods are mainly supervised signal-based which depends on static\ndatasets and cannot evaluate the ability of LLMs in dynamic real-world\nscenarios where deep interaction widely exists. Other LLM evaluation methods\nare human-based which are costly and time-consuming and are incapable of\nlarge-scale evaluation of LLMs. To address the issues above, we propose a novel\nDeep Interaction-based LLM-evaluation framework. In our proposed framework,\nLLMs' performances in real-world domains can be evaluated from their deep\ninteraction with other LLMs in elaborately designed evaluation tasks.\nFurthermore, our proposed framework is a general evaluation method that can be\napplied to a host of real-world tasks such as machine translation and code\ngeneration. We demonstrate the effectiveness of our proposed method through\nextensive experiments on four elaborately designed evaluation tasks.", "title": "Beyond Static Datasets: A Deep Interaction Approach to LLM Evaluation", "url": "http://arxiv.org/abs/2309.04369v1" }
null
null
no_new_dataset
admin
null
false
null
1d2efca1-4ebf-4aee-833b-c6ad768deeab
null
Validated
2023-10-04 15:19:51.863716
{ "text_length": 1149 }
1no_new_dataset
TITLE: Make Every Example Count: On Stability and Utility of Self-Influence for Learning from Noisy NLP Datasets ABSTRACT: Increasingly larger datasets have become a standard ingredient to advancing the state of the art in NLP. However, data quality might have already become the bottleneck to unlock further gains. Given the diversity and the sizes of modern datasets, standard data filtering is not straight-forward to apply, because of the multifacetedness of the harmful data and elusiveness of filtering rules that would generalize across multiple tasks. We study the fitness of task-agnostic self-influence scores of training examples for data cleaning, analyze their efficacy in capturing naturally occurring outliers, and investigate to what extent self-influence based data cleaning can improve downstream performance in machine translation, question answering and text classification, building up on recent approaches to self-influence calculation and automated curriculum learning.
{ "abstract": "Increasingly larger datasets have become a standard ingredient to advancing\nthe state of the art in NLP. However, data quality might have already become\nthe bottleneck to unlock further gains. Given the diversity and the sizes of\nmodern datasets, standard data filtering is not straight-forward to apply,\nbecause of the multifacetedness of the harmful data and elusiveness of\nfiltering rules that would generalize across multiple tasks. We study the\nfitness of task-agnostic self-influence scores of training examples for data\ncleaning, analyze their efficacy in capturing naturally occurring outliers, and\ninvestigate to what extent self-influence based data cleaning can improve\ndownstream performance in machine translation, question answering and text\nclassification, building up on recent approaches to self-influence calculation\nand automated curriculum learning.", "title": "Make Every Example Count: On Stability and Utility of Self-Influence for Learning from Noisy NLP Datasets", "url": "http://arxiv.org/abs/2302.13959v1" }
null
null
no_new_dataset
admin
null
false
null
9eae1cda-bd86-4771-a987-567949a6816c
null
Validated
2023-10-04 15:19:51.880913
{ "text_length": 1009 }
1no_new_dataset
TITLE: A Sentinel-2 multi-year, multi-country benchmark dataset for crop classification and segmentation with deep learning ABSTRACT: In this work we introduce Sen4AgriNet, a Sentinel-2 based time series multi country benchmark dataset, tailored for agricultural monitoring applications with Machine and Deep Learning. Sen4AgriNet dataset is annotated from farmer declarations collected via the Land Parcel Identification System (LPIS) for harmonizing country wide labels. These declarations have only recently been made available as open data, allowing for the first time the labeling of satellite imagery from ground truth data. We proceed to propose and standardise a new crop type taxonomy across Europe that address Common Agriculture Policy (CAP) needs, based on the Food and Agriculture Organization (FAO) Indicative Crop Classification scheme. Sen4AgriNet is the only multi-country, multi-year dataset that includes all spectral information. It is constructed to cover the period 2016-2020 for Catalonia and France, while it can be extended to include additional countries. Currently, it contains 42.5 million parcels, which makes it significantly larger than other available archives. We extract two sub-datasets to highlight its value for diverse Deep Learning applications; the Object Aggregated Dataset (OAD) and the Patches Assembled Dataset (PAD). OAD capitalizes zonal statistics of each parcel, thus creating a powerful label-to-features instance for classification algorithms. On the other hand, PAD structure generalizes the classification problem to parcel extraction and semantic segmentation and labeling. The PAD and OAD are examined under three different scenarios to showcase and model the effects of spatial and temporal variability across different years and different countries.
{ "abstract": "In this work we introduce Sen4AgriNet, a Sentinel-2 based time series multi\ncountry benchmark dataset, tailored for agricultural monitoring applications\nwith Machine and Deep Learning. Sen4AgriNet dataset is annotated from farmer\ndeclarations collected via the Land Parcel Identification System (LPIS) for\nharmonizing country wide labels. These declarations have only recently been\nmade available as open data, allowing for the first time the labeling of\nsatellite imagery from ground truth data. We proceed to propose and standardise\na new crop type taxonomy across Europe that address Common Agriculture Policy\n(CAP) needs, based on the Food and Agriculture Organization (FAO) Indicative\nCrop Classification scheme. Sen4AgriNet is the only multi-country, multi-year\ndataset that includes all spectral information. It is constructed to cover the\nperiod 2016-2020 for Catalonia and France, while it can be extended to include\nadditional countries. Currently, it contains 42.5 million parcels, which makes\nit significantly larger than other available archives. We extract two\nsub-datasets to highlight its value for diverse Deep Learning applications; the\nObject Aggregated Dataset (OAD) and the Patches Assembled Dataset (PAD). OAD\ncapitalizes zonal statistics of each parcel, thus creating a powerful\nlabel-to-features instance for classification algorithms. On the other hand,\nPAD structure generalizes the classification problem to parcel extraction and\nsemantic segmentation and labeling. The PAD and OAD are examined under three\ndifferent scenarios to showcase and model the effects of spatial and temporal\nvariability across different years and different countries.", "title": "A Sentinel-2 multi-year, multi-country benchmark dataset for crop classification and segmentation with deep learning", "url": "http://arxiv.org/abs/2204.00951v2" }
null
null
new_dataset
admin
null
false
null
ea698d75-902d-4963-9509-26a132f5cf45
null
Validated
2023-10-04 15:19:51.887262
{ "text_length": 1822 }
0new_dataset
TITLE: MiraBest: A Dataset of Morphologically Classified Radio Galaxies for Machine Learning ABSTRACT: The volume of data from current and future observatories has motivated the increased development and application of automated machine learning methodologies for astronomy. However, less attention has been given to the production of standardised datasets for assessing the performance of different machine learning algorithms within astronomy and astrophysics. Here we describe in detail the MiraBest dataset, a publicly available batched dataset of 1256 radio-loud AGN from NVSS and FIRST, filtered to $0.03 < z < 0.1$, manually labelled by Miraghaei and Best (2017) according to the Fanaroff-Riley morphological classification, created for machine learning applications and compatible for use with standard deep learning libraries. We outline the principles underlying the construction of the dataset, the sample selection and pre-processing methodology, dataset structure and composition, as well as a comparison of MiraBest to other datasets used in the literature. Existing applications that utilise the MiraBest dataset are reviewed, and an extended dataset of 2100 sources is created by cross-matching MiraBest with other catalogues of radio-loud AGN that have been used more widely in the literature for machine learning applications.
{ "abstract": "The volume of data from current and future observatories has motivated the\nincreased development and application of automated machine learning\nmethodologies for astronomy. However, less attention has been given to the\nproduction of standardised datasets for assessing the performance of different\nmachine learning algorithms within astronomy and astrophysics. Here we describe\nin detail the MiraBest dataset, a publicly available batched dataset of 1256\nradio-loud AGN from NVSS and FIRST, filtered to $0.03 < z < 0.1$, manually\nlabelled by Miraghaei and Best (2017) according to the Fanaroff-Riley\nmorphological classification, created for machine learning applications and\ncompatible for use with standard deep learning libraries. We outline the\nprinciples underlying the construction of the dataset, the sample selection and\npre-processing methodology, dataset structure and composition, as well as a\ncomparison of MiraBest to other datasets used in the literature. Existing\napplications that utilise the MiraBest dataset are reviewed, and an extended\ndataset of 2100 sources is created by cross-matching MiraBest with other\ncatalogues of radio-loud AGN that have been used more widely in the literature\nfor machine learning applications.", "title": "MiraBest: A Dataset of Morphologically Classified Radio Galaxies for Machine Learning", "url": "http://arxiv.org/abs/2305.11108v1" }
null
null
new_dataset
admin
null
false
null
4c443452-2456-4876-8fa9-04f2a564348f
null
Validated
2023-10-04 15:19:51.877135
{ "text_length": 1361 }
0new_dataset
TITLE: Towards emotion recognition for virtual environments: an evaluation of EEG features on benchmark dataset ABSTRACT: One of the challenges in virtual environments is the difficulty users have in interacting with these increasingly complex systems. Ultimately, endowing machines with the ability to perceive users emotions will enable a more intuitive and reliable interaction. Consequently, using the electroencephalogram as a bio-signal sensor, the affective state of a user can be modelled and subsequently utilised in order to achieve a system that can recognise and react to the user's emotions. This paper investigates features extracted from electroencephalogram signals for the purpose of affective state modelling based on Russell's Circumplex Model. Investigations are presented that aim to provide the foundation for future work in modelling user affect to enhance interaction experience in virtual environments. The DEAP dataset was used within this work, along with a Support Vector Machine and Random Forest, which yielded reasonable classification accuracies for Valence and Arousal using feature vectors based on statistical measurements and band power from the \'z, \b{eta}, \'z, and \'z\'z waves and High Order Crossing of the EEG signal.
{ "abstract": "One of the challenges in virtual environments is the difficulty users have in\ninteracting with these increasingly complex systems. Ultimately, endowing\nmachines with the ability to perceive users emotions will enable a more\nintuitive and reliable interaction. Consequently, using the\nelectroencephalogram as a bio-signal sensor, the affective state of a user can\nbe modelled and subsequently utilised in order to achieve a system that can\nrecognise and react to the user's emotions. This paper investigates features\nextracted from electroencephalogram signals for the purpose of affective state\nmodelling based on Russell's Circumplex Model. Investigations are presented\nthat aim to provide the foundation for future work in modelling user affect to\nenhance interaction experience in virtual environments. The DEAP dataset was\nused within this work, along with a Support Vector Machine and Random Forest,\nwhich yielded reasonable classification accuracies for Valence and Arousal\nusing feature vectors based on statistical measurements and band power from the\n\\'z, \\b{eta}, \\'z, and \\'z\\'z waves and High Order Crossing of the EEG signal.", "title": "Towards emotion recognition for virtual environments: an evaluation of EEG features on benchmark dataset", "url": "http://arxiv.org/abs/2210.13876v1" }
null
null
no_new_dataset
admin
null
false
null
e755446b-2f17-4138-8928-166ca1d0359a
null
Validated
2023-10-04 15:19:51.883302
{ "text_length": 1277 }
1no_new_dataset
TITLE: The R-U-A-Robot Dataset: Helping Avoid Chatbot Deception by Detecting User Questions About Human or Non-Human Identity ABSTRACT: Humans are increasingly interacting with machines through language, sometimes in contexts where the user may not know they are talking to a machine (like over the phone or a text chatbot). We aim to understand how system designers and researchers might allow their systems to confirm its non-human identity. We collect over 2,500 phrasings related to the intent of ``Are you a robot?". This is paired with over 2,500 adversarially selected utterances where only confirming the system is non-human would be insufficient or disfluent. We compare classifiers to recognize the intent and discuss the precision/recall and model complexity tradeoffs. Such classifiers could be integrated into dialog systems to avoid undesired deception. We then explore how both a generative research model (Blender) as well as two deployed systems (Amazon Alexa, Google Assistant) handle this intent, finding that systems often fail to confirm their non-human identity. Finally, we try to understand what a good response to the intent would be, and conduct a user study to compare the important aspects when responding to this intent.
{ "abstract": "Humans are increasingly interacting with machines through language, sometimes\nin contexts where the user may not know they are talking to a machine (like\nover the phone or a text chatbot). We aim to understand how system designers\nand researchers might allow their systems to confirm its non-human identity. We\ncollect over 2,500 phrasings related to the intent of ``Are you a robot?\". This\nis paired with over 2,500 adversarially selected utterances where only\nconfirming the system is non-human would be insufficient or disfluent. We\ncompare classifiers to recognize the intent and discuss the precision/recall\nand model complexity tradeoffs. Such classifiers could be integrated into\ndialog systems to avoid undesired deception. We then explore how both a\ngenerative research model (Blender) as well as two deployed systems (Amazon\nAlexa, Google Assistant) handle this intent, finding that systems often fail to\nconfirm their non-human identity. Finally, we try to understand what a good\nresponse to the intent would be, and conduct a user study to compare the\nimportant aspects when responding to this intent.", "title": "The R-U-A-Robot Dataset: Helping Avoid Chatbot Deception by Detecting User Questions About Human or Non-Human Identity", "url": "http://arxiv.org/abs/2106.02692v1" }
null
null
new_dataset
admin
null
false
null
1060f302-c825-48c4-a0fb-3448d567ca31
null
Validated
2023-10-04 15:19:51.894322
{ "text_length": 1266 }
0new_dataset
TITLE: QUAK: A Synthetic Quality Estimation Dataset for Korean-English Neural Machine Translation ABSTRACT: With the recent advance in neural machine translation demonstrating its importance, research on quality estimation (QE) has been steadily progressing. QE aims to automatically predict the quality of machine translation (MT) output without reference sentences. Despite its high utility in the real world, there remain several limitations concerning manual QE data creation: inevitably incurred non-trivial costs due to the need for translation experts, and issues with data scaling and language expansion. To tackle these limitations, we present QUAK, a Korean-English synthetic QE dataset generated in a fully automatic manner. This consists of three sub-QUAK datasets QUAK-M, QUAK-P, and QUAK-H, produced through three strategies that are relatively free from language constraints. Since each strategy requires no human effort, which facilitates scalability, we scale our data up to 1.58M for QUAK-P, H and 6.58M for QUAK-M. As an experiment, we quantitatively analyze word-level QE results in various ways while performing statistical analysis. Moreover, we show that datasets scaled in an efficient way also contribute to performance improvements by observing meaningful performance gains in QUAK-M, P when adding data up to 1.58M.
{ "abstract": "With the recent advance in neural machine translation demonstrating its\nimportance, research on quality estimation (QE) has been steadily progressing.\nQE aims to automatically predict the quality of machine translation (MT) output\nwithout reference sentences. Despite its high utility in the real world, there\nremain several limitations concerning manual QE data creation: inevitably\nincurred non-trivial costs due to the need for translation experts, and issues\nwith data scaling and language expansion. To tackle these limitations, we\npresent QUAK, a Korean-English synthetic QE dataset generated in a fully\nautomatic manner. This consists of three sub-QUAK datasets QUAK-M, QUAK-P, and\nQUAK-H, produced through three strategies that are relatively free from\nlanguage constraints. Since each strategy requires no human effort, which\nfacilitates scalability, we scale our data up to 1.58M for QUAK-P, H and 6.58M\nfor QUAK-M. As an experiment, we quantitatively analyze word-level QE results\nin various ways while performing statistical analysis. Moreover, we show that\ndatasets scaled in an efficient way also contribute to performance improvements\nby observing meaningful performance gains in QUAK-M, P when adding data up to\n1.58M.", "title": "QUAK: A Synthetic Quality Estimation Dataset for Korean-English Neural Machine Translation", "url": "http://arxiv.org/abs/2209.15285v2" }
null
null
new_dataset
admin
null
false
null
82a07db4-10d4-425b-9232-0a749b8a9a72
null
Validated
2023-10-04 15:19:51.883688
{ "text_length": 1359 }
0new_dataset
TITLE: BubbleML: A Multi-Physics Dataset and Benchmarks for Machine Learning ABSTRACT: In the field of phase change phenomena, the lack of accessible and diverse datasets suitable for machine learning (ML) training poses a significant challenge. Existing experimental datasets are often restricted, with limited availability and sparse ground truth data, impeding our understanding of this complex multiphysics phenomena. To bridge this gap, we present the BubbleML Dataset \footnote{\label{git_dataset}\url{https://github.com/HPCForge/BubbleML}} which leverages physics-driven simulations to provide accurate ground truth information for various boiling scenarios, encompassing nucleate pool boiling, flow boiling, and sub-cooled boiling. This extensive dataset covers a wide range of parameters, including varying gravity conditions, flow rates, sub-cooling levels, and wall superheat, comprising 79 simulations. BubbleML is validated against experimental observations and trends, establishing it as an invaluable resource for ML research. Furthermore, we showcase its potential to facilitate exploration of diverse downstream tasks by introducing two benchmarks: (a) optical flow analysis to capture bubble dynamics, and (b) operator networks for learning temperature dynamics. The BubbleML dataset and its benchmarks serve as a catalyst for advancements in ML-driven research on multiphysics phase change phenomena, enabling the development and comparison of state-of-the-art techniques and models.
{ "abstract": "In the field of phase change phenomena, the lack of accessible and diverse\ndatasets suitable for machine learning (ML) training poses a significant\nchallenge. Existing experimental datasets are often restricted, with limited\navailability and sparse ground truth data, impeding our understanding of this\ncomplex multiphysics phenomena. To bridge this gap, we present the BubbleML\nDataset\n\\footnote{\\label{git_dataset}\\url{https://github.com/HPCForge/BubbleML}} which\nleverages physics-driven simulations to provide accurate ground truth\ninformation for various boiling scenarios, encompassing nucleate pool boiling,\nflow boiling, and sub-cooled boiling. This extensive dataset covers a wide\nrange of parameters, including varying gravity conditions, flow rates,\nsub-cooling levels, and wall superheat, comprising 79 simulations. BubbleML is\nvalidated against experimental observations and trends, establishing it as an\ninvaluable resource for ML research. Furthermore, we showcase its potential to\nfacilitate exploration of diverse downstream tasks by introducing two\nbenchmarks: (a) optical flow analysis to capture bubble dynamics, and (b)\noperator networks for learning temperature dynamics. The BubbleML dataset and\nits benchmarks serve as a catalyst for advancements in ML-driven research on\nmultiphysics phase change phenomena, enabling the development and comparison of\nstate-of-the-art techniques and models.", "title": "BubbleML: A Multi-Physics Dataset and Benchmarks for Machine Learning", "url": "http://arxiv.org/abs/2307.14623v2" }
null
null
new_dataset
admin
null
false
null
7f232573-1677-43b9-b0a4-a9884f04ace9
null
Validated
2023-10-04 15:19:51.864669
{ "text_length": 1519 }
0new_dataset
TITLE: Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses ABSTRACT: The ongoing deployment of the fifth generation (5G) wireless networks constantly reveals limitations concerning its original concept as a key driver of Internet of Everything (IoE) applications. These 5G challenges are behind worldwide efforts to enable future networks, such as sixth generation (6G) networks, to efficiently support sophisticated applications ranging from autonomous driving capabilities to the Metaverse. Edge learning is a new and powerful approach to training models across distributed clients while protecting the privacy of their data. This approach is expected to be embedded within future network infrastructures, including 6G, to solve challenging problems such as resource management and behavior prediction. This survey article provides a holistic review of the most recent research focused on edge learning vulnerabilities and defenses for 6G-enabled IoT. We summarize the existing surveys on machine learning for 6G IoT security and machine learning-associated threats in three different learning modes: centralized, federated, and distributed. Then, we provide an overview of enabling emerging technologies for 6G IoT intelligence. Moreover, we provide a holistic survey of existing research on attacks against machine learning and classify threat models into eight categories, including backdoor attacks, adversarial examples, combined attacks, poisoning attacks, Sybil attacks, byzantine attacks, inference attacks, and dropping attacks. In addition, we provide a comprehensive and detailed taxonomy and a side-by-side comparison of the state-of-the-art defense methods against edge learning vulnerabilities. Finally, as new attacks and defense technologies are realized, new research and future overall prospects for 6G-enabled IoT are discussed.
{ "abstract": "The ongoing deployment of the fifth generation (5G) wireless networks\nconstantly reveals limitations concerning its original concept as a key driver\nof Internet of Everything (IoE) applications. These 5G challenges are behind\nworldwide efforts to enable future networks, such as sixth generation (6G)\nnetworks, to efficiently support sophisticated applications ranging from\nautonomous driving capabilities to the Metaverse. Edge learning is a new and\npowerful approach to training models across distributed clients while\nprotecting the privacy of their data. This approach is expected to be embedded\nwithin future network infrastructures, including 6G, to solve challenging\nproblems such as resource management and behavior prediction. This survey\narticle provides a holistic review of the most recent research focused on edge\nlearning vulnerabilities and defenses for 6G-enabled IoT. We summarize the\nexisting surveys on machine learning for 6G IoT security and machine\nlearning-associated threats in three different learning modes: centralized,\nfederated, and distributed. Then, we provide an overview of enabling emerging\ntechnologies for 6G IoT intelligence. Moreover, we provide a holistic survey of\nexisting research on attacks against machine learning and classify threat\nmodels into eight categories, including backdoor attacks, adversarial examples,\ncombined attacks, poisoning attacks, Sybil attacks, byzantine attacks,\ninference attacks, and dropping attacks. In addition, we provide a\ncomprehensive and detailed taxonomy and a side-by-side comparison of the\nstate-of-the-art defense methods against edge learning vulnerabilities.\nFinally, as new attacks and defense technologies are realized, new research and\nfuture overall prospects for 6G-enabled IoT are discussed.", "title": "Edge Learning for 6G-enabled Internet of Things: A Comprehensive Survey of Vulnerabilities, Datasets, and Defenses", "url": "http://arxiv.org/abs/2306.10309v1" }
null
null
no_new_dataset
admin
null
false
null
65a92c80-b04b-4046-9fc0-8b4eb585150d
null
Validated
2023-10-04 15:19:51.870472
{ "text_length": 1929 }
1no_new_dataset
TITLE: Bangla Text Dataset and Exploratory Analysis for Online Harassment Detection ABSTRACT: Being the seventh most spoken language in the world, the use of the Bangla language online has increased in recent times. Hence, it has become very important to analyze Bangla text data to maintain a safe and harassment-free online place. The data that has been made accessible in this article has been gathered and marked from the comments of people in public posts by celebrities, government officials, athletes on Facebook. The total amount of collected comments is 44001. The dataset is compiled with the aim of developing the ability of machines to differentiate whether a comment is a bully expression or not with the help of Natural Language Processing and to what extent it is improper if it is an inappropriate comment. The comments are labeled with different categories of harassment. Exploratory analysis from different perspectives is also included in this paper to have a detailed overview. Due to the scarcity of data collection of categorized Bengali language comments, this dataset can have a significant role for research in detecting bully words, identifying inappropriate comments, detecting different categories of Bengali bullies, etc. The dataset is publicly available at https://data.mendeley.com/datasets/9xjx8twk8p.
{ "abstract": "Being the seventh most spoken language in the world, the use of the Bangla\nlanguage online has increased in recent times. Hence, it has become very\nimportant to analyze Bangla text data to maintain a safe and harassment-free\nonline place. The data that has been made accessible in this article has been\ngathered and marked from the comments of people in public posts by celebrities,\ngovernment officials, athletes on Facebook. The total amount of collected\ncomments is 44001. The dataset is compiled with the aim of developing the\nability of machines to differentiate whether a comment is a bully expression or\nnot with the help of Natural Language Processing and to what extent it is\nimproper if it is an inappropriate comment. The comments are labeled with\ndifferent categories of harassment. Exploratory analysis from different\nperspectives is also included in this paper to have a detailed overview. Due to\nthe scarcity of data collection of categorized Bengali language comments, this\ndataset can have a significant role for research in detecting bully words,\nidentifying inappropriate comments, detecting different categories of Bengali\nbullies, etc. The dataset is publicly available at\nhttps://data.mendeley.com/datasets/9xjx8twk8p.", "title": "Bangla Text Dataset and Exploratory Analysis for Online Harassment Detection", "url": "http://arxiv.org/abs/2102.02478v1" }
null
null
new_dataset
admin
null
false
null
28daee8f-65cb-4692-99ec-ee428b14669f
null
Validated
2023-10-04 15:19:51.895995
{ "text_length": 1351 }
0new_dataset
TITLE: A Wideband Signal Recognition Dataset ABSTRACT: Signal recognition is a spectrum sensing problem that jointly requires detection, localization in time and frequency, and classification. This is a step beyond most spectrum sensing work which involves signal detection to estimate "present" or "not present" detections for either a single channel or fixed sized channels or classification which assumes a signal is present. We define the signal recognition task, present the metrics of precision and recall to the RF domain, and review recent machine-learning based approaches to this problem. We introduce a new dataset that is useful for training neural networks to perform these tasks and show a training framework to train wideband signal recognizers.
{ "abstract": "Signal recognition is a spectrum sensing problem that jointly requires\ndetection, localization in time and frequency, and classification. This is a\nstep beyond most spectrum sensing work which involves signal detection to\nestimate \"present\" or \"not present\" detections for either a single channel or\nfixed sized channels or classification which assumes a signal is present. We\ndefine the signal recognition task, present the metrics of precision and recall\nto the RF domain, and review recent machine-learning based approaches to this\nproblem. We introduce a new dataset that is useful for training neural networks\nto perform these tasks and show a training framework to train wideband signal\nrecognizers.", "title": "A Wideband Signal Recognition Dataset", "url": "http://arxiv.org/abs/2110.00518v1" }
null
null
no_new_dataset
admin
null
false
null
0880825e-0337-4018-b95d-6e5209e389dc
null
Validated
2023-10-04 15:19:51.891804
{ "text_length": 777 }
1no_new_dataset
TITLE: Open-Source Ground-based Sky Image Datasets for Very Short-term Solar Forecasting, Cloud Analysis and Modeling: A Comprehensive Survey ABSTRACT: Sky-image-based solar forecasting using deep learning has been recognized as a promising approach in reducing the uncertainty in solar power generation. However, one of the biggest challenges is the lack of massive and diversified sky image samples. In this study, we present a comprehensive survey of open-source ground-based sky image datasets for very short-term solar forecasting (i.e., forecasting horizon less than 30 minutes), as well as related research areas which can potentially help improve solar forecasting methods, including cloud segmentation, cloud classification and cloud motion prediction. We first identify 72 open-source sky image datasets that satisfy the needs of machine/deep learning. Then a database of information about various aspects of the identified datasets is constructed. To evaluate each surveyed datasets, we further develop a multi-criteria ranking system based on 8 dimensions of the datasets which could have important impacts on usage of the data. Finally, we provide insights on the usage of these datasets for different applications. We hope this paper can provide an overview for researchers who are looking for datasets for very short-term solar forecasting and related areas.
{ "abstract": "Sky-image-based solar forecasting using deep learning has been recognized as\na promising approach in reducing the uncertainty in solar power generation.\nHowever, one of the biggest challenges is the lack of massive and diversified\nsky image samples. In this study, we present a comprehensive survey of\nopen-source ground-based sky image datasets for very short-term solar\nforecasting (i.e., forecasting horizon less than 30 minutes), as well as\nrelated research areas which can potentially help improve solar forecasting\nmethods, including cloud segmentation, cloud classification and cloud motion\nprediction. We first identify 72 open-source sky image datasets that satisfy\nthe needs of machine/deep learning. Then a database of information about\nvarious aspects of the identified datasets is constructed. To evaluate each\nsurveyed datasets, we further develop a multi-criteria ranking system based on\n8 dimensions of the datasets which could have important impacts on usage of the\ndata. Finally, we provide insights on the usage of these datasets for different\napplications. We hope this paper can provide an overview for researchers who\nare looking for datasets for very short-term solar forecasting and related\nareas.", "title": "Open-Source Ground-based Sky Image Datasets for Very Short-term Solar Forecasting, Cloud Analysis and Modeling: A Comprehensive Survey", "url": "http://arxiv.org/abs/2211.14709v2" }
null
null
no_new_dataset
admin
null
false
null
fc922454-b38a-43cb-a96e-e81d3599d890
null
Validated
2023-10-04 15:19:51.882581
{ "text_length": 1390 }
1no_new_dataset
TITLE: CityNet: A Multi-city Multi-modal Dataset for Smart City Applications ABSTRACT: Data-driven approaches have been applied to many problems in urban computing. However, in the research community, such approaches are commonly studied under data from limited sources, and are thus unable to characterize the complexity of urban data coming from multiple entities and the correlations among them. Consequently, an inclusive and multifaceted dataset is necessary to facilitate more extensive studies on urban computing. In this paper, we present CityNet, a multi-modal urban dataset containing data from 7 cities, each of which coming from 3 data sources. We first present the generation process of CityNet as well as its basic properties. In addition, to facilitate the use of CityNet, we carry out extensive machine learning experiments, including spatio-temporal predictions, transfer learning, and reinforcement learning. The experimental results not only provide benchmarks for a wide range of tasks and methods, but also uncover internal correlations among cities and tasks within CityNet that, with adequate leverage, can improve performances on various tasks. With the benchmarking results and the correlations uncovered, we believe that CityNet can contribute to the field of urban computing by supporting research on many advanced topics.
{ "abstract": "Data-driven approaches have been applied to many problems in urban computing.\nHowever, in the research community, such approaches are commonly studied under\ndata from limited sources, and are thus unable to characterize the complexity\nof urban data coming from multiple entities and the correlations among them.\nConsequently, an inclusive and multifaceted dataset is necessary to facilitate\nmore extensive studies on urban computing. In this paper, we present CityNet, a\nmulti-modal urban dataset containing data from 7 cities, each of which coming\nfrom 3 data sources. We first present the generation process of CityNet as well\nas its basic properties. In addition, to facilitate the use of CityNet, we\ncarry out extensive machine learning experiments, including spatio-temporal\npredictions, transfer learning, and reinforcement learning. The experimental\nresults not only provide benchmarks for a wide range of tasks and methods, but\nalso uncover internal correlations among cities and tasks within CityNet that,\nwith adequate leverage, can improve performances on various tasks. With the\nbenchmarking results and the correlations uncovered, we believe that CityNet\ncan contribute to the field of urban computing by supporting research on many\nadvanced topics.", "title": "CityNet: A Multi-city Multi-modal Dataset for Smart City Applications", "url": "http://arxiv.org/abs/2106.15802v1" }
null
null
new_dataset
admin
null
false
null
1536bf23-0e92-4de3-af8e-8fdfe1f68469
null
Validated
2023-10-04 15:19:51.893932
{ "text_length": 1366 }
0new_dataset
TITLE: 3DSC - A New Dataset of Superconductors Including Crystal Structures ABSTRACT: Data-driven methods, in particular machine learning, can help to speed up the discovery of new materials by finding hidden patterns in existing data and using them to identify promising candidate materials. In the case of superconductors, which are a highly interesting but also a complex class of materials with many relevant applications, the use of data science tools is to date slowed down by a lack of accessible data. In this work, we present a new and publicly available superconductivity dataset ('3DSC'), featuring the critical temperature $T_\mathrm{c}$ of superconducting materials additionally to tested non-superconductors. In contrast to existing databases such as the SuperCon database which contains information on the chemical composition, the 3DSC is augmented by the approximate three-dimensional crystal structure of each material. We perform a statistical analysis and machine learning experiments to show that access to this structural information improves the prediction of the critical temperature $T_\mathrm{c}$ of materials. Furthermore, we see the 3DSC not as a finished dataset, but we provide ideas and directions for further research to improve the 3DSC in multiple ways. We are confident that this database will be useful in applying state-of-the-art machine learning methods to eventually find new superconductors.
{ "abstract": "Data-driven methods, in particular machine learning, can help to speed up the\ndiscovery of new materials by finding hidden patterns in existing data and\nusing them to identify promising candidate materials. In the case of\nsuperconductors, which are a highly interesting but also a complex class of\nmaterials with many relevant applications, the use of data science tools is to\ndate slowed down by a lack of accessible data. In this work, we present a new\nand publicly available superconductivity dataset ('3DSC'), featuring the\ncritical temperature $T_\\mathrm{c}$ of superconducting materials additionally\nto tested non-superconductors. In contrast to existing databases such as the\nSuperCon database which contains information on the chemical composition, the\n3DSC is augmented by the approximate three-dimensional crystal structure of\neach material. We perform a statistical analysis and machine learning\nexperiments to show that access to this structural information improves the\nprediction of the critical temperature $T_\\mathrm{c}$ of materials.\nFurthermore, we see the 3DSC not as a finished dataset, but we provide ideas\nand directions for further research to improve the 3DSC in multiple ways. We\nare confident that this database will be useful in applying state-of-the-art\nmachine learning methods to eventually find new superconductors.", "title": "3DSC - A New Dataset of Superconductors Including Crystal Structures", "url": "http://arxiv.org/abs/2212.06071v2" }
null
null
new_dataset
admin
null
false
null
4c86fea3-3d05-4d03-8807-925e61fadbe3
null
Validated
2023-10-04 15:19:51.882262
{ "text_length": 1449 }
0new_dataset
TITLE: Aerial Imagery Pile burn detection using Deep Learning: the FLAME dataset ABSTRACT: Wildfires are one of the costliest and deadliest natural disasters in the US, causing damage to millions of hectares of forest resources and threatening the lives of people and animals. Of particular importance are risks to firefighters and operational forces, which highlights the need for leveraging technology to minimize danger to people and property. FLAME (Fire Luminosity Airborne-based Machine learning Evaluation) offers a dataset of aerial images of fires along with methods for fire detection and segmentation which can help firefighters and researchers to develop optimal fire management strategies. This paper provides a fire image dataset collected by drones during a prescribed burning piled detritus in an Arizona pine forest. The dataset includes video recordings and thermal heatmaps captured by infrared cameras. The captured videos and images are annotated and labeled frame-wise to help researchers easily apply their fire detection and modeling algorithms. The paper also highlights solutions to two machine learning problems: (1) Binary classification of video frames based on the presence [and absence] of fire flames. An Artificial Neural Network (ANN) method is developed that achieved a 76% classification accuracy. (2) Fire detection using segmentation methods to precisely determine fire borders. A deep learning method is designed based on the U-Net up-sampling and down-sampling approach to extract a fire mask from the video frames. Our FLAME method approached a precision of 92% and a recall of 84%. Future research will expand the technique for free burning broadcast fire using thermal images.
{ "abstract": "Wildfires are one of the costliest and deadliest natural disasters in the US,\ncausing damage to millions of hectares of forest resources and threatening the\nlives of people and animals. Of particular importance are risks to firefighters\nand operational forces, which highlights the need for leveraging technology to\nminimize danger to people and property. FLAME (Fire Luminosity Airborne-based\nMachine learning Evaluation) offers a dataset of aerial images of fires along\nwith methods for fire detection and segmentation which can help firefighters\nand researchers to develop optimal fire management strategies. This paper\nprovides a fire image dataset collected by drones during a prescribed burning\npiled detritus in an Arizona pine forest. The dataset includes video recordings\nand thermal heatmaps captured by infrared cameras. The captured videos and\nimages are annotated and labeled frame-wise to help researchers easily apply\ntheir fire detection and modeling algorithms. The paper also highlights\nsolutions to two machine learning problems: (1) Binary classification of video\nframes based on the presence [and absence] of fire flames. An Artificial Neural\nNetwork (ANN) method is developed that achieved a 76% classification accuracy.\n(2) Fire detection using segmentation methods to precisely determine fire\nborders. A deep learning method is designed based on the U-Net up-sampling and\ndown-sampling approach to extract a fire mask from the video frames. Our FLAME\nmethod approached a precision of 92% and a recall of 84%. Future research will\nexpand the technique for free burning broadcast fire using thermal images.", "title": "Aerial Imagery Pile burn detection using Deep Learning: the FLAME dataset", "url": "http://arxiv.org/abs/2012.14036v1" }
null
null
new_dataset
admin
null
false
null
04da8f6d-205e-4410-899b-2a02b014c8ad
null
Validated
2023-10-04 15:19:51.896499
{ "text_length": 1736 }
0new_dataset
TITLE: TweetDIS: A Large Twitter Dataset for Natural Disasters Built using Weak Supervision ABSTRACT: Social media is often utilized as a lifeline for communication during natural disasters. Traditionally, natural disaster tweets are filtered from the Twitter stream using the name of the natural disaster and the filtered tweets are sent for human annotation. The process of human annotation to create labeled sets for machine learning models is laborious, time consuming, at times inaccurate, and more importantly not scalable in terms of size and real-time use. In this work, we curate a silver standard dataset using weak supervision. In order to validate its utility, we train machine learning models on the weakly supervised data to identify three different types of natural disasters i.e earthquakes, hurricanes and floods. Our results demonstrate that models trained on the silver standard dataset achieved performance greater than 90% when classifying a manually curated, gold-standard dataset. To enable reproducible research and additional downstream utility, we release the silver standard dataset for the scientific community.
{ "abstract": "Social media is often utilized as a lifeline for communication during natural\ndisasters. Traditionally, natural disaster tweets are filtered from the Twitter\nstream using the name of the natural disaster and the filtered tweets are sent\nfor human annotation. The process of human annotation to create labeled sets\nfor machine learning models is laborious, time consuming, at times inaccurate,\nand more importantly not scalable in terms of size and real-time use. In this\nwork, we curate a silver standard dataset using weak supervision. In order to\nvalidate its utility, we train machine learning models on the weakly supervised\ndata to identify three different types of natural disasters i.e earthquakes,\nhurricanes and floods. Our results demonstrate that models trained on the\nsilver standard dataset achieved performance greater than 90% when classifying\na manually curated, gold-standard dataset. To enable reproducible research and\nadditional downstream utility, we release the silver standard dataset for the\nscientific community.", "title": "TweetDIS: A Large Twitter Dataset for Natural Disasters Built using Weak Supervision", "url": "http://arxiv.org/abs/2207.04947v1" }
null
null
new_dataset
admin
null
false
null
cee0e736-5c41-4882-affa-1e27cafad9b2
null
Validated
2023-10-04 15:19:51.885368
{ "text_length": 1156 }
0new_dataset
TITLE: A FAIR and AI-ready Higgs boson decay dataset ABSTRACT: To enable the reusability of massive scientific datasets by humans and machines, researchers aim to adhere to the principles of findability, accessibility, interoperability, and reusability (FAIR) for data and artificial intelligence (AI) models. This article provides a domain-agnostic, step-by-step assessment guide to evaluate whether or not a given dataset meets these principles. We demonstrate how to use this guide to evaluate the FAIRness of an open simulated dataset produced by the CMS Collaboration at the CERN Large Hadron Collider. This dataset consists of Higgs boson decays and quark and gluon background, and is available through the CERN Open Data Portal. We use additional available tools to assess the FAIRness of this dataset, and incorporate feedback from members of the FAIR community to validate our results. This article is accompanied by a Jupyter notebook to visualize and explore this dataset. This study marks the first in a planned series of articles that will guide scientists in the creation of FAIR AI models and datasets in high energy particle physics.
{ "abstract": "To enable the reusability of massive scientific datasets by humans and\nmachines, researchers aim to adhere to the principles of findability,\naccessibility, interoperability, and reusability (FAIR) for data and artificial\nintelligence (AI) models. This article provides a domain-agnostic, step-by-step\nassessment guide to evaluate whether or not a given dataset meets these\nprinciples. We demonstrate how to use this guide to evaluate the FAIRness of an\nopen simulated dataset produced by the CMS Collaboration at the CERN Large\nHadron Collider. This dataset consists of Higgs boson decays and quark and\ngluon background, and is available through the CERN Open Data Portal. We use\nadditional available tools to assess the FAIRness of this dataset, and\nincorporate feedback from members of the FAIR community to validate our\nresults. This article is accompanied by a Jupyter notebook to visualize and\nexplore this dataset. This study marks the first in a planned series of\narticles that will guide scientists in the creation of FAIR AI models and\ndatasets in high energy particle physics.", "title": "A FAIR and AI-ready Higgs boson decay dataset", "url": "http://arxiv.org/abs/2108.02214v2" }
null
null
new_dataset
admin
null
false
null
36892492-7629-4e37-97bb-c4ea6d9ff7b7
null
Validated
2023-10-04 15:19:51.893220
{ "text_length": 1166 }
0new_dataset
TITLE: DuReader_robust: A Chinese Dataset Towards Evaluating Robustness and Generalization of Machine Reading Comprehension in Real-World Applications ABSTRACT: Machine reading comprehension (MRC) is a crucial task in natural language processing and has achieved remarkable advancements. However, most of the neural MRC models are still far from robust and fail to generalize well in real-world applications. In order to comprehensively verify the robustness and generalization of MRC models, we introduce a real-world Chinese dataset -- DuReader_robust. It is designed to evaluate the MRC models from three aspects: over-sensitivity, over-stability and generalization. Comparing to previous work, the instances in DuReader_robust are natural texts, rather than the altered unnatural texts. It presents the challenges when applying MRC models to real-world applications. The experimental results show that MRC models do not perform well on the challenge test set. Moreover, we analyze the behavior of existing models on the challenge test set, which may provide suggestions for future model development. The dataset and codes are publicly available at https://github.com/baidu/DuReader.
{ "abstract": "Machine reading comprehension (MRC) is a crucial task in natural language\nprocessing and has achieved remarkable advancements. However, most of the\nneural MRC models are still far from robust and fail to generalize well in\nreal-world applications. In order to comprehensively verify the robustness and\ngeneralization of MRC models, we introduce a real-world Chinese dataset --\nDuReader_robust. It is designed to evaluate the MRC models from three aspects:\nover-sensitivity, over-stability and generalization. Comparing to previous\nwork, the instances in DuReader_robust are natural texts, rather than the\naltered unnatural texts. It presents the challenges when applying MRC models to\nreal-world applications. The experimental results show that MRC models do not\nperform well on the challenge test set. Moreover, we analyze the behavior of\nexisting models on the challenge test set, which may provide suggestions for\nfuture model development. The dataset and codes are publicly available at\nhttps://github.com/baidu/DuReader.", "title": "DuReader_robust: A Chinese Dataset Towards Evaluating Robustness and Generalization of Machine Reading Comprehension in Real-World Applications", "url": "http://arxiv.org/abs/2004.11142v2" }
null
null
new_dataset
admin
null
false
null
1328354b-0919-4070-949f-efbc0212e99f
null
Validated
2023-10-04 15:19:51.900805
{ "text_length": 1203 }
0new_dataset
TITLE: High-Dimensional Feature Selection for Genomic Datasets ABSTRACT: A central problem in machine learning and pattern recognition is the process of recognizing the most important features. In this paper, we provide a new feature selection method (DRPT) that consists of first removing the irrelevant features and then detecting correlations between the remaining features. Let $D=[A\mid \mathbf{b}]$ be a dataset, where $\mathbf{b}$ is the class label and $A$ is a matrix whose columns are the features. We solve $A\mathbf{x} = \mathbf{b}$ using the least squares method and the pseudo-inverse of $A$. Each component of $\mathbf{x}$ can be viewed as an assigned weight to the corresponding column (feature). We define a threshold based on the local maxima of $\mathbf{x}$ and remove those features whose weights are smaller than the threshold. To detect the correlations in the reduced matrix, which we still call $A$, we consider a perturbation $\tilde A$ of $A$. We prove that correlations are encoded in $\Delta\mathbf{x}=\mid \mathbf{x} -\tilde{\mathbf{x}}\mid $, where $\tilde{\mathbf{x}}$ is the least quares solution of $\tilde A\tilde{\mathbf{x}}=\mathbf{b}$. We cluster features first based on $\Delta\mathbf{x}$ and then using the entropy of features. Finally, a feature is selected from each sub-cluster based on its weight and entropy. The effectiveness of DRPT has been verified by performing a series of comparisons with seven state-of-the-art feature selection methods over ten genetic datasets ranging up from 9,117 to 267,604 features. The results show that, over all, the performance of DRPT is favorable in several aspects compared to each feature selection algorithm. \e
{ "abstract": "A central problem in machine learning and pattern recognition is the process\nof recognizing the most important features. In this paper, we provide a new\nfeature selection method (DRPT) that consists of first removing the irrelevant\nfeatures and then detecting correlations between the remaining features. Let\n$D=[A\\mid \\mathbf{b}]$ be a dataset, where $\\mathbf{b}$ is the class label and\n$A$ is a matrix whose columns are the features. We solve $A\\mathbf{x} =\n\\mathbf{b}$ using the least squares method and the pseudo-inverse of $A$. Each\ncomponent of $\\mathbf{x}$ can be viewed as an assigned weight to the\ncorresponding column (feature). We define a threshold based on the local maxima\nof $\\mathbf{x}$ and remove those features whose weights are smaller than the\nthreshold.\n To detect the correlations in the reduced matrix, which we still call $A$, we\nconsider a perturbation $\\tilde A$ of $A$. We prove that correlations are\nencoded in $\\Delta\\mathbf{x}=\\mid \\mathbf{x} -\\tilde{\\mathbf{x}}\\mid $, where\n$\\tilde{\\mathbf{x}}$ is the least quares solution of\n $\\tilde A\\tilde{\\mathbf{x}}=\\mathbf{b}$. We cluster features first based on\n$\\Delta\\mathbf{x}$ and then using the entropy of features. Finally, a feature\nis selected from each sub-cluster based on its weight and entropy. The\neffectiveness of DRPT has been verified by performing a series of comparisons\nwith seven state-of-the-art feature selection methods over ten genetic datasets\nranging up from 9,117 to 267,604 features. The results show that, over all, the\nperformance of DRPT is favorable in several aspects compared to each feature\nselection algorithm.\n \\e", "title": "High-Dimensional Feature Selection for Genomic Datasets", "url": "http://arxiv.org/abs/2002.12104v2" }
null
null
no_new_dataset
admin
null
false
null
29b240e8-770d-4bfb-a098-f62cccc36e25
null
Validated
2023-10-04 15:19:51.901481
{ "text_length": 1718 }
1no_new_dataset
TITLE: PLOD: An Abbreviation Detection Dataset for Scientific Documents ABSTRACT: The detection and extraction of abbreviations from unstructured texts can help to improve the performance of Natural Language Processing tasks, such as machine translation and information retrieval. However, in terms of publicly available datasets, there is not enough data for training deep-neural-networks-based models to the point of generalising well over data. This paper presents PLOD, a large-scale dataset for abbreviation detection and extraction that contains 160k+ segments automatically annotated with abbreviations and their long forms. We performed manual validation over a set of instances and a complete automatic validation for this dataset. We then used it to generate several baseline models for detecting abbreviations and long forms. The best models achieved an F1-score of 0.92 for abbreviations and 0.89 for detecting their corresponding long forms. We release this dataset along with our code and all the models publicly in https://github.com/surrey-nlp/PLOD-AbbreviationDetection
{ "abstract": "The detection and extraction of abbreviations from unstructured texts can\nhelp to improve the performance of Natural Language Processing tasks, such as\nmachine translation and information retrieval. However, in terms of publicly\navailable datasets, there is not enough data for training\ndeep-neural-networks-based models to the point of generalising well over data.\nThis paper presents PLOD, a large-scale dataset for abbreviation detection and\nextraction that contains 160k+ segments automatically annotated with\nabbreviations and their long forms. We performed manual validation over a set\nof instances and a complete automatic validation for this dataset. We then used\nit to generate several baseline models for detecting abbreviations and long\nforms. The best models achieved an F1-score of 0.92 for abbreviations and 0.89\nfor detecting their corresponding long forms. We release this dataset along\nwith our code and all the models publicly in\nhttps://github.com/surrey-nlp/PLOD-AbbreviationDetection", "title": "PLOD: An Abbreviation Detection Dataset for Scientific Documents", "url": "http://arxiv.org/abs/2204.12061v2" }
null
null
new_dataset
admin
null
false
null
49ce570d-c32e-4ed3-8109-e18a1ec22cfd
null
Validated
2023-10-04 15:19:51.886892
{ "text_length": 1103 }
0new_dataset
TITLE: Integration of Feature Selection Techniques using a Sleep Quality Dataset for Comparing Regression Algorithms ABSTRACT: This research aims to examine the usefulness of integrating various feature selection methods with regression algorithms for sleep quality prediction. A publicly accessible sleep quality dataset is used to analyze the effect of different feature selection techniques on the performance of four regression algorithms - Linear regression, Ridge regression, Lasso Regression and Random Forest Regressor. The results are compared to determine the optimal combination of feature selection techniques and regression algorithms. The conclusion of the study enriches the current literature on using machine learning for sleep quality prediction and has practical significance for personalizing sleep recommendations for individuals.
{ "abstract": "This research aims to examine the usefulness of integrating various feature\nselection methods with regression algorithms for sleep quality prediction. A\npublicly accessible sleep quality dataset is used to analyze the effect of\ndifferent feature selection techniques on the performance of four regression\nalgorithms - Linear regression, Ridge regression, Lasso Regression and Random\nForest Regressor. The results are compared to determine the optimal combination\nof feature selection techniques and regression algorithms. The conclusion of\nthe study enriches the current literature on using machine learning for sleep\nquality prediction and has practical significance for personalizing sleep\nrecommendations for individuals.", "title": "Integration of Feature Selection Techniques using a Sleep Quality Dataset for Comparing Regression Algorithms", "url": "http://arxiv.org/abs/2303.02467v1" }
null
null
no_new_dataset
admin
null
false
null
1e5818bc-7639-4555-bdbb-9388c419eb96
null
Validated
2023-10-04 15:19:51.880703
{ "text_length": 868 }
1no_new_dataset
TITLE: BiRdQA: A Bilingual Dataset for Question Answering on Tricky Riddles ABSTRACT: A riddle is a question or statement with double or veiled meanings, followed by an unexpected answer. Solving riddle is a challenging task for both machine and human, testing the capability of understanding figurative, creative natural language and reasoning with commonsense knowledge. We introduce BiRdQA, a bilingual multiple-choice question answering dataset with 6614 English riddles and 8751 Chinese riddles. For each riddle-answer pair, we provide four distractors with additional information from Wikipedia. The distractors are automatically generated at scale with minimal bias. Existing monolingual and multilingual QA models fail to perform well on our dataset, indicating that there is a long way to go before machine can beat human on solving tricky riddles. The dataset has been released to the community.
{ "abstract": "A riddle is a question or statement with double or veiled meanings, followed\nby an unexpected answer. Solving riddle is a challenging task for both machine\nand human, testing the capability of understanding figurative, creative natural\nlanguage and reasoning with commonsense knowledge. We introduce BiRdQA, a\nbilingual multiple-choice question answering dataset with 6614 English riddles\nand 8751 Chinese riddles. For each riddle-answer pair, we provide four\ndistractors with additional information from Wikipedia. The distractors are\nautomatically generated at scale with minimal bias. Existing monolingual and\nmultilingual QA models fail to perform well on our dataset, indicating that\nthere is a long way to go before machine can beat human on solving tricky\nriddles. The dataset has been released to the community.", "title": "BiRdQA: A Bilingual Dataset for Question Answering on Tricky Riddles", "url": "http://arxiv.org/abs/2109.11087v2" }
null
null
new_dataset
admin
null
false
null
039a5a67-2e67-40ac-8b05-939db7e0d062
null
Validated
2023-10-04 15:19:51.891971
{ "text_length": 922 }
0new_dataset
TITLE: PETCI: A Parallel English Translation Dataset of Chinese Idioms ABSTRACT: Idioms are an important language phenomenon in Chinese, but idiom translation is notoriously hard. Current machine translation models perform poorly on idiom translation, while idioms are sparse in many translation datasets. We present PETCI, a parallel English translation dataset of Chinese idioms, aiming to improve idiom translation by both human and machine. The dataset is built by leveraging human and machine effort. Baseline generation models show unsatisfactory abilities to improve translation, but structure-aware classification models show good performance on distinguishing good translations. Furthermore, the size of PETCI can be easily increased without expertise. Overall, PETCI can be helpful to language learners and machine translation systems.
{ "abstract": "Idioms are an important language phenomenon in Chinese, but idiom translation\nis notoriously hard. Current machine translation models perform poorly on idiom\ntranslation, while idioms are sparse in many translation datasets. We present\nPETCI, a parallel English translation dataset of Chinese idioms, aiming to\nimprove idiom translation by both human and machine. The dataset is built by\nleveraging human and machine effort. Baseline generation models show\nunsatisfactory abilities to improve translation, but structure-aware\nclassification models show good performance on distinguishing good\ntranslations. Furthermore, the size of PETCI can be easily increased without\nexpertise. Overall, PETCI can be helpful to language learners and machine\ntranslation systems.", "title": "PETCI: A Parallel English Translation Dataset of Chinese Idioms", "url": "http://arxiv.org/abs/2202.09509v1" }
null
null
new_dataset
admin
null
false
null
059f20e4-b930-4657-8b6e-0f207e611471
null
Validated
2023-10-04 15:19:51.888220
{ "text_length": 862 }
0new_dataset
TITLE: Scalable Data Annotation Pipeline for High-Quality Large Speech Datasets Development ABSTRACT: This paper introduces a human-in-the-loop (HITL) data annotation pipeline to generate high-quality, large-scale speech datasets. The pipeline combines human and machine advantages to more quickly, accurately, and cost-effectively annotate datasets with machine pre-labeling and fully manual auditing. Quality control mechanisms such as blind testing, behavior monitoring, and data validation have been adopted in the annotation pipeline to mitigate potential bias introduced by machine-generated labels. Our A/B testing and pilot results demonstrated the HITL pipeline can improve annotation speed and capacity by at least 80% and quality is comparable to or higher than manual double pass annotation. We are leveraging this scalable pipeline to create and continuously grow ultra-high volume off-the-shelf (UHV-OTS) speech corpora for multiple languages, with the capability to expand to 10,000+ hours per language annually. Customized datasets can be produced from the UHV-OTS corpora using dynamic packaging. UHV-OTS is a long-term Appen project to support commercial and academic research data needs in speech processing. Appen will donate a number of free speech datasets from the UHV-OTS each year to support academic and open source community research under the CC-BY-SA license. We are also releasing the code of the data pre-processing and pre-tagging pipeline under the Apache 2.0 license to allow reproduction of the results reported in the paper.
{ "abstract": "This paper introduces a human-in-the-loop (HITL) data annotation pipeline to\ngenerate high-quality, large-scale speech datasets. The pipeline combines human\nand machine advantages to more quickly, accurately, and cost-effectively\nannotate datasets with machine pre-labeling and fully manual auditing. Quality\ncontrol mechanisms such as blind testing, behavior monitoring, and data\nvalidation have been adopted in the annotation pipeline to mitigate potential\nbias introduced by machine-generated labels. Our A/B testing and pilot results\ndemonstrated the HITL pipeline can improve annotation speed and capacity by at\nleast 80% and quality is comparable to or higher than manual double pass\nannotation. We are leveraging this scalable pipeline to create and continuously\ngrow ultra-high volume off-the-shelf (UHV-OTS) speech corpora for multiple\nlanguages, with the capability to expand to 10,000+ hours per language\nannually. Customized datasets can be produced from the UHV-OTS corpora using\ndynamic packaging. UHV-OTS is a long-term Appen project to support commercial\nand academic research data needs in speech processing. Appen will donate a\nnumber of free speech datasets from the UHV-OTS each year to support academic\nand open source community research under the CC-BY-SA license. We are also\nreleasing the code of the data pre-processing and pre-tagging pipeline under\nthe Apache 2.0 license to allow reproduction of the results reported in the\npaper.", "title": "Scalable Data Annotation Pipeline for High-Quality Large Speech Datasets Development", "url": "http://arxiv.org/abs/2109.01164v1" }
null
null
no_new_dataset
admin
null
false
null
32d14287-160c-4093-8000-895c55529aab
null
Validated
2023-10-04 15:19:51.892362
{ "text_length": 1577 }
1no_new_dataset
TITLE: Fitting a Collider in a Quantum Computer: Tackling the Challenges of Quantum Machine Learning for Big Datasets ABSTRACT: Current quantum systems have significant limitations affecting the processing of large datasets with high dimensionality, typical of high energy physics. In the present paper, feature and data prototype selection techniques were studied to tackle this challenge. A grid search was performed and quantum machine learning models were trained and benchmarked against classical shallow machine learning methods, trained both in the reduced and the complete datasets. The performance of the quantum algorithms was found to be comparable to the classical ones, even when using large datasets. Sequential Backward Selection and Principal Component Analysis techniques were used for feature's selection and while the former can produce the better quantum machine learning models in specific cases, it is more unstable. Additionally, we show that such variability in the results is caused by the use of discrete variables, highlighting the suitability of Principal Component analysis transformed data for quantum machine learning applications in the high energy physics context.
{ "abstract": "Current quantum systems have significant limitations affecting the processing\nof large datasets with high dimensionality, typical of high energy physics. In\nthe present paper, feature and data prototype selection techniques were studied\nto tackle this challenge. A grid search was performed and quantum machine\nlearning models were trained and benchmarked against classical shallow machine\nlearning methods, trained both in the reduced and the complete datasets. The\nperformance of the quantum algorithms was found to be comparable to the\nclassical ones, even when using large datasets. Sequential Backward Selection\nand Principal Component Analysis techniques were used for feature's selection\nand while the former can produce the better quantum machine learning models in\nspecific cases, it is more unstable. Additionally, we show that such\nvariability in the results is caused by the use of discrete variables,\nhighlighting the suitability of Principal Component analysis transformed data\nfor quantum machine learning applications in the high energy physics context.", "title": "Fitting a Collider in a Quantum Computer: Tackling the Challenges of Quantum Machine Learning for Big Datasets", "url": "http://arxiv.org/abs/2211.03233v3" }
null
null
no_new_dataset
admin
null
false
null
35431acd-0816-4cda-8aef-50c1cab95c25
null
Validated
2023-10-04 15:19:51.883019
{ "text_length": 1214 }
1no_new_dataset
TITLE: How complex is the microarray dataset? A novel data complexity metric for biological high-dimensional microarray data ABSTRACT: Data complexity analysis quantifies the hardness of constructing a predictive model on a given dataset. However, the effectiveness of existing data complexity measures can be challenged by the existence of irrelevant features and feature interactions in biological micro-array data. We propose a novel data complexity measure, depth, that leverages an evolutionary inspired feature selection algorithm to quantify the complexity of micro-array data. By examining feature subsets of varying sizes, the approach offers a novel perspective on data complexity analysis. Unlike traditional metrics, depth is robust to irrelevant features and effectively captures complexity stemming from feature interactions. On synthetic micro-array data, depth outperforms existing methods in robustness to irrelevant features and identifying complexity from feature interactions. Applied to case-control genotype and gene-expression micro-array datasets, the results reveal that a single feature of gene-expression data can account for over 90% of the performance of multi-feature model, confirming the adequacy of the commonly used differentially expressed gene (DEG) feature selection method for the gene expression data. Our study also demonstrates that constructing predictive models for genotype data is harder than gene expression data. The results in this paper provide evidence for the use of interpretable machine learning algorithms on microarray data.
{ "abstract": "Data complexity analysis quantifies the hardness of constructing a predictive\nmodel on a given dataset. However, the effectiveness of existing data\ncomplexity measures can be challenged by the existence of irrelevant features\nand feature interactions in biological micro-array data. We propose a novel\ndata complexity measure, depth, that leverages an evolutionary inspired feature\nselection algorithm to quantify the complexity of micro-array data. By\nexamining feature subsets of varying sizes, the approach offers a novel\nperspective on data complexity analysis. Unlike traditional metrics, depth is\nrobust to irrelevant features and effectively captures complexity stemming from\nfeature interactions. On synthetic micro-array data, depth outperforms existing\nmethods in robustness to irrelevant features and identifying complexity from\nfeature interactions. Applied to case-control genotype and gene-expression\nmicro-array datasets, the results reveal that a single feature of\ngene-expression data can account for over 90% of the performance of\nmulti-feature model, confirming the adequacy of the commonly used\ndifferentially expressed gene (DEG) feature selection method for the gene\nexpression data. Our study also demonstrates that constructing predictive\nmodels for genotype data is harder than gene expression data. The results in\nthis paper provide evidence for the use of interpretable machine learning\nalgorithms on microarray data.", "title": "How complex is the microarray dataset? A novel data complexity metric for biological high-dimensional microarray data", "url": "http://arxiv.org/abs/2308.06430v1" }
null
null
no_new_dataset
admin
null
false
null
693316a9-54e8-4a0a-9ce9-e301c7ad95fb
null
Validated
2023-10-04 15:19:51.864204
{ "text_length": 1596 }
1no_new_dataset
TITLE: The Anatomy of Video Editing: A Dataset and Benchmark Suite for AI-Assisted Video Editing ABSTRACT: Machine learning is transforming the video editing industry. Recent advances in computer vision have leveled-up video editing tasks such as intelligent reframing, rotoscoping, color grading, or applying digital makeups. However, most of the solutions have focused on video manipulation and VFX. This work introduces the Anatomy of Video Editing, a dataset, and benchmark, to foster research in AI-assisted video editing. Our benchmark suite focuses on video editing tasks, beyond visual effects, such as automatic footage organization and assisted video assembling. To enable research on these fronts, we annotate more than 1.5M tags, with relevant concepts to cinematography, from 196176 shots sampled from movie scenes. We establish competitive baseline methods and detailed analyses for each of the tasks. We hope our work sparks innovative research towards underexplored areas of AI-assisted video editing.
{ "abstract": "Machine learning is transforming the video editing industry. Recent advances\nin computer vision have leveled-up video editing tasks such as intelligent\nreframing, rotoscoping, color grading, or applying digital makeups. However,\nmost of the solutions have focused on video manipulation and VFX. This work\nintroduces the Anatomy of Video Editing, a dataset, and benchmark, to foster\nresearch in AI-assisted video editing. Our benchmark suite focuses on video\nediting tasks, beyond visual effects, such as automatic footage organization\nand assisted video assembling. To enable research on these fronts, we annotate\nmore than 1.5M tags, with relevant concepts to cinematography, from 196176\nshots sampled from movie scenes. We establish competitive baseline methods and\ndetailed analyses for each of the tasks. We hope our work sparks innovative\nresearch towards underexplored areas of AI-assisted video editing.", "title": "The Anatomy of Video Editing: A Dataset and Benchmark Suite for AI-Assisted Video Editing", "url": "http://arxiv.org/abs/2207.09812v2" }
null
null
new_dataset
admin
null
false
null
11319163-7853-4697-90fd-84e355a86b50
null
Validated
2023-10-04 15:19:51.885223
{ "text_length": 1034 }
0new_dataset
TITLE: Learning to Taste: A Multimodal Wine Dataset ABSTRACT: We present WineSensed, a large multimodal wine dataset for studying the relations between visual perception, language, and flavor. The dataset encompasses 897k images of wine labels and 824k reviews of wines curated from the Vivino platform. It has over 350k unique vintages, annotated with year, region, rating, alcohol percentage, price, and grape composition. We obtained fine-grained flavor annotations on a subset by conducting a wine-tasting experiment with 256 participants who were asked to rank wines based on their similarity in flavor, resulting in more than 5k pairwise flavor distances. We propose a low-dimensional concept embedding algorithm that combines human experience with automatic machine similarity kernels. We demonstrate that this shared concept embedding space improves upon separate embedding spaces for coarse flavor classification (alcohol percentage, country, grape, price, rating) and aligns with the intricate human perception of flavor.
{ "abstract": "We present WineSensed, a large multimodal wine dataset for studying the\nrelations between visual perception, language, and flavor. The dataset\nencompasses 897k images of wine labels and 824k reviews of wines curated from\nthe Vivino platform. It has over 350k unique vintages, annotated with year,\nregion, rating, alcohol percentage, price, and grape composition. We obtained\nfine-grained flavor annotations on a subset by conducting a wine-tasting\nexperiment with 256 participants who were asked to rank wines based on their\nsimilarity in flavor, resulting in more than 5k pairwise flavor distances. We\npropose a low-dimensional concept embedding algorithm that combines human\nexperience with automatic machine similarity kernels. We demonstrate that this\nshared concept embedding space improves upon separate embedding spaces for\ncoarse flavor classification (alcohol percentage, country, grape, price,\nrating) and aligns with the intricate human perception of flavor.", "title": "Learning to Taste: A Multimodal Wine Dataset", "url": "http://arxiv.org/abs/2308.16900v3" }
null
null
new_dataset
admin
null
false
null
191f9e97-60a1-433b-93bb-626c11328f9c
null
Validated
2023-10-04 15:19:51.863863
{ "text_length": 1048 }
0new_dataset
TITLE: A large scale multi-view RGBD visual affordance learning dataset ABSTRACT: The physical and textural attributes of objects have been widely studied for recognition, detection and segmentation tasks in computer vision.~A number of datasets, such as large scale ImageNet, have been proposed for feature learning using data hungry deep neural networks and for hand-crafted feature extraction. To intelligently interact with objects, robots and intelligent machines need the ability to infer beyond the traditional physical/textural attributes, and understand/learn visual cues, called visual affordances, for affordance recognition, detection and segmentation. To date there is no publicly available large dataset for visual affordance understanding and learning. In this paper, we introduce a large scale multi-view RGBD visual affordance learning dataset, a benchmark of 47210 RGBD images from 37 object categories, annotated with 15 visual affordance categories. To the best of our knowledge, this is the first ever and the largest multi-view RGBD visual affordance learning dataset. We benchmark the proposed dataset for affordance segmentation and recognition tasks using popular Vision Transformer and Convolutional Neural Networks. Several state-of-the-art deep learning networks are evaluated each for affordance recognition and segmentation tasks. Our experimental results showcase the challenging nature of the dataset and present definite prospects for new and robust affordance learning algorithms. The dataset is publicly available at https://sites.google.com/view/afaqshah/dataset.
{ "abstract": "The physical and textural attributes of objects have been widely studied for\nrecognition, detection and segmentation tasks in computer vision.~A number of\ndatasets, such as large scale ImageNet, have been proposed for feature learning\nusing data hungry deep neural networks and for hand-crafted feature extraction.\nTo intelligently interact with objects, robots and intelligent machines need\nthe ability to infer beyond the traditional physical/textural attributes, and\nunderstand/learn visual cues, called visual affordances, for affordance\nrecognition, detection and segmentation. To date there is no publicly available\nlarge dataset for visual affordance understanding and learning. In this paper,\nwe introduce a large scale multi-view RGBD visual affordance learning dataset,\na benchmark of 47210 RGBD images from 37 object categories, annotated with 15\nvisual affordance categories. To the best of our knowledge, this is the first\never and the largest multi-view RGBD visual affordance learning dataset. We\nbenchmark the proposed dataset for affordance segmentation and recognition\ntasks using popular Vision Transformer and Convolutional Neural Networks.\nSeveral state-of-the-art deep learning networks are evaluated each for\naffordance recognition and segmentation tasks. Our experimental results\nshowcase the challenging nature of the dataset and present definite prospects\nfor new and robust affordance learning algorithms. The dataset is publicly\navailable at https://sites.google.com/view/afaqshah/dataset.", "title": "A large scale multi-view RGBD visual affordance learning dataset", "url": "http://arxiv.org/abs/2203.14092v3" }
null
null
new_dataset
admin
null
false
null
436edb4d-7071-40c1-adb7-dd7dbbddd46e
null
Validated
2023-10-04 15:19:51.887429
{ "text_length": 1616 }
0new_dataset
TITLE: Creating a Dataset for High-Performance Computing Code Translation using LLMs: A Bridge Between OpenMP Fortran and C++ ABSTRACT: In this study, we present a novel dataset for training machine learning models translating between OpenMP Fortran and C++ code. To ensure reliability and applicability, the dataset is created from a range of representative open-source OpenMP benchmarks. It is also refined using a meticulous code similarity test. The effectiveness of our dataset is assessed using both quantitative (CodeBLEU) and qualitative (human evaluation) methods. We showcase how this dataset significantly elevates the translation competencies of large language models (LLMs). Specifically, models without prior coding knowledge experienced a boost of $\mathbf{\times~5.1}$ in their CodeBLEU scores, while models with some coding familiarity saw an impressive $\mathbf{\times~9.9}$-fold increase. The best fine-tuned model using our dataset outperforms GPT-4. It is also reaching human-level accuracy. This work underscores the immense potential of our dataset in propelling advancements in the domain of code translation for high-performance computing. The dataset is accessible at \href{https://github.com/bin123apple/Fortran-CPP-HPC-code-translation-dataset}{OpenMP-Fortran-CPP-Translation}.
{ "abstract": "In this study, we present a novel dataset for training machine learning\nmodels translating between OpenMP Fortran and C++ code. To ensure reliability\nand applicability, the dataset is created from a range of representative\nopen-source OpenMP benchmarks. It is also refined using a meticulous code\nsimilarity test. The effectiveness of our dataset is assessed using both\nquantitative (CodeBLEU) and qualitative (human evaluation) methods. We showcase\nhow this dataset significantly elevates the translation competencies of large\nlanguage models (LLMs). Specifically, models without prior coding knowledge\nexperienced a boost of $\\mathbf{\\times~5.1}$ in their CodeBLEU scores, while\nmodels with some coding familiarity saw an impressive\n$\\mathbf{\\times~9.9}$-fold increase. The best fine-tuned model using our\ndataset outperforms GPT-4. It is also reaching human-level accuracy. This work\nunderscores the immense potential of our dataset in propelling advancements in\nthe domain of code translation for high-performance computing. The dataset is\naccessible at\n\\href{https://github.com/bin123apple/Fortran-CPP-HPC-code-translation-dataset}{OpenMP-Fortran-CPP-Translation}.", "title": "Creating a Dataset for High-Performance Computing Code Translation using LLMs: A Bridge Between OpenMP Fortran and C++", "url": "http://arxiv.org/abs/2307.07686v4" }
null
null
new_dataset
admin
null
false
null
97f47c33-6bc8-4ad4-a478-d0631fbd7b1e
null
Validated
2023-10-04 15:19:51.867272
{ "text_length": 1322 }
0new_dataset
TITLE: Avast-CTU Public CAPE Dataset ABSTRACT: There is a limited amount of publicly available data to support research in malware analysis technology. Particularly, there are virtually no publicly available datasets generated from rich sandboxes such as Cuckoo/CAPE. The benefit of using dynamic sandboxes is the realistic simulation of file execution in the target machine and obtaining a log of such execution. The machine can be infected by malware hence there is a good chance of capturing the malicious behavior in the execution logs, thus allowing researchers to study such behavior in detail. Although the subsequent analysis of log information is extensively covered in industrial cybersecurity backends, to our knowledge there has been only limited effort invested in academia to advance such log analysis capabilities using cutting edge techniques. We make this sample dataset available to support designing new machine learning methods for malware detection, especially for automatic detection of generic malicious behavior. The dataset has been collected in cooperation between Avast Software and Czech Technical University - AI Center (AIC).
{ "abstract": "There is a limited amount of publicly available data to support research in\nmalware analysis technology. Particularly, there are virtually no publicly\navailable datasets generated from rich sandboxes such as Cuckoo/CAPE. The\nbenefit of using dynamic sandboxes is the realistic simulation of file\nexecution in the target machine and obtaining a log of such execution. The\nmachine can be infected by malware hence there is a good chance of capturing\nthe malicious behavior in the execution logs, thus allowing researchers to\nstudy such behavior in detail. Although the subsequent analysis of log\ninformation is extensively covered in industrial cybersecurity backends, to our\nknowledge there has been only limited effort invested in academia to advance\nsuch log analysis capabilities using cutting edge techniques. We make this\nsample dataset available to support designing new machine learning methods for\nmalware detection, especially for automatic detection of generic malicious\nbehavior. The dataset has been collected in cooperation between Avast Software\nand Czech Technical University - AI Center (AIC).", "title": "Avast-CTU Public CAPE Dataset", "url": "http://arxiv.org/abs/2209.03188v1" }
null
null
new_dataset
admin
null
false
null
96829754-3cc8-440a-b232-1ca48c2a3c34
null
Validated
2023-10-04 15:19:51.884298
{ "text_length": 1172 }
0new_dataset
TITLE: Rapid model transfer for medical image segmentation via iterative human-in-the-loop update: from labelled public to unlabelled clinical datasets for multi-organ segmentation in CT ABSTRACT: Despite the remarkable success on medical image analysis with deep learning, it is still under exploration regarding how to rapidly transfer AI models from one dataset to another for clinical applications. This paper presents a novel and generic human-in-the-loop scheme for efficiently transferring a segmentation model from a small-scale labelled dataset to a larger-scale unlabelled dataset for multi-organ segmentation in CT. To achieve this, we propose to use an igniter network which can learn from a small-scale labelled dataset and generate coarse annotations to start the process of human-machine interaction. Then, we use a sustainer network for our larger-scale dataset, and iteratively updated it on the new annotated data. Moreover, we propose a flexible labelling strategy for the annotator to reduce the initial annotation workload. The model performance and the time cost of annotation in each subject evaluated on our private dataset are reported and analysed. The results show that our scheme can not only improve the performance by 19.7% on Dice, but also expedite the cost time of manual labelling from 13.87 min to 1.51 min per CT volume during the model transfer, demonstrating the clinical usefulness with promising potentials.
{ "abstract": "Despite the remarkable success on medical image analysis with deep learning,\nit is still under exploration regarding how to rapidly transfer AI models from\none dataset to another for clinical applications. This paper presents a novel\nand generic human-in-the-loop scheme for efficiently transferring a\nsegmentation model from a small-scale labelled dataset to a larger-scale\nunlabelled dataset for multi-organ segmentation in CT. To achieve this, we\npropose to use an igniter network which can learn from a small-scale labelled\ndataset and generate coarse annotations to start the process of human-machine\ninteraction. Then, we use a sustainer network for our larger-scale dataset, and\niteratively updated it on the new annotated data. Moreover, we propose a\nflexible labelling strategy for the annotator to reduce the initial annotation\nworkload. The model performance and the time cost of annotation in each subject\nevaluated on our private dataset are reported and analysed. The results show\nthat our scheme can not only improve the performance by 19.7% on Dice, but also\nexpedite the cost time of manual labelling from 13.87 min to 1.51 min per CT\nvolume during the model transfer, demonstrating the clinical usefulness with\npromising potentials.", "title": "Rapid model transfer for medical image segmentation via iterative human-in-the-loop update: from labelled public to unlabelled clinical datasets for multi-organ segmentation in CT", "url": "http://arxiv.org/abs/2204.06243v1" }
null
null
no_new_dataset
admin
null
false
null
45870f3b-2658-4bf1-9e19-b0db0de9d331
null
Default
2023-10-04 15:19:51.887085
{ "text_length": 1464 }
1no_new_dataset
TITLE: Simultaneous imputation and disease classification in incomplete medical datasets using Multigraph Geometric Matrix Completion (MGMC) ABSTRACT: Large-scale population-based studies in medicine are a key resource towards better diagnosis, monitoring, and treatment of diseases. They also serve as enablers of clinical decision support systems, in particular Computer Aided Diagnosis (CADx) using machine learning (ML). Numerous ML approaches for CADx have been proposed in literature. However, these approaches assume full data availability, which is not always feasible in clinical data. To account for missing data, incomplete data samples are either removed or imputed, which could lead to data bias and may negatively affect classification performance. As a solution, we propose an end-to-end learning of imputation and disease prediction of incomplete medical datasets via Multigraph Geometric Matrix Completion (MGMC). MGMC uses multiple recurrent graph convolutional networks, where each graph represents an independent population model based on a key clinical meta-feature like age, sex, or cognitive function. Graph signal aggregation from local patient neighborhoods, combined with multigraph signal fusion via self-attention, has a regularizing effect on both matrix reconstruction and classification performance. Our proposed approach is able to impute class relevant features as well as perform accurate classification on two publicly available medical datasets. We empirically show the superiority of our proposed approach in terms of classification and imputation performance when compared with state-of-the-art approaches. MGMC enables disease prediction in multimodal and incomplete medical datasets. These findings could serve as baseline for future CADx approaches which utilize incomplete datasets.
{ "abstract": "Large-scale population-based studies in medicine are a key resource towards\nbetter diagnosis, monitoring, and treatment of diseases. They also serve as\nenablers of clinical decision support systems, in particular Computer Aided\nDiagnosis (CADx) using machine learning (ML). Numerous ML approaches for CADx\nhave been proposed in literature. However, these approaches assume full data\navailability, which is not always feasible in clinical data. To account for\nmissing data, incomplete data samples are either removed or imputed, which\ncould lead to data bias and may negatively affect classification performance.\nAs a solution, we propose an end-to-end learning of imputation and disease\nprediction of incomplete medical datasets via Multigraph Geometric Matrix\nCompletion (MGMC). MGMC uses multiple recurrent graph convolutional networks,\nwhere each graph represents an independent population model based on a key\nclinical meta-feature like age, sex, or cognitive function. Graph signal\naggregation from local patient neighborhoods, combined with multigraph signal\nfusion via self-attention, has a regularizing effect on both matrix\nreconstruction and classification performance. Our proposed approach is able to\nimpute class relevant features as well as perform accurate classification on\ntwo publicly available medical datasets. We empirically show the superiority of\nour proposed approach in terms of classification and imputation performance\nwhen compared with state-of-the-art approaches. MGMC enables disease prediction\nin multimodal and incomplete medical datasets. These findings could serve as\nbaseline for future CADx approaches which utilize incomplete datasets.", "title": "Simultaneous imputation and disease classification in incomplete medical datasets using Multigraph Geometric Matrix Completion (MGMC)", "url": "http://arxiv.org/abs/2005.06935v1" }
null
null
no_new_dataset
admin
null
false
null
aa23eb99-62da-4edd-bc7c-12a8cc865e6d
null
Validated
2023-10-04 15:19:51.899951
{ "text_length": 1841 }
1no_new_dataset
TITLE: ReCO: A Large Scale Chinese Reading Comprehension Dataset on Opinion ABSTRACT: This paper presents the ReCO, a human-curated ChineseReading Comprehension dataset on Opinion. The questions in ReCO are opinion based queries issued to the commercial search engine. The passages are provided by the crowdworkers who extract the support snippet from the retrieved documents. Finally, an abstractive yes/no/uncertain answer was given by the crowdworkers. The release of ReCO consists of 300k questions that to our knowledge is the largest in Chinese reading comprehension. A prominent characteristic of ReCO is that in addition to the original context paragraph, we also provided the support evidence that could be directly used to answer the question. Quality analysis demonstrates the challenge of ReCO that requires various types of reasoning skills, such as causal inference, logical reasoning, etc. Current QA models that perform very well on many question answering problems, such as BERT, only achieve 77% accuracy on this dataset, a large margin behind humans nearly 92% performance, indicating ReCO presents a good challenge for machine reading comprehension. The codes, datasets are freely available at https://github.com/benywon/ReCO.
{ "abstract": "This paper presents the ReCO, a human-curated ChineseReading Comprehension\ndataset on Opinion. The questions in ReCO are opinion based queries issued to\nthe commercial search engine. The passages are provided by the crowdworkers who\nextract the support snippet from the retrieved documents. Finally, an\nabstractive yes/no/uncertain answer was given by the crowdworkers. The release\nof ReCO consists of 300k questions that to our knowledge is the largest in\nChinese reading comprehension. A prominent characteristic of ReCO is that in\naddition to the original context paragraph, we also provided the support\nevidence that could be directly used to answer the question. Quality analysis\ndemonstrates the challenge of ReCO that requires various types of reasoning\nskills, such as causal inference, logical reasoning, etc. Current QA models\nthat perform very well on many question answering problems, such as BERT, only\nachieve 77% accuracy on this dataset, a large margin behind humans nearly 92%\nperformance, indicating ReCO presents a good challenge for machine reading\ncomprehension. The codes, datasets are freely available at\nhttps://github.com/benywon/ReCO.", "title": "ReCO: A Large Scale Chinese Reading Comprehension Dataset on Opinion", "url": "http://arxiv.org/abs/2006.12146v1" }
null
null
new_dataset
admin
null
false
null
43c3912a-b0dd-4f3d-a46e-fe35a4d8408c
null
Validated
2023-10-04 15:19:51.899468
{ "text_length": 1263 }
0new_dataset
TITLE: Composable Core-sets for Diversity Approximation on Multi-Dataset Streams ABSTRACT: Core-sets refer to subsets of data that maximize some function that is commonly a diversity or group requirement. These subsets are used in place of the original data to accomplish a given task with comparable or even enhanced performance if biases are removed. Composable core-sets are core-sets with the property that subsets of the core set can be unioned together to obtain an approximation for the original data; lending themselves to be used for streamed or distributed data. Recent work has focused on the use of core-sets for training machine learning models. Preceding solutions such as CRAIG have been proven to approximate gradient descent while providing a reduced training time. In this paper, we introduce a core-set construction algorithm for constructing composable core-sets to summarize streamed data for use in active learning environments. If combined with techniques such as CRAIG and heuristics to enhance construction speed, composable core-sets could be used for real time training of models when the amount of sensor data is large. We provide empirical analysis by considering extrapolated data for the runtime of such a brute force algorithm. This algorithm is then analyzed for efficiency through averaged empirical regression and key results and improvements are suggested for further research on the topic.
{ "abstract": "Core-sets refer to subsets of data that maximize some function that is\ncommonly a diversity or group requirement. These subsets are used in place of\nthe original data to accomplish a given task with comparable or even enhanced\nperformance if biases are removed. Composable core-sets are core-sets with the\nproperty that subsets of the core set can be unioned together to obtain an\napproximation for the original data; lending themselves to be used for streamed\nor distributed data. Recent work has focused on the use of core-sets for\ntraining machine learning models. Preceding solutions such as CRAIG have been\nproven to approximate gradient descent while providing a reduced training time.\nIn this paper, we introduce a core-set construction algorithm for constructing\ncomposable core-sets to summarize streamed data for use in active learning\nenvironments. If combined with techniques such as CRAIG and heuristics to\nenhance construction speed, composable core-sets could be used for real time\ntraining of models when the amount of sensor data is large. We provide\nempirical analysis by considering extrapolated data for the runtime of such a\nbrute force algorithm. This algorithm is then analyzed for efficiency through\naveraged empirical regression and key results and improvements are suggested\nfor further research on the topic.", "title": "Composable Core-sets for Diversity Approximation on Multi-Dataset Streams", "url": "http://arxiv.org/abs/2308.05878v1" }
null
null
no_new_dataset
admin
null
false
null
44ae20c9-c33a-4147-bfe2-7dabfb59674b
null
Validated
2023-10-04 15:19:51.864229
{ "text_length": 1443 }
1no_new_dataset
TITLE: SyntheticFur dataset for neural rendering ABSTRACT: We introduce a new dataset called SyntheticFur built specifically for machine learning training. The dataset consists of ray traced synthetic fur renders with corresponding rasterized input buffers and simulation data files. We procedurally generated approximately 140,000 images and 15 simulations with Houdini. The images consist of fur groomed with different skin primitives and move with various motions in a predefined set of lighting environments. We also demonstrated how the dataset could be used with neural rendering to significantly improve fur graphics using inexpensive input buffers by training a conditional generative adversarial network with perceptual loss. We hope the availability of such high fidelity fur renders will encourage new advances with neural rendering for a variety of applications.
{ "abstract": "We introduce a new dataset called SyntheticFur built specifically for machine\nlearning training. The dataset consists of ray traced synthetic fur renders\nwith corresponding rasterized input buffers and simulation data files. We\nprocedurally generated approximately 140,000 images and 15 simulations with\nHoudini. The images consist of fur groomed with different skin primitives and\nmove with various motions in a predefined set of lighting environments. We also\ndemonstrated how the dataset could be used with neural rendering to\nsignificantly improve fur graphics using inexpensive input buffers by training\na conditional generative adversarial network with perceptual loss. We hope the\navailability of such high fidelity fur renders will encourage new advances with\nneural rendering for a variety of applications.", "title": "SyntheticFur dataset for neural rendering", "url": "http://arxiv.org/abs/2105.06409v1" }
null
null
new_dataset
admin
null
false
null
1eb85054-81ea-44c4-ad59-bfe6c5ac9d31
null
Validated
2023-10-04 15:19:51.894476
{ "text_length": 891 }
0new_dataset
TITLE: Anomaly Detection and Inter-Sensor Transfer Learning on Smart Manufacturing Datasets ABSTRACT: Smart manufacturing systems are being deployed at a growing rate because of their ability to interpret a wide variety of sensed information and act on the knowledge gleaned from system observations. In many cases, the principal goal of the smart manufacturing system is to rapidly detect (or anticipate) failures to reduce operational cost and eliminate downtime. This often boils down to detecting anomalies within the sensor date acquired from the system. The smart manufacturing application domain poses certain salient technical challenges. In particular, there are often multiple types of sensors with varying capabilities and costs. The sensor data characteristics change with the operating point of the environment or machines, such as, the RPM of the motor. The anomaly detection process therefore has to be calibrated near an operating point. In this paper, we analyze four datasets from sensors deployed from manufacturing testbeds. We evaluate the performance of several traditional and ML-based forecasting models for predicting the time series of sensor data. Then, considering the sparse data from one kind of sensor, we perform transfer learning from a high data rate sensor to perform defect type classification. Taken together, we show that predictive failure classification can be achieved, thus paving the way for predictive maintenance.
{ "abstract": "Smart manufacturing systems are being deployed at a growing rate because of\ntheir ability to interpret a wide variety of sensed information and act on the\nknowledge gleaned from system observations. In many cases, the principal goal\nof the smart manufacturing system is to rapidly detect (or anticipate) failures\nto reduce operational cost and eliminate downtime. This often boils down to\ndetecting anomalies within the sensor date acquired from the system. The smart\nmanufacturing application domain poses certain salient technical challenges. In\nparticular, there are often multiple types of sensors with varying capabilities\nand costs. The sensor data characteristics change with the operating point of\nthe environment or machines, such as, the RPM of the motor. The anomaly\ndetection process therefore has to be calibrated near an operating point. In\nthis paper, we analyze four datasets from sensors deployed from manufacturing\ntestbeds. We evaluate the performance of several traditional and ML-based\nforecasting models for predicting the time series of sensor data. Then,\nconsidering the sparse data from one kind of sensor, we perform transfer\nlearning from a high data rate sensor to perform defect type classification.\nTaken together, we show that predictive failure classification can be achieved,\nthus paving the way for predictive maintenance.", "title": "Anomaly Detection and Inter-Sensor Transfer Learning on Smart Manufacturing Datasets", "url": "http://arxiv.org/abs/2206.06355v1" }
null
null
no_new_dataset
admin
null
false
null
87794a9a-f4f3-44b5-9795-9c7573fa088d
null
Validated
2023-10-04 15:19:51.885795
{ "text_length": 1475 }
1no_new_dataset
TITLE: A Survey of Historical Document Image Datasets ABSTRACT: This paper presents a systematic literature review of image datasets for document image analysis, focusing on historical documents, such as handwritten manuscripts and early prints. Finding appropriate datasets for historical document analysis is a crucial prerequisite to facilitate research using different machine learning algorithms. However, because of the very large variety of the actual data (e.g., scripts, tasks, dates, support systems, and amount of deterioration), the different formats for data and label representation, and the different evaluation processes and benchmarks, finding appropriate datasets is a difficult task. This work fills this gap, presenting a meta-study on existing datasets. After a systematic selection process (according to PRISMA guidelines), we select 65 studies that are chosen based on different factors, such as the year of publication, number of methods implemented in the article, reliability of the chosen algorithms, dataset size, and journal outlet. We summarize each study by assigning it to one of three pre-defined tasks: document classification, layout structure, or content analysis. We present the statistics, document type, language, tasks, input visual aspects, and ground truth information for every dataset. In addition, we provide the benchmark tasks and results from these papers or recent competitions. We further discuss gaps and challenges in this domain. We advocate for providing conversion tools to common formats (e.g., COCO format for computer vision tasks) and always providing a set of evaluation metrics, instead of just one, to make results comparable across studies.
{ "abstract": "This paper presents a systematic literature review of image datasets for\ndocument image analysis, focusing on historical documents, such as handwritten\nmanuscripts and early prints. Finding appropriate datasets for historical\ndocument analysis is a crucial prerequisite to facilitate research using\ndifferent machine learning algorithms. However, because of the very large\nvariety of the actual data (e.g., scripts, tasks, dates, support systems, and\namount of deterioration), the different formats for data and label\nrepresentation, and the different evaluation processes and benchmarks, finding\nappropriate datasets is a difficult task. This work fills this gap, presenting\na meta-study on existing datasets. After a systematic selection process\n(according to PRISMA guidelines), we select 65 studies that are chosen based on\ndifferent factors, such as the year of publication, number of methods\nimplemented in the article, reliability of the chosen algorithms, dataset size,\nand journal outlet. We summarize each study by assigning it to one of three\npre-defined tasks: document classification, layout structure, or content\nanalysis. We present the statistics, document type, language, tasks, input\nvisual aspects, and ground truth information for every dataset. In addition, we\nprovide the benchmark tasks and results from these papers or recent\ncompetitions. We further discuss gaps and challenges in this domain. We\nadvocate for providing conversion tools to common formats (e.g., COCO format\nfor computer vision tasks) and always providing a set of evaluation metrics,\ninstead of just one, to make results comparable across studies.", "title": "A Survey of Historical Document Image Datasets", "url": "http://arxiv.org/abs/2203.08504v3" }
null
null
no_new_dataset
admin
null
false
null
097287f1-3c5b-48ff-a316-7805c6ccda57
null
Validated
2023-10-04 15:19:51.887692
{ "text_length": 1720 }
1no_new_dataset
TITLE: MRCLens: an MRC Dataset Bias Detection Toolkit ABSTRACT: Many recent neural models have shown remarkable empirical results in Machine Reading Comprehension, but evidence suggests sometimes the models take advantage of dataset biases to predict and fail to generalize on out-of-sample data. While many other approaches have been proposed to address this issue from the computation perspective such as new architectures or training procedures, we believe a method that allows researchers to discover biases, and adjust the data or the models in an earlier stage will be beneficial. Thus, we introduce MRCLens, a toolkit that detects whether biases exist before users train the full model. For the convenience of introducing the toolkit, we also provide a categorization of common biases in MRC.
{ "abstract": "Many recent neural models have shown remarkable empirical results in Machine\nReading Comprehension, but evidence suggests sometimes the models take\nadvantage of dataset biases to predict and fail to generalize on out-of-sample\ndata. While many other approaches have been proposed to address this issue from\nthe computation perspective such as new architectures or training procedures,\nwe believe a method that allows researchers to discover biases, and adjust the\ndata or the models in an earlier stage will be beneficial. Thus, we introduce\nMRCLens, a toolkit that detects whether biases exist before users train the\nfull model. For the convenience of introducing the toolkit, we also provide a\ncategorization of common biases in MRC.", "title": "MRCLens: an MRC Dataset Bias Detection Toolkit", "url": "http://arxiv.org/abs/2207.08943v1" }
null
null
no_new_dataset
admin
null
false
null
298e2a99-0e5b-49a9-935b-ddb37e83be36
null
Validated
2023-10-04 15:19:51.885294
{ "text_length": 816 }
1no_new_dataset
TITLE: X-SRL: A Parallel Cross-Lingual Semantic Role Labeling Dataset ABSTRACT: Even though SRL is researched for many languages, major improvements have mostly been obtained for English, for which more resources are available. In fact, existing multilingual SRL datasets contain disparate annotation styles or come from different domains, hampering generalization in multilingual learning. In this work, we propose a method to automatically construct an SRL corpus that is parallel in four languages: English, French, German, Spanish, with unified predicate and role annotations that are fully comparable across languages. We apply high-quality machine translation to the English CoNLL-09 dataset and use multilingual BERT to project its high-quality annotations to the target languages. We include human-validated test sets that we use to measure the projection quality, and show that projection is denser and more precise than a strong baseline. Finally, we train different SOTA models on our novel corpus for mono- and multilingual SRL, showing that the multilingual annotations improve performance especially for the weaker languages.
{ "abstract": "Even though SRL is researched for many languages, major improvements have\nmostly been obtained for English, for which more resources are available. In\nfact, existing multilingual SRL datasets contain disparate annotation styles or\ncome from different domains, hampering generalization in multilingual learning.\nIn this work, we propose a method to automatically construct an SRL corpus that\nis parallel in four languages: English, French, German, Spanish, with unified\npredicate and role annotations that are fully comparable across languages. We\napply high-quality machine translation to the English CoNLL-09 dataset and use\nmultilingual BERT to project its high-quality annotations to the target\nlanguages. We include human-validated test sets that we use to measure the\nprojection quality, and show that projection is denser and more precise than a\nstrong baseline. Finally, we train different SOTA models on our novel corpus\nfor mono- and multilingual SRL, showing that the multilingual annotations\nimprove performance especially for the weaker languages.", "title": "X-SRL: A Parallel Cross-Lingual Semantic Role Labeling Dataset", "url": "http://arxiv.org/abs/2010.01998v1" }
null
null
new_dataset
admin
null
false
null
76f1a3a7-cdb8-4d43-b7a9-3a3483d12a9f
null
Validated
2023-10-04 15:19:51.897858
{ "text_length": 1156 }
0new_dataset
TITLE: SWSR: A Chinese Dataset and Lexicon for Online Sexism Detection ABSTRACT: Online sexism has become an increasing concern in social media platforms as it has affected the healthy development of the Internet and can have negative effects in society. While research in the sexism detection domain is growing, most of this research focuses on English as the language and on Twitter as the platform. Our objective here is to broaden the scope of this research by considering the Chinese language on Sina Weibo. We propose the first Chinese sexism dataset -- Sina Weibo Sexism Review (SWSR) dataset --, as well as a large Chinese lexicon SexHateLex made of abusive and gender-related terms. We introduce our data collection and annotation process, and provide an exploratory analysis of the dataset characteristics to validate its quality and to show how sexism is manifested in Chinese. The SWSR dataset provides labels at different levels of granularity including (i) sexism or non-sexism, (ii) sexism category and (iii) target type, which can be exploited, among others, for building computational methods to identify and investigate finer-grained gender-related abusive language. We conduct experiments for the three sexism classification tasks making use of state-of-the-art machine learning models. Our results show competitive performance, providing a benchmark for sexism detection in the Chinese language, as well as an error analysis highlighting open challenges needing more research in Chinese NLP. The SWSR dataset and SexHateLex lexicon are publicly available.
{ "abstract": "Online sexism has become an increasing concern in social media platforms as\nit has affected the healthy development of the Internet and can have negative\neffects in society. While research in the sexism detection domain is growing,\nmost of this research focuses on English as the language and on Twitter as the\nplatform. Our objective here is to broaden the scope of this research by\nconsidering the Chinese language on Sina Weibo. We propose the first Chinese\nsexism dataset -- Sina Weibo Sexism Review (SWSR) dataset --, as well as a\nlarge Chinese lexicon SexHateLex made of abusive and gender-related terms. We\nintroduce our data collection and annotation process, and provide an\nexploratory analysis of the dataset characteristics to validate its quality and\nto show how sexism is manifested in Chinese. The SWSR dataset provides labels\nat different levels of granularity including (i) sexism or non-sexism, (ii)\nsexism category and (iii) target type, which can be exploited, among others,\nfor building computational methods to identify and investigate finer-grained\ngender-related abusive language. We conduct experiments for the three sexism\nclassification tasks making use of state-of-the-art machine learning models.\nOur results show competitive performance, providing a benchmark for sexism\ndetection in the Chinese language, as well as an error analysis highlighting\nopen challenges needing more research in Chinese NLP. The SWSR dataset and\nSexHateLex lexicon are publicly available.", "title": "SWSR: A Chinese Dataset and Lexicon for Online Sexism Detection", "url": "http://arxiv.org/abs/2108.03070v1" }
null
null
new_dataset
admin
null
false
null
68d4929e-41b5-4668-8a28-1e60d1bbf527
null
Validated
2023-10-04 15:19:51.893051
{ "text_length": 1592 }
0new_dataset
TITLE: IndicMT Eval: A Dataset to Meta-Evaluate Machine Translation metrics for Indian Languages ABSTRACT: The rapid growth of machine translation (MT) systems has necessitated comprehensive studies to meta-evaluate evaluation metrics being used, which enables a better selection of metrics that best reflect MT quality. Unfortunately, most of the research focuses on high-resource languages, mainly English, the observations for which may not always apply to other languages. Indian languages, having over a billion speakers, are linguistically different from English, and to date, there has not been a systematic study of evaluating MT systems from English into Indian languages. In this paper, we fill this gap by creating an MQM dataset consisting of 7000 fine-grained annotations, spanning 5 Indian languages and 7 MT systems, and use it to establish correlations between annotator scores and scores obtained using existing automatic metrics. Our results show that pre-trained metrics, such as COMET, have the highest correlations with annotator scores. Additionally, we find that the metrics do not adequately capture fluency-based errors in Indian languages, and there is a need to develop metrics focused on Indian languages. We hope that our dataset and analysis will help promote further research in this area.
{ "abstract": "The rapid growth of machine translation (MT) systems has necessitated\ncomprehensive studies to meta-evaluate evaluation metrics being used, which\nenables a better selection of metrics that best reflect MT quality.\nUnfortunately, most of the research focuses on high-resource languages, mainly\nEnglish, the observations for which may not always apply to other languages.\nIndian languages, having over a billion speakers, are linguistically different\nfrom English, and to date, there has not been a systematic study of evaluating\nMT systems from English into Indian languages. In this paper, we fill this gap\nby creating an MQM dataset consisting of 7000 fine-grained annotations,\nspanning 5 Indian languages and 7 MT systems, and use it to establish\ncorrelations between annotator scores and scores obtained using existing\nautomatic metrics. Our results show that pre-trained metrics, such as COMET,\nhave the highest correlations with annotator scores. Additionally, we find that\nthe metrics do not adequately capture fluency-based errors in Indian languages,\nand there is a need to develop metrics focused on Indian languages. We hope\nthat our dataset and analysis will help promote further research in this area.", "title": "IndicMT Eval: A Dataset to Meta-Evaluate Machine Translation metrics for Indian Languages", "url": "http://arxiv.org/abs/2212.10180v2" }
null
null
new_dataset
admin
null
false
null
4feda583-c8b3-49fb-a042-887450455857
null
Validated
2023-10-04 15:19:51.881996
{ "text_length": 1337 }
0new_dataset
TITLE: COMPASS: A Formal Framework and Aggregate Dataset for Generalized Surgical Procedure Modeling ABSTRACT: Purpose: We propose a formal framework for the modeling and segmentation of minimally-invasive surgical tasks using a unified set of motion primitives (MPs) to enable more objective labeling and the aggregation of different datasets. Methods: We model dry-lab surgical tasks as finite state machines, representing how the execution of MPs as the basic surgical actions results in the change of surgical context, which characterizes the physical interactions among tools and objects in the surgical environment. We develop methods for labeling surgical context based on video data and for automatic translation of context to MP labels. We then use our framework to create the COntext and Motion Primitive Aggregate Surgical Set (COMPASS), including six dry-lab surgical tasks from three publicly-available datasets (JIGSAWS, DESK, and ROSMA), with kinematic and video data and context and MP labels. Results: Our context labeling method achieves near-perfect agreement between consensus labels from crowd-sourcing and expert surgeons. Segmentation of tasks to MPs results in the creation of the COMPASS dataset that nearly triples the amount of data for modeling and analysis and enables the generation of separate transcripts for the left and right tools. Conclusion: The proposed framework results in high quality labeling of surgical data based on context and fine-grained MPs. Modeling surgical tasks with MPs enables the aggregation of different datasets and the separate analysis of left and right hands for bimanual coordination assessment. Our formal framework and aggregate dataset can support the development of explainable and multi-granularity models for improved surgical process analysis, skill assessment, error detection, and autonomy.
{ "abstract": "Purpose: We propose a formal framework for the modeling and segmentation of\nminimally-invasive surgical tasks using a unified set of motion primitives\n(MPs) to enable more objective labeling and the aggregation of different\ndatasets.\n Methods: We model dry-lab surgical tasks as finite state machines,\nrepresenting how the execution of MPs as the basic surgical actions results in\nthe change of surgical context, which characterizes the physical interactions\namong tools and objects in the surgical environment. We develop methods for\nlabeling surgical context based on video data and for automatic translation of\ncontext to MP labels. We then use our framework to create the COntext and\nMotion Primitive Aggregate Surgical Set (COMPASS), including six dry-lab\nsurgical tasks from three publicly-available datasets (JIGSAWS, DESK, and\nROSMA), with kinematic and video data and context and MP labels.\n Results: Our context labeling method achieves near-perfect agreement between\nconsensus labels from crowd-sourcing and expert surgeons. Segmentation of tasks\nto MPs results in the creation of the COMPASS dataset that nearly triples the\namount of data for modeling and analysis and enables the generation of separate\ntranscripts for the left and right tools.\n Conclusion: The proposed framework results in high quality labeling of\nsurgical data based on context and fine-grained MPs. Modeling surgical tasks\nwith MPs enables the aggregation of different datasets and the separate\nanalysis of left and right hands for bimanual coordination assessment. Our\nformal framework and aggregate dataset can support the development of\nexplainable and multi-granularity models for improved surgical process\nanalysis, skill assessment, error detection, and autonomy.", "title": "COMPASS: A Formal Framework and Aggregate Dataset for Generalized Surgical Procedure Modeling", "url": "http://arxiv.org/abs/2209.06424v5" }
null
null
new_dataset
admin
null
false
null
7ddec2dd-89f4-475d-9328-c02c883bac86
null
Validated
2023-10-04 15:19:51.884031
{ "text_length": 1884 }
0new_dataset
TITLE: JEMMA: An Extensible Java Dataset for ML4Code Applications ABSTRACT: Machine Learning for Source Code (ML4Code) is an active research field in which extensive experimentation is needed to discover how to best use source code's richly structured information. With this in mind, we introduce JEMMA, an Extensible Java Dataset for ML4Code Applications, which is a large-scale, diverse, and high-quality dataset targeted at ML4Code. Our goal with JEMMA is to lower the barrier to entry in ML4Code by providing the building blocks to experiment with source code models and tasks. JEMMA comes with a considerable amount of pre-processed information such as metadata, representations (e.g., code tokens, ASTs, graphs), and several properties (e.g., metrics, static analysis results) for 50,000 Java projects from the 50KC dataset, with over 1.2 million classes and over 8 million methods. JEMMA is also extensible allowing users to add new properties and representations to the dataset, and evaluate tasks on them. Thus, JEMMA becomes a workbench that researchers can use to experiment with novel representations and tasks operating on source code. To demonstrate the utility of the dataset, we also report results from two empirical studies on our data, ultimately showing that significant work lies ahead in the design of context-aware source code models that can reason over a broader network of source code entities in a software project, the very task that JEMMA is designed to help with.
{ "abstract": "Machine Learning for Source Code (ML4Code) is an active research field in\nwhich extensive experimentation is needed to discover how to best use source\ncode's richly structured information. With this in mind, we introduce JEMMA, an\nExtensible Java Dataset for ML4Code Applications, which is a large-scale,\ndiverse, and high-quality dataset targeted at ML4Code. Our goal with JEMMA is\nto lower the barrier to entry in ML4Code by providing the building blocks to\nexperiment with source code models and tasks. JEMMA comes with a considerable\namount of pre-processed information such as metadata, representations (e.g.,\ncode tokens, ASTs, graphs), and several properties (e.g., metrics, static\nanalysis results) for 50,000 Java projects from the 50KC dataset, with over 1.2\nmillion classes and over 8 million methods. JEMMA is also extensible allowing\nusers to add new properties and representations to the dataset, and evaluate\ntasks on them. Thus, JEMMA becomes a workbench that researchers can use to\nexperiment with novel representations and tasks operating on source code. To\ndemonstrate the utility of the dataset, we also report results from two\nempirical studies on our data, ultimately showing that significant work lies\nahead in the design of context-aware source code models that can reason over a\nbroader network of source code entities in a software project, the very task\nthat JEMMA is designed to help with.", "title": "JEMMA: An Extensible Java Dataset for ML4Code Applications", "url": "http://arxiv.org/abs/2212.09132v1" }
null
null
new_dataset
admin
null
false
null
78df4941-9356-4826-8123-bf0ad6771b8f
null
Validated
2023-10-04 15:19:51.882037
{ "text_length": 1510 }
0new_dataset
TITLE: On the Robustness of Dataset Inference ABSTRACT: Machine learning (ML) models are costly to train as they can require a significant amount of data, computational resources and technical expertise. Thus, they constitute valuable intellectual property that needs protection from adversaries wanting to steal them. Ownership verification techniques allow the victims of model stealing attacks to demonstrate that a suspect model was in fact stolen from theirs. Although a number of ownership verification techniques based on watermarking or fingerprinting have been proposed, most of them fall short either in terms of security guarantees (well-equipped adversaries can evade verification) or computational cost. A fingerprinting technique, Dataset Inference (DI), has been shown to offer better robustness and efficiency than prior methods. The authors of DI provided a correctness proof for linear (suspect) models. However, in a subspace of the same setting, we prove that DI suffers from high false positives (FPs) -- it can incorrectly identify an independent model trained with non-overlapping data from the same distribution as stolen. We further prove that DI also triggers FPs in realistic, non-linear suspect models. We then confirm empirically that DI in the black-box setting leads to FPs, with high confidence. Second, we show that DI also suffers from false negatives (FNs) -- an adversary can fool DI (at the cost of incurring some accuracy loss) by regularising a stolen model's decision boundaries using adversarial training, thereby leading to an FN. To this end, we demonstrate that black-box DI fails to identify a model adversarially trained from a stolen dataset -- the setting where DI is the hardest to evade. Finally, we discuss the implications of our findings, the viability of fingerprinting-based ownership verification in general, and suggest directions for future work.
{ "abstract": "Machine learning (ML) models are costly to train as they can require a\nsignificant amount of data, computational resources and technical expertise.\nThus, they constitute valuable intellectual property that needs protection from\nadversaries wanting to steal them. Ownership verification techniques allow the\nvictims of model stealing attacks to demonstrate that a suspect model was in\nfact stolen from theirs.\n Although a number of ownership verification techniques based on watermarking\nor fingerprinting have been proposed, most of them fall short either in terms\nof security guarantees (well-equipped adversaries can evade verification) or\ncomputational cost. A fingerprinting technique, Dataset Inference (DI), has\nbeen shown to offer better robustness and efficiency than prior methods.\n The authors of DI provided a correctness proof for linear (suspect) models.\nHowever, in a subspace of the same setting, we prove that DI suffers from high\nfalse positives (FPs) -- it can incorrectly identify an independent model\ntrained with non-overlapping data from the same distribution as stolen. We\nfurther prove that DI also triggers FPs in realistic, non-linear suspect\nmodels. We then confirm empirically that DI in the black-box setting leads to\nFPs, with high confidence.\n Second, we show that DI also suffers from false negatives (FNs) -- an\nadversary can fool DI (at the cost of incurring some accuracy loss) by\nregularising a stolen model's decision boundaries using adversarial training,\nthereby leading to an FN. To this end, we demonstrate that black-box DI fails\nto identify a model adversarially trained from a stolen dataset -- the setting\nwhere DI is the hardest to evade.\n Finally, we discuss the implications of our findings, the viability of\nfingerprinting-based ownership verification in general, and suggest directions\nfor future work.", "title": "On the Robustness of Dataset Inference", "url": "http://arxiv.org/abs/2210.13631v3" }
null
null
no_new_dataset
admin
null
false
null
0efab7b7-eeb5-4d56-93e5-6d4aa39164d8
null
Validated
2023-10-04 15:19:51.883326
{ "text_length": 1929 }
1no_new_dataset
TITLE: Towards overcoming data scarcity in materials science: unifying models and datasets with a mixture of experts framework ABSTRACT: While machine learning has emerged in recent years as a useful tool for rapid prediction of materials properties, generating sufficient data to reliably train models without overfitting is still impractical for many applications. Towards overcoming this limitation, we present a general framework for leveraging complementary information across different models and datasets for accurate prediction of data scarce materials properties. Our approach, based on a machine learning paradigm called mixture of experts, outperforms pairwise transfer learning on 16 of 19 materials property regression tasks, performing comparably on the remaining three. Unlike pairwise transfer learning, our framework automatically learns to combine information from multiple source tasks in a single training run, alleviating the need for brute-force experiments to determine which source task to transfer from. The approach also provides an interpretable, model-agnostic, and scalable mechanism to transfer information from an arbitrary number of models and datasets to any downstream property prediction task. We anticipate the performance of our framework will further improve as better model architectures, new pre-training tasks, and larger materials datasets are developed by the community.
{ "abstract": "While machine learning has emerged in recent years as a useful tool for rapid\nprediction of materials properties, generating sufficient data to reliably\ntrain models without overfitting is still impractical for many applications.\nTowards overcoming this limitation, we present a general framework for\nleveraging complementary information across different models and datasets for\naccurate prediction of data scarce materials properties. Our approach, based on\na machine learning paradigm called mixture of experts, outperforms pairwise\ntransfer learning on 16 of 19 materials property regression tasks, performing\ncomparably on the remaining three. Unlike pairwise transfer learning, our\nframework automatically learns to combine information from multiple source\ntasks in a single training run, alleviating the need for brute-force\nexperiments to determine which source task to transfer from. The approach also\nprovides an interpretable, model-agnostic, and scalable mechanism to transfer\ninformation from an arbitrary number of models and datasets to any downstream\nproperty prediction task. We anticipate the performance of our framework will\nfurther improve as better model architectures, new pre-training tasks, and\nlarger materials datasets are developed by the community.", "title": "Towards overcoming data scarcity in materials science: unifying models and datasets with a mixture of experts framework", "url": "http://arxiv.org/abs/2207.13880v1" }
null
null
no_new_dataset
admin
null
false
null
32f7ced4-d135-4cb3-a71f-2670ff96c22f
null
Validated
2023-10-04 15:19:51.885028
{ "text_length": 1430 }
1no_new_dataset
TITLE: Wide-scale Monitoring of Satellite Lifetimes: Pitfalls and a Benchmark Dataset ABSTRACT: An important task within the broader goal of Space Situational Awareness (SSA) is to observe changes in the orbits of satellites, where the data spans thousands of objects over long time scales (decades). The Two-Line Element (TLE) data provided by the North American Aerospace Defense Command is the most comprehensive and widely-available dataset cataloguing the orbits of satellites. This makes it a highly-attractive data source on which to perform this observation. However, when attempting to infer changes in satellite behaviour from TLE data, there are a number of potential pitfalls. These mostly relate to specific features of the TLE data which are not always clearly documented in the data sources or popular software packages for manipulating them. These quirks produce a particularly hazardous data type for researchers from adjacent disciplines (such as anomaly detection or machine learning). We highlight these features of TLE data and the resulting pitfalls in order to save future researchers from being trapped. A seperate, significant, issue is that existing contributions to manoeuvre detection from TLE data evaluate their algorithms on different satellites, making comparison between these methods difficult. Moreover, the ground-truth in these datasets is often poor quality, sometimes being based on subjective human assessment. We therefore release and describe in-depth an open, curated, benchmark dataset containing TLE data for 15 satellites alongside high-quality ground-truth manoeuvre timestamps.
{ "abstract": "An important task within the broader goal of Space Situational Awareness\n(SSA) is to observe changes in the orbits of satellites, where the data spans\nthousands of objects over long time scales (decades). The Two-Line Element\n(TLE) data provided by the North American Aerospace Defense Command is the most\ncomprehensive and widely-available dataset cataloguing the orbits of\nsatellites. This makes it a highly-attractive data source on which to perform\nthis observation. However, when attempting to infer changes in satellite\nbehaviour from TLE data, there are a number of potential pitfalls. These mostly\nrelate to specific features of the TLE data which are not always clearly\ndocumented in the data sources or popular software packages for manipulating\nthem. These quirks produce a particularly hazardous data type for researchers\nfrom adjacent disciplines (such as anomaly detection or machine learning). We\nhighlight these features of TLE data and the resulting pitfalls in order to\nsave future researchers from being trapped. A seperate, significant, issue is\nthat existing contributions to manoeuvre detection from TLE data evaluate their\nalgorithms on different satellites, making comparison between these methods\ndifficult. Moreover, the ground-truth in these datasets is often poor quality,\nsometimes being based on subjective human assessment. We therefore release and\ndescribe in-depth an open, curated, benchmark dataset containing TLE data for\n15 satellites alongside high-quality ground-truth manoeuvre timestamps.", "title": "Wide-scale Monitoring of Satellite Lifetimes: Pitfalls and a Benchmark Dataset", "url": "http://arxiv.org/abs/2212.08662v1" }
null
null
new_dataset
admin
null
false
null
835672b4-b22c-483a-8ce2-db7d4f2ea2b0
null
Validated
2023-10-04 15:19:51.882120
{ "text_length": 1642 }
0new_dataset
TITLE: VISEM-Tracking, a human spermatozoa tracking dataset ABSTRACT: A manual assessment of sperm motility requires microscopy observation, which is challenging due to the fast-moving spermatozoa in the field of view. To obtain correct results, manual evaluation requires extensive training. Therefore, computer-assisted sperm analysis (CASA) has become increasingly used in clinics. Despite this, more data is needed to train supervised machine learning approaches in order to improve accuracy and reliability in the assessment of sperm motility and kinematics. In this regard, we provide a dataset called VISEM-Tracking with 20 video recordings of 30 seconds (comprising 29,196 frames) of wet sperm preparations with manually annotated bounding-box coordinates and a set of sperm characteristics analyzed by experts in the domain. In addition to the annotated data, we provide unlabeled video clips for easy-to-use access and analysis of the data via methods such as self- or unsupervised learning. As part of this paper, we present baseline sperm detection performances using the YOLOv5 deep learning (DL) model trained on the VISEM-Tracking dataset. As a result, we show that the dataset can be used to train complex DL models to analyze spermatozoa.
{ "abstract": "A manual assessment of sperm motility requires microscopy observation, which\nis challenging due to the fast-moving spermatozoa in the field of view. To\nobtain correct results, manual evaluation requires extensive training.\nTherefore, computer-assisted sperm analysis (CASA) has become increasingly used\nin clinics. Despite this, more data is needed to train supervised machine\nlearning approaches in order to improve accuracy and reliability in the\nassessment of sperm motility and kinematics. In this regard, we provide a\ndataset called VISEM-Tracking with 20 video recordings of 30 seconds\n(comprising 29,196 frames) of wet sperm preparations with manually annotated\nbounding-box coordinates and a set of sperm characteristics analyzed by experts\nin the domain. In addition to the annotated data, we provide unlabeled video\nclips for easy-to-use access and analysis of the data via methods such as self-\nor unsupervised learning. As part of this paper, we present baseline sperm\ndetection performances using the YOLOv5 deep learning (DL) model trained on the\nVISEM-Tracking dataset. As a result, we show that the dataset can be used to\ntrain complex DL models to analyze spermatozoa.", "title": "VISEM-Tracking, a human spermatozoa tracking dataset", "url": "http://arxiv.org/abs/2212.02842v5" }
null
null
new_dataset
admin
null
false
null
1f5fcec2-e410-4735-a20f-cd894d4c4d78
null
Validated
2023-10-04 15:19:51.882368
{ "text_length": 1272 }
0new_dataset
TITLE: Balanced Split: A new train-test data splitting strategy for imbalanced datasets ABSTRACT: Classification data sets with skewed class proportions are called imbalanced. Class imbalance is a problem since most machine learning classification algorithms are built with an assumption of equal representation of all classes in the training dataset. Therefore to counter the class imbalance problem, many algorithm-level and data-level approaches have been developed. These mainly include ensemble learning and data augmentation techniques. This paper shows a new way to counter the class imbalance problem through a new data-splitting strategy called balanced split. Data splitting can play an important role in correctly classifying imbalanced datasets. We show that the commonly used data-splitting strategies have some disadvantages, and our proposed balanced split has solved those problems.
{ "abstract": "Classification data sets with skewed class proportions are called imbalanced.\nClass imbalance is a problem since most machine learning classification\nalgorithms are built with an assumption of equal representation of all classes\nin the training dataset. Therefore to counter the class imbalance problem, many\nalgorithm-level and data-level approaches have been developed. These mainly\ninclude ensemble learning and data augmentation techniques. This paper shows a\nnew way to counter the class imbalance problem through a new data-splitting\nstrategy called balanced split. Data splitting can play an important role in\ncorrectly classifying imbalanced datasets. We show that the commonly used\ndata-splitting strategies have some disadvantages, and our proposed balanced\nsplit has solved those problems.", "title": "Balanced Split: A new train-test data splitting strategy for imbalanced datasets", "url": "http://arxiv.org/abs/2212.11116v1" }
null
null
no_new_dataset
admin
null
false
null
e3e8f39a-0772-453f-ab31-89e11f266b2a
null
Validated
2023-10-04 15:19:51.882072
{ "text_length": 915 }
1no_new_dataset
TITLE: Efficient and Multiply Robust Risk Estimation under General Forms of Dataset Shift ABSTRACT: Statistical machine learning methods often face the challenge of limited data available from the population of interest. One remedy is to leverage data from auxiliary source populations, which share some conditional distributions or are linked in other ways with the target domain. Techniques leveraging such \emph{dataset shift} conditions are known as \emph{domain adaptation} or \emph{transfer learning}. Despite extensive literature on dataset shift, limited works address how to efficiently use the auxiliary populations to improve the accuracy of risk evaluation for a given machine learning task in the target population. In this paper, we study the general problem of efficiently estimating target population risk under various dataset shift conditions, leveraging semiparametric efficiency theory. We consider a general class of dataset shift conditions, which includes three popular conditions -- covariate, label and concept shift -- as special cases. We allow for partially non-overlapping support between the source and target populations. We develop efficient and multiply robust estimators along with a straightforward specification test of these dataset shift conditions. We also derive efficiency bounds for two other dataset shift conditions, posterior drift and location-scale shift. Simulation studies support the efficiency gains due to leveraging plausible dataset shift conditions.
{ "abstract": "Statistical machine learning methods often face the challenge of limited data\navailable from the population of interest. One remedy is to leverage data from\nauxiliary source populations, which share some conditional distributions or are\nlinked in other ways with the target domain. Techniques leveraging such\n\\emph{dataset shift} conditions are known as \\emph{domain adaptation} or\n\\emph{transfer learning}. Despite extensive literature on dataset shift,\nlimited works address how to efficiently use the auxiliary populations to\nimprove the accuracy of risk evaluation for a given machine learning task in\nthe target population.\n In this paper, we study the general problem of efficiently estimating target\npopulation risk under various dataset shift conditions, leveraging\nsemiparametric efficiency theory. We consider a general class of dataset shift\nconditions, which includes three popular conditions -- covariate, label and\nconcept shift -- as special cases. We allow for partially non-overlapping\nsupport between the source and target populations. We develop efficient and\nmultiply robust estimators along with a straightforward specification test of\nthese dataset shift conditions. We also derive efficiency bounds for two other\ndataset shift conditions, posterior drift and location-scale shift. Simulation\nstudies support the efficiency gains due to leveraging plausible dataset shift\nconditions.", "title": "Efficient and Multiply Robust Risk Estimation under General Forms of Dataset Shift", "url": "http://arxiv.org/abs/2306.16406v2" }
null
null
no_new_dataset
admin
null
false
null
db59162e-5ae6-4d8b-9d3d-d73bb48089df
null
Validated
2023-10-04 15:19:51.868849
{ "text_length": 1523 }
1no_new_dataset
TITLE: Collecting Interactive Multi-modal Datasets for Grounded Language Understanding ABSTRACT: Human intelligence can remarkably adapt quickly to new tasks and environments. Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following provided natural language instructions. To facilitate research which can enable similar capabilities in machines, we made the following contributions (1) formalized the collaborative embodied agent using natural language task; (2) developed a tool for extensive and scalable data collection; and (3) collected the first dataset for interactive grounded language understanding.
{ "abstract": "Human intelligence can remarkably adapt quickly to new tasks and\nenvironments. Starting from a very young age, humans acquire new skills and\nlearn how to solve new tasks either by imitating the behavior of others or by\nfollowing provided natural language instructions. To facilitate research which\ncan enable similar capabilities in machines, we made the following\ncontributions (1) formalized the collaborative embodied agent using natural\nlanguage task; (2) developed a tool for extensive and scalable data collection;\nand (3) collected the first dataset for interactive grounded language\nunderstanding.", "title": "Collecting Interactive Multi-modal Datasets for Grounded Language Understanding", "url": "http://arxiv.org/abs/2211.06552v3" }
null
null
new_dataset
admin
null
false
null
69022dbd-cb1f-463a-a81d-54b5c6f9fad7
null
Validated
2023-10-04 15:19:51.882866
{ "text_length": 719 }
0new_dataset
TITLE: AtomSets -- A Hierarchical Transfer Learning Framework for Small and Large Materials Datasets ABSTRACT: Predicting materials properties from composition or structure is of great interest to the materials science community. Deep learning has recently garnered considerable interest in materials predictive tasks with low model errors when dealing with large materials data. However, deep learning models suffer in the small data regime that is common in materials science. Here we leverage the transfer learning concept and the graph network deep learning framework and develop the AtomSets machine learning framework for consistent high model accuracy at both small and large materials data. The AtomSets models can work with both compositional and structural materials data. By combining with transfer learned features from graph networks, they can achieve state-of-the-art accuracy from using small compositional data (<400) to large structural data (>130,000). The AtomSets models show much lower errors than the state-of-the-art graph network models at small data limits and the classical machine learning models at large data limits. They also transfer better in the simulated materials discovery process where the targeted materials have property values out of the training data limits. The models require minimal domain knowledge inputs and are free from feature engineering. The presented AtomSets model framework opens new routes for machine learning-assisted materials design and discovery.
{ "abstract": "Predicting materials properties from composition or structure is of great\ninterest to the materials science community. Deep learning has recently\ngarnered considerable interest in materials predictive tasks with low model\nerrors when dealing with large materials data. However, deep learning models\nsuffer in the small data regime that is common in materials science. Here we\nleverage the transfer learning concept and the graph network deep learning\nframework and develop the AtomSets machine learning framework for consistent\nhigh model accuracy at both small and large materials data. The AtomSets models\ncan work with both compositional and structural materials data. By combining\nwith transfer learned features from graph networks, they can achieve\nstate-of-the-art accuracy from using small compositional data (<400) to large\nstructural data (>130,000). The AtomSets models show much lower errors than the\nstate-of-the-art graph network models at small data limits and the classical\nmachine learning models at large data limits. They also transfer better in the\nsimulated materials discovery process where the targeted materials have\nproperty values out of the training data limits. The models require minimal\ndomain knowledge inputs and are free from feature engineering. The presented\nAtomSets model framework opens new routes for machine learning-assisted\nmaterials design and discovery.", "title": "AtomSets -- A Hierarchical Transfer Learning Framework for Small and Large Materials Datasets", "url": "http://arxiv.org/abs/2102.02401v2" }
null
null
no_new_dataset
admin
null
false
null
17846108-5358-4233-9534-d08e1c66bc1a
null
Validated
2023-10-04 15:19:51.896018
{ "text_length": 1524 }
1no_new_dataset
TITLE: InDL: A New Dataset and Benchmark for In-Diagram Logic Interpretation based on Visual Illusion ABSTRACT: This paper introduces a novel approach to evaluating deep learning models' capacity for in-diagram logic interpretation. Leveraging the intriguing realm of visual illusions, we establish a unique dataset, InDL, designed to rigorously test and benchmark these models. Deep learning has witnessed remarkable progress in domains such as computer vision and natural language processing. However, models often stumble in tasks requiring logical reasoning due to their inherent 'black box' characteristics, which obscure the decision-making process. Our work presents a new lens to understand these models better by focusing on their handling of visual illusions -- a complex interplay of perception and logic. We utilize six classic geometric optical illusions to create a comparative framework between human and machine visual perception. This methodology offers a quantifiable measure to rank models, elucidating potential weaknesses and providing actionable insights for model improvements. Our experimental results affirm the efficacy of our benchmarking strategy, demonstrating its ability to effectively rank models based on their logic interpretation ability. As part of our commitment to reproducible research, the source code and datasets will be made publicly available at https://github.com/rabbit-magic-wh/InDL
{ "abstract": "This paper introduces a novel approach to evaluating deep learning models'\ncapacity for in-diagram logic interpretation. Leveraging the intriguing realm\nof visual illusions, we establish a unique dataset, InDL, designed to\nrigorously test and benchmark these models. Deep learning has witnessed\nremarkable progress in domains such as computer vision and natural language\nprocessing. However, models often stumble in tasks requiring logical reasoning\ndue to their inherent 'black box' characteristics, which obscure the\ndecision-making process. Our work presents a new lens to understand these\nmodels better by focusing on their handling of visual illusions -- a complex\ninterplay of perception and logic. We utilize six classic geometric optical\nillusions to create a comparative framework between human and machine visual\nperception. This methodology offers a quantifiable measure to rank models,\nelucidating potential weaknesses and providing actionable insights for model\nimprovements. Our experimental results affirm the efficacy of our benchmarking\nstrategy, demonstrating its ability to effectively rank models based on their\nlogic interpretation ability. As part of our commitment to reproducible\nresearch, the source code and datasets will be made publicly available at\nhttps://github.com/rabbit-magic-wh/InDL", "title": "InDL: A New Dataset and Benchmark for In-Diagram Logic Interpretation based on Visual Illusion", "url": "http://arxiv.org/abs/2305.17716v4" }
null
null
new_dataset
admin
null
false
null
44c5093f-f723-4a1f-9657-35bb72f49c9d
null
Validated
2023-10-04 15:19:51.876624
{ "text_length": 1446 }
0new_dataset
TITLE: An Automatically Created Novel Bug Dataset and its Validation in Bug Prediction ABSTRACT: Bugs are inescapable during software development due to frequent code changes, tight deadlines, etc.; therefore, it is important to have tools to find these errors. One way of performing bug identification is to analyze the characteristics of buggy source code elements from the past and predict the present ones based on the same characteristics, using e.g. machine learning models. To support model building tasks, code elements and their characteristics are collected in so-called bug datasets which serve as the input for learning. We present the \emph{BugHunter Dataset}: a novel kind of automatically constructed and freely available bug dataset containing code elements (files, classes, methods) with a wide set of code metrics and bug information. Other available bug datasets follow the traditional approach of gathering the characteristics of all source code elements (buggy and non-buggy) at only one or more pre-selected release versions of the code. Our approach, on the other hand, captures the buggy and the fixed states of the same source code elements from the narrowest timeframe we can identify for a bug's presence, regardless of release versions. To show the usefulness of the new dataset, we built and evaluated bug prediction models and achieved F-measure values over 0.74.
{ "abstract": "Bugs are inescapable during software development due to frequent code\nchanges, tight deadlines, etc.; therefore, it is important to have tools to\nfind these errors. One way of performing bug identification is to analyze the\ncharacteristics of buggy source code elements from the past and predict the\npresent ones based on the same characteristics, using e.g. machine learning\nmodels. To support model building tasks, code elements and their\ncharacteristics are collected in so-called bug datasets which serve as the\ninput for learning.\n We present the \\emph{BugHunter Dataset}: a novel kind of automatically\nconstructed and freely available bug dataset containing code elements (files,\nclasses, methods) with a wide set of code metrics and bug information. Other\navailable bug datasets follow the traditional approach of gathering the\ncharacteristics of all source code elements (buggy and non-buggy) at only one\nor more pre-selected release versions of the code. Our approach, on the other\nhand, captures the buggy and the fixed states of the same source code elements\nfrom the narrowest timeframe we can identify for a bug's presence, regardless\nof release versions. To show the usefulness of the new dataset, we built and\nevaluated bug prediction models and achieved F-measure values over 0.74.", "title": "An Automatically Created Novel Bug Dataset and its Validation in Bug Prediction", "url": "http://arxiv.org/abs/2006.10158v1" }
null
null
new_dataset
admin
null
false
null
e32c35ae-50d5-42b8-bfaf-fc1321e89f6d
null
Validated
2023-10-04 15:19:51.899516
{ "text_length": 1412 }
0new_dataset
TITLE: MT-GenEval: A Counterfactual and Contextual Dataset for Evaluating Gender Accuracy in Machine Translation ABSTRACT: As generic machine translation (MT) quality has improved, the need for targeted benchmarks that explore fine-grained aspects of quality has increased. In particular, gender accuracy in translation can have implications in terms of output fluency, translation accuracy, and ethics. In this paper, we introduce MT-GenEval, a benchmark for evaluating gender accuracy in translation from English into eight widely-spoken languages. MT-GenEval complements existing benchmarks by providing realistic, gender-balanced, counterfactual data in eight language pairs where the gender of individuals is unambiguous in the input segment, including multi-sentence segments requiring inter-sentential gender agreement. Our data and code is publicly available under a CC BY SA 3.0 license.
{ "abstract": "As generic machine translation (MT) quality has improved, the need for\ntargeted benchmarks that explore fine-grained aspects of quality has increased.\nIn particular, gender accuracy in translation can have implications in terms of\noutput fluency, translation accuracy, and ethics. In this paper, we introduce\nMT-GenEval, a benchmark for evaluating gender accuracy in translation from\nEnglish into eight widely-spoken languages. MT-GenEval complements existing\nbenchmarks by providing realistic, gender-balanced, counterfactual data in\neight language pairs where the gender of individuals is unambiguous in the\ninput segment, including multi-sentence segments requiring inter-sentential\ngender agreement. Our data and code is publicly available under a CC BY SA 3.0\nlicense.", "title": "MT-GenEval: A Counterfactual and Contextual Dataset for Evaluating Gender Accuracy in Machine Translation", "url": "http://arxiv.org/abs/2211.01355v1" }
null
null
new_dataset
admin
null
false
null
ab4f1aad-3341-467c-ab61-51da43758828
null
Validated
2023-10-04 15:19:51.883208
{ "text_length": 913 }
0new_dataset
TITLE: MorisienMT: A Dataset for Mauritian Creole Machine Translation ABSTRACT: In this paper, we describe MorisienMT, a dataset for benchmarking machine translation quality of Mauritian Creole. Mauritian Creole (Morisien) is the lingua franca of the Republic of Mauritius and is a French-based creole language. MorisienMT consists of a parallel corpus between English and Morisien, French and Morisien and a monolingual corpus for Morisien. We first give an overview of Morisien and then describe the steps taken to create the corpora and, from it, the training and evaluation splits. Thereafter, we establish a variety of baseline models using the created parallel corpora as well as large French--English corpora for transfer learning. We release our datasets publicly for research purposes and hope that this spurs research for Morisien machine translation.
{ "abstract": "In this paper, we describe MorisienMT, a dataset for benchmarking machine\ntranslation quality of Mauritian Creole. Mauritian Creole (Morisien) is the\nlingua franca of the Republic of Mauritius and is a French-based creole\nlanguage. MorisienMT consists of a parallel corpus between English and\nMorisien, French and Morisien and a monolingual corpus for Morisien. We first\ngive an overview of Morisien and then describe the steps taken to create the\ncorpora and, from it, the training and evaluation splits. Thereafter, we\nestablish a variety of baseline models using the created parallel corpora as\nwell as large French--English corpora for transfer learning. We release our\ndatasets publicly for research purposes and hope that this spurs research for\nMorisien machine translation.", "title": "MorisienMT: A Dataset for Mauritian Creole Machine Translation", "url": "http://arxiv.org/abs/2206.02421v1" }
null
null
new_dataset
admin
null
false
null
17ea1f92-18af-4a52-ad55-394522c74156
null
Validated
2023-10-04 15:19:51.885990
{ "text_length": 878 }
0new_dataset
TITLE: Dark solitons in Bose-Einstein condensates: a dataset for many-body physics research ABSTRACT: We establish a dataset of over $1.6\times10^4$ experimental images of Bose--Einstein condensates containing solitonic excitations to enable machine learning (ML) for many-body physics research. About $33~\%$ of this dataset has manually assigned and carefully curated labels. The remainder is automatically labeled using SolDet -- an implementation of a physics-informed ML data analysis framework -- consisting of a convolutional-neural-network-based classifier and OD as well as a statistically motivated physics-informed classifier and a quality metric. This technical note constitutes the definitive reference of the dataset, providing an opportunity for the data science community to develop more sophisticated analysis tools, to further understand nonlinear many-body physics, and even advance cold atom experiments.
{ "abstract": "We establish a dataset of over $1.6\\times10^4$ experimental images of\nBose--Einstein condensates containing solitonic excitations to enable machine\nlearning (ML) for many-body physics research. About $33~\\%$ of this dataset has\nmanually assigned and carefully curated labels. The remainder is automatically\nlabeled using SolDet -- an implementation of a physics-informed ML data\nanalysis framework -- consisting of a convolutional-neural-network-based\nclassifier and OD as well as a statistically motivated physics-informed\nclassifier and a quality metric. This technical note constitutes the definitive\nreference of the dataset, providing an opportunity for the data science\ncommunity to develop more sophisticated analysis tools, to further understand\nnonlinear many-body physics, and even advance cold atom experiments.", "title": "Dark solitons in Bose-Einstein condensates: a dataset for many-body physics research", "url": "http://arxiv.org/abs/2205.09114v2" }
null
null
new_dataset
admin
null
false
null
da0b69b2-3d55-4aaf-9740-5392412c8e77
null
Validated
2023-10-04 15:19:51.886420
{ "text_length": 941 }
0new_dataset
TITLE: DDXPlus: A New Dataset For Automatic Medical Diagnosis ABSTRACT: There has been a rapidly growing interest in Automatic Symptom Detection (ASD) and Automatic Diagnosis (AD) systems in the machine learning research literature, aiming to assist doctors in telemedicine services. These systems are designed to interact with patients, collect evidence about their symptoms and relevant antecedents, and possibly make predictions about the underlying diseases. Doctors would review the interactions, including the evidence and the predictions, collect if necessary additional information from patients, before deciding on next steps. Despite recent progress in this area, an important piece of doctors' interactions with patients is missing in the design of these systems, namely the differential diagnosis. Its absence is largely due to the lack of datasets that include such information for models to train on. In this work, we present a large-scale synthetic dataset of roughly 1.3 million patients that includes a differential diagnosis, along with the ground truth pathology, symptoms and antecedents for each patient. Unlike existing datasets which only contain binary symptoms and antecedents, this dataset also contains categorical and multi-choice symptoms and antecedents useful for efficient data collection. Moreover, some symptoms are organized in a hierarchy, making it possible to design systems able to interact with patients in a logical way. As a proof-of-concept, we extend two existing AD and ASD systems to incorporate the differential diagnosis, and provide empirical evidence that using differentials as training signals is essential for the efficiency of such systems or for helping doctors better understand the reasoning of those systems.
{ "abstract": "There has been a rapidly growing interest in Automatic Symptom Detection\n(ASD) and Automatic Diagnosis (AD) systems in the machine learning research\nliterature, aiming to assist doctors in telemedicine services. These systems\nare designed to interact with patients, collect evidence about their symptoms\nand relevant antecedents, and possibly make predictions about the underlying\ndiseases. Doctors would review the interactions, including the evidence and the\npredictions, collect if necessary additional information from patients, before\ndeciding on next steps. Despite recent progress in this area, an important\npiece of doctors' interactions with patients is missing in the design of these\nsystems, namely the differential diagnosis. Its absence is largely due to the\nlack of datasets that include such information for models to train on. In this\nwork, we present a large-scale synthetic dataset of roughly 1.3 million\npatients that includes a differential diagnosis, along with the ground truth\npathology, symptoms and antecedents for each patient. Unlike existing datasets\nwhich only contain binary symptoms and antecedents, this dataset also contains\ncategorical and multi-choice symptoms and antecedents useful for efficient data\ncollection. Moreover, some symptoms are organized in a hierarchy, making it\npossible to design systems able to interact with patients in a logical way. As\na proof-of-concept, we extend two existing AD and ASD systems to incorporate\nthe differential diagnosis, and provide empirical evidence that using\ndifferentials as training signals is essential for the efficiency of such\nsystems or for helping doctors better understand the reasoning of those\nsystems.", "title": "DDXPlus: A New Dataset For Automatic Medical Diagnosis", "url": "http://arxiv.org/abs/2205.09148v3" }
null
null
new_dataset
admin
null
false
null
3a493bff-7fb5-45c0-99a8-92edcc687c7c
null
Validated
2023-10-04 15:19:51.886397
{ "text_length": 1783 }
0new_dataset
TITLE: Gotham Testbed: a Reproducible IoT Testbed for Security Experiments and Dataset Generation ABSTRACT: The growing adoption of the Internet of Things (IoT) has brought a significant increase in attacks targeting those devices. Machine learning (ML) methods have shown promising results for intrusion detection; however, the scarcity of IoT datasets remains a limiting factor in developing ML-based security systems for IoT scenarios. Static datasets get outdated due to evolving IoT architectures and threat landscape; meanwhile, the testbeds used to generate them are rarely published. This paper presents the Gotham testbed, a reproducible and flexible security testbed extendable to accommodate new emulated devices, services or attackers. Gotham is used to build an IoT scenario composed of 100 emulated devices communicating via MQTT, CoAP and RTSP protocols, among others, in a topology composed of 30 switches and 10 routers. The scenario presents three threat actors, including the entire Mirai botnet lifecycle and additional red-teaming tools performing DoS, scanning, and attacks targeting IoT protocols. The testbed has many purposes, including a cyber range, testing security solutions, and capturing network and application data to generate datasets. We hope that researchers can leverage and adapt Gotham to include other devices, state-of-the-art attacks and topologies to share scenarios and datasets that reflect the current IoT settings and threat landscape.
{ "abstract": "The growing adoption of the Internet of Things (IoT) has brought a\nsignificant increase in attacks targeting those devices. Machine learning (ML)\nmethods have shown promising results for intrusion detection; however, the\nscarcity of IoT datasets remains a limiting factor in developing ML-based\nsecurity systems for IoT scenarios. Static datasets get outdated due to\nevolving IoT architectures and threat landscape; meanwhile, the testbeds used\nto generate them are rarely published. This paper presents the Gotham testbed,\na reproducible and flexible security testbed extendable to accommodate new\nemulated devices, services or attackers. Gotham is used to build an IoT\nscenario composed of 100 emulated devices communicating via MQTT, CoAP and RTSP\nprotocols, among others, in a topology composed of 30 switches and 10 routers.\nThe scenario presents three threat actors, including the entire Mirai botnet\nlifecycle and additional red-teaming tools performing DoS, scanning, and\nattacks targeting IoT protocols. The testbed has many purposes, including a\ncyber range, testing security solutions, and capturing network and application\ndata to generate datasets. We hope that researchers can leverage and adapt\nGotham to include other devices, state-of-the-art attacks and topologies to\nshare scenarios and datasets that reflect the current IoT settings and threat\nlandscape.", "title": "Gotham Testbed: a Reproducible IoT Testbed for Security Experiments and Dataset Generation", "url": "http://arxiv.org/abs/2207.13981v3" }
null
null
no_new_dataset
admin
null
false
null
e2b29606-5173-4004-9152-6b35ae499189
null
Validated
2023-10-04 15:19:51.885003
{ "text_length": 1499 }
1no_new_dataset
TITLE: A Review on Text-Based Emotion Detection -- Techniques, Applications, Datasets, and Future Directions ABSTRACT: Artificial Intelligence (AI) has been used for processing data to make decisions, interact with humans, and understand their feelings and emotions. With the advent of the internet, people share and express their thoughts on day-to-day activities and global and local events through text messaging applications. Hence, it is essential for machines to understand emotions in opinions, feedback, and textual dialogues to provide emotionally aware responses to users in today's online world. The field of text-based emotion detection (TBED) is advancing to provide automated solutions to various applications, such as businesses, and finances, to name a few. TBED has gained a lot of attention in recent times. The paper presents a systematic literature review of the existing literature published between 2005 to 2021 in TBED. This review has meticulously examined 63 research papers from IEEE, Science Direct, Scopus, and Web of Science databases to address four primary research questions. It also reviews the different applications of TBED across various research domains and highlights its use. An overview of various emotion models, techniques, feature extraction methods, datasets, and research challenges with future directions has also been represented.
{ "abstract": "Artificial Intelligence (AI) has been used for processing data to make\ndecisions, interact with humans, and understand their feelings and emotions.\nWith the advent of the internet, people share and express their thoughts on\nday-to-day activities and global and local events through text messaging\napplications. Hence, it is essential for machines to understand emotions in\nopinions, feedback, and textual dialogues to provide emotionally aware\nresponses to users in today's online world. The field of text-based emotion\ndetection (TBED) is advancing to provide automated solutions to various\napplications, such as businesses, and finances, to name a few. TBED has gained\na lot of attention in recent times. The paper presents a systematic literature\nreview of the existing literature published between 2005 to 2021 in TBED. This\nreview has meticulously examined 63 research papers from IEEE, Science Direct,\nScopus, and Web of Science databases to address four primary research\nquestions. It also reviews the different applications of TBED across various\nresearch domains and highlights its use. An overview of various emotion models,\ntechniques, feature extraction methods, datasets, and research challenges with\nfuture directions has also been represented.", "title": "A Review on Text-Based Emotion Detection -- Techniques, Applications, Datasets, and Future Directions", "url": "http://arxiv.org/abs/2205.03235v1" }
null
null
no_new_dataset
admin
null
false
null
cf72d805-c281-4628-81c8-54cb3d37966b
null
Validated
2023-10-04 15:19:51.886846
{ "text_length": 1394 }
1no_new_dataset
TITLE: ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches ABSTRACT: Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding, and requires careful hyperparameter tuning, potentially leading to suboptimal robustness evaluations. To overcome these issues, we propose ImageNet-Patch, a dataset to benchmark machine-learning models against adversarial patches. It consists of a set of patches, optimized to generalize across different models, and readily applicable to ImageNet data after preprocessing them with affine transformations. This process enables an approximate yet faster robustness evaluation, leveraging the transferability of adversarial perturbations. We showcase the usefulness of this dataset by testing the effectiveness of the computed patches against 127 models. We conclude by discussing how our dataset could be used as a benchmark for robustness, and how our methodology can be generalized to other domains. We open source our dataset and evaluation code at https://github.com/pralab/ImageNet-Patch.
{ "abstract": "Adversarial patches are optimized contiguous pixel blocks in an input image\nthat cause a machine-learning model to misclassify it. However, their\noptimization is computationally demanding, and requires careful hyperparameter\ntuning, potentially leading to suboptimal robustness evaluations. To overcome\nthese issues, we propose ImageNet-Patch, a dataset to benchmark\nmachine-learning models against adversarial patches. It consists of a set of\npatches, optimized to generalize across different models, and readily\napplicable to ImageNet data after preprocessing them with affine\ntransformations. This process enables an approximate yet faster robustness\nevaluation, leveraging the transferability of adversarial perturbations. We\nshowcase the usefulness of this dataset by testing the effectiveness of the\ncomputed patches against 127 models. We conclude by discussing how our dataset\ncould be used as a benchmark for robustness, and how our methodology can be\ngeneralized to other domains. We open source our dataset and evaluation code at\nhttps://github.com/pralab/ImageNet-Patch.", "title": "ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches", "url": "http://arxiv.org/abs/2203.04412v1" }
null
null
new_dataset
admin
null
false
null
af3e2cb3-9ad5-49d5-968d-566a25b33bc8
null
Validated
2023-10-04 15:19:51.887859
{ "text_length": 1215 }
0new_dataset
TITLE: NICHE: A Curated Dataset of Engineered Machine Learning Projects in Python ABSTRACT: Machine learning (ML) has gained much attention and been incorporated into our daily lives. While there are numerous publicly available ML projects on open source platforms such as GitHub, there have been limited attempts in filtering those projects to curate ML projects of high quality. The limited availability of such a high-quality dataset poses an obstacle in understanding ML projects. To help clear this obstacle, we present NICHE, a manually labelled dataset consisting of 572 ML projects. Based on evidences of good software engineering practices, we label 441 of these projects as engineered and 131 as non-engineered. This dataset can help researchers understand the practices that are followed in high-quality ML projects. It can also be used as a benchmark for classifiers designed to identify engineered ML projects.
{ "abstract": "Machine learning (ML) has gained much attention and been incorporated into\nour daily lives. While there are numerous publicly available ML projects on\nopen source platforms such as GitHub, there have been limited attempts in\nfiltering those projects to curate ML projects of high quality. The limited\navailability of such a high-quality dataset poses an obstacle in understanding\nML projects. To help clear this obstacle, we present NICHE, a manually labelled\ndataset consisting of 572 ML projects. Based on evidences of good software\nengineering practices, we label 441 of these projects as engineered and 131 as\nnon-engineered. This dataset can help researchers understand the practices that\nare followed in high-quality ML projects. It can also be used as a benchmark\nfor classifiers designed to identify engineered ML projects.", "title": "NICHE: A Curated Dataset of Engineered Machine Learning Projects in Python", "url": "http://arxiv.org/abs/2303.06286v1" }
null
null
new_dataset
admin
null
false
null
215a9da7-f171-4e54-acc6-d06fbd46e8a2
null
Validated
2023-10-04 15:19:51.880486
{ "text_length": 940 }
0new_dataset
TITLE: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets ABSTRACT: Machine Reading Comprehension (MRC) is a challenging Natural Language Processing(NLP) research field with wide real-world applications. The great progress of this field in recent years is mainly due to the emergence of large-scale datasets and deep learning. At present, a lot of MRC models have already surpassed human performance on various benchmark datasets despite the obvious giant gap between existing MRC models and genuine human-level reading comprehension. This shows the need for improving existing datasets, evaluation metrics, and models to move current MRC models toward "real" understanding. To address the current lack of comprehensive survey of existing MRC tasks, evaluation metrics, and datasets, herein, (1) we analyze 57 MRC tasks and datasets and propose a more precise classification method of MRC tasks with 4 different attributes; (2) we summarized 9 evaluation metrics of MRC tasks, 7 attributes and 10 characteristics of MRC datasets; (3) We also discuss key open issues in MRC research and highlighted future research directions. In addition, we have collected, organized, and published our data on the companion website(https://mrc-datasets.github.io/) where MRC researchers could directly access each MRC dataset, papers, baseline projects, and the leaderboard.
{ "abstract": "Machine Reading Comprehension (MRC) is a challenging Natural Language\nProcessing(NLP) research field with wide real-world applications. The great\nprogress of this field in recent years is mainly due to the emergence of\nlarge-scale datasets and deep learning. At present, a lot of MRC models have\nalready surpassed human performance on various benchmark datasets despite the\nobvious giant gap between existing MRC models and genuine human-level reading\ncomprehension. This shows the need for improving existing datasets, evaluation\nmetrics, and models to move current MRC models toward \"real\" understanding. To\naddress the current lack of comprehensive survey of existing MRC tasks,\nevaluation metrics, and datasets, herein, (1) we analyze 57 MRC tasks and\ndatasets and propose a more precise classification method of MRC tasks with 4\ndifferent attributes; (2) we summarized 9 evaluation metrics of MRC tasks, 7\nattributes and 10 characteristics of MRC datasets; (3) We also discuss key open\nissues in MRC research and highlighted future research directions. In addition,\nwe have collected, organized, and published our data on the companion\nwebsite(https://mrc-datasets.github.io/) where MRC researchers could directly\naccess each MRC dataset, papers, baseline projects, and the leaderboard.", "title": "A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets", "url": "http://arxiv.org/abs/2006.11880v2" }
null
null
no_new_dataset
admin
null
false
null
a1fc9765-8994-494a-ab66-4830d3f76163
null
Validated
2023-10-04 15:19:51.899493
{ "text_length": 1417 }
1no_new_dataset
TITLE: Critical Evaluation of LOCO dataset with Machine Learning ABSTRACT: Purpose: Object detection is rapidly evolving through machine learning technology in automation systems. Well prepared data is necessary to train the algorithms. Accordingly, the objective of this paper is to describe a re-evaluation of the so-called Logistics Objects in Context (LOCO) dataset, which is the first dataset for object detection in the field of intralogistics. Methodology: We use an experimental research approach with three steps to evaluate the LOCO dataset. Firstly, the images on GitHub were analyzed to understand the dataset better. Secondly, Google Drive Cloud was used for training purposes to revisit the algorithmic implementation and training. Lastly, the LOCO dataset was examined, if it is possible to achieve the same training results in comparison to the original publications. Findings: The mean average precision, a common benchmark in object detection, achieved in our study was 64.54%, and shows a significant increase from the initial study of the LOCO authors, achieving 41%. However, improvement potential is seen specifically within object types of forklifts and pallet truck. Originality: This paper presents the first critical replication study of the LOCO dataset for object detection in intralogistics. It shows that the training with better hyperparameters based on LOCO can even achieve a higher accuracy than presented in the original publication. However, there is also further room for improving the LOCO dataset.
{ "abstract": "Purpose: Object detection is rapidly evolving through machine learning\ntechnology in automation systems. Well prepared data is necessary to train the\nalgorithms. Accordingly, the objective of this paper is to describe a\nre-evaluation of the so-called Logistics Objects in Context (LOCO) dataset,\nwhich is the first dataset for object detection in the field of intralogistics.\n Methodology: We use an experimental research approach with three steps to\nevaluate the LOCO dataset. Firstly, the images on GitHub were analyzed to\nunderstand the dataset better. Secondly, Google Drive Cloud was used for\ntraining purposes to revisit the algorithmic implementation and training.\nLastly, the LOCO dataset was examined, if it is possible to achieve the same\ntraining results in comparison to the original publications.\n Findings: The mean average precision, a common benchmark in object detection,\nachieved in our study was 64.54%, and shows a significant increase from the\ninitial study of the LOCO authors, achieving 41%. However, improvement\npotential is seen specifically within object types of forklifts and pallet\ntruck.\n Originality: This paper presents the first critical replication study of the\nLOCO dataset for object detection in intralogistics. It shows that the training\nwith better hyperparameters based on LOCO can even achieve a higher accuracy\nthan presented in the original publication. However, there is also further room\nfor improving the LOCO dataset.", "title": "Critical Evaluation of LOCO dataset with Machine Learning", "url": "http://arxiv.org/abs/2209.13499v1" }
null
null
no_new_dataset
admin
null
false
null
7892fa27-e42b-4d96-936e-9fd751f74bbe
null
Validated
2023-10-04 15:19:51.883784
{ "text_length": 1559 }
1no_new_dataset
TITLE: TEET! Tunisian Dataset for Toxic Speech Detection ABSTRACT: The complete freedom of expression in social media has its costs especially in spreading harmful and abusive content that may induce people to act accordingly. Therefore, the need of detecting automatically such a content becomes an urgent task that will help and enhance the efficiency in limiting this toxic spread. Compared to other Arabic dialects which are mostly based on MSA, the Tunisian dialect is a combination of many other languages like MSA, Tamazight, Italian and French. Because of its rich language, dealing with NLP problems can be challenging due to the lack of large annotated datasets. In this paper we are introducing a new annotated dataset composed of approximately 10k of comments. We provide an in-depth exploration of its vocabulary through feature engineering approaches as well as the results of the classification performance of machine learning classifiers like NB and SVM and deep learning models such as ARBERT, MARBERT and XLM-R.
{ "abstract": "The complete freedom of expression in social media has its costs especially\nin spreading harmful and abusive content that may induce people to act\naccordingly. Therefore, the need of detecting automatically such a content\nbecomes an urgent task that will help and enhance the efficiency in limiting\nthis toxic spread. Compared to other Arabic dialects which are mostly based on\nMSA, the Tunisian dialect is a combination of many other languages like MSA,\nTamazight, Italian and French. Because of its rich language, dealing with NLP\nproblems can be challenging due to the lack of large annotated datasets. In\nthis paper we are introducing a new annotated dataset composed of approximately\n10k of comments. We provide an in-depth exploration of its vocabulary through\nfeature engineering approaches as well as the results of the classification\nperformance of machine learning classifiers like NB and SVM and deep learning\nmodels such as ARBERT, MARBERT and XLM-R.", "title": "TEET! Tunisian Dataset for Toxic Speech Detection", "url": "http://arxiv.org/abs/2110.05287v1" }
null
null
new_dataset
admin
null
false
null
0b1d7b5d-449d-408c-8918-0196db4b58e4
null
Validated
2023-10-04 15:19:51.891436
{ "text_length": 1046 }
0new_dataset
TITLE: JEDI: Joint Expert Distillation in a Semi-Supervised Multi-Dataset Student-Teacher Scenario for Video Action Recognition ABSTRACT: We propose JEDI, a multi-dataset semi-supervised learning method, which efficiently combines knowledge from multiple experts, learned on different datasets, to train and improve the performance of individual, per dataset, student models. Our approach achieves this by addressing two important problems in current machine learning research: generalization across datasets and limitations of supervised training due to scarcity of labeled data. We start with an arbitrary number of experts, pretrained on their own specific dataset, which form the initial set of student models. The teachers are immediately derived by concatenating the feature representations from the penultimate layers of the students. We then train all models in a student-teacher semi-supervised learning scenario until convergence. In our efficient approach, student-teacher training is carried out jointly and end-to-end, showing that both students and teachers improve their generalization capacity during training. We validate our approach on four video action recognition datasets. By simultaneously considering all datasets within a unified semi-supervised setting, we demonstrate significant improvements over the initial experts.
{ "abstract": "We propose JEDI, a multi-dataset semi-supervised learning method, which\nefficiently combines knowledge from multiple experts, learned on different\ndatasets, to train and improve the performance of individual, per dataset,\nstudent models. Our approach achieves this by addressing two important problems\nin current machine learning research: generalization across datasets and\nlimitations of supervised training due to scarcity of labeled data. We start\nwith an arbitrary number of experts, pretrained on their own specific dataset,\nwhich form the initial set of student models. The teachers are immediately\nderived by concatenating the feature representations from the penultimate\nlayers of the students. We then train all models in a student-teacher\nsemi-supervised learning scenario until convergence. In our efficient approach,\nstudent-teacher training is carried out jointly and end-to-end, showing that\nboth students and teachers improve their generalization capacity during\ntraining. We validate our approach on four video action recognition datasets.\nBy simultaneously considering all datasets within a unified semi-supervised\nsetting, we demonstrate significant improvements over the initial experts.", "title": "JEDI: Joint Expert Distillation in a Semi-Supervised Multi-Dataset Student-Teacher Scenario for Video Action Recognition", "url": "http://arxiv.org/abs/2308.04934v1" }
null
null
no_new_dataset
admin
null
false
null
5c688c39-30d0-46bc-98de-a61d2a3beab4
null
Validated
2023-10-04 15:19:51.864252
{ "text_length": 1362 }
1no_new_dataset
TITLE: Evaluating resampling methods on a real-life highly imbalanced online credit card payments dataset ABSTRACT: Various problems of any credit card fraud detection based on machine learning come from the imbalanced aspect of transaction datasets. Indeed, the number of frauds compared to the number of regular transactions is tiny and has been shown to damage learning performances, e.g., at worst, the algorithm can learn to classify all the transactions as regular. Resampling methods and cost-sensitive approaches are known to be good candidates to leverage this issue of imbalanced datasets. This paper evaluates numerous state-of-the-art resampling methods on a large real-life online credit card payments dataset. We show they are inefficient because methods are intractable or because metrics do not exhibit substantial improvements. Our work contributes to this domain in (1) that we compare many state-of-the-art resampling methods on a large-scale dataset and in (2) that we use a real-life online credit card payments dataset.
{ "abstract": "Various problems of any credit card fraud detection based on machine learning\ncome from the imbalanced aspect of transaction datasets. Indeed, the number of\nfrauds compared to the number of regular transactions is tiny and has been\nshown to damage learning performances, e.g., at worst, the algorithm can learn\nto classify all the transactions as regular. Resampling methods and\ncost-sensitive approaches are known to be good candidates to leverage this\nissue of imbalanced datasets. This paper evaluates numerous state-of-the-art\nresampling methods on a large real-life online credit card payments dataset. We\nshow they are inefficient because methods are intractable or because metrics do\nnot exhibit substantial improvements. Our work contributes to this domain in\n(1) that we compare many state-of-the-art resampling methods on a large-scale\ndataset and in (2) that we use a real-life online credit card payments dataset.", "title": "Evaluating resampling methods on a real-life highly imbalanced online credit card payments dataset", "url": "http://arxiv.org/abs/2206.13152v1" }
null
null
no_new_dataset
admin
null
false
null
e32c31dc-e430-4cce-bda1-78b8ad39277f
null
Default
2023-10-04 15:19:51.885559
{ "text_length": 1058 }
1no_new_dataset
TITLE: Deepfake Detection Analyzing Hybrid Dataset Utilizing CNN and SVM ABSTRACT: Social media is currently being used by many individuals online as a major source of information. However, not all information shared online is true, even photos and videos can be doctored. Deepfakes have recently risen with the rise of technological advancement and have allowed nefarious online users to replace one face with a computer generated face of anyone they would like, including important political and cultural figures. Deepfakes are now a tool to be able to spread mass misinformation. There is now an immense need to create models that are able to detect deepfakes and keep them from being spread as seemingly real images or videos. In this paper, we propose a new deepfake detection schema using two popular machine learning algorithms.
{ "abstract": "Social media is currently being used by many individuals online as a major\nsource of information. However, not all information shared online is true, even\nphotos and videos can be doctored. Deepfakes have recently risen with the rise\nof technological advancement and have allowed nefarious online users to replace\none face with a computer generated face of anyone they would like, including\nimportant political and cultural figures. Deepfakes are now a tool to be able\nto spread mass misinformation. There is now an immense need to create models\nthat are able to detect deepfakes and keep them from being spread as seemingly\nreal images or videos. In this paper, we propose a new deepfake detection\nschema using two popular machine learning algorithms.", "title": "Deepfake Detection Analyzing Hybrid Dataset Utilizing CNN and SVM", "url": "http://arxiv.org/abs/2302.10280v1" }
null
null
no_new_dataset
admin
null
false
null
98502cd6-6e28-4963-bbae-9909d1cf8b4a
null
Validated
2023-10-04 15:19:51.881388
{ "text_length": 852 }
1no_new_dataset
TITLE: LogicInference: A New Dataset for Teaching Logical Inference to seq2seq Models ABSTRACT: Machine learning models such as Transformers or LSTMs struggle with tasks that are compositional in nature such as those involving reasoning/inference. Although many datasets exist to evaluate compositional generalization, when it comes to evaluating inference abilities, options are more limited. This paper presents LogicInference, a new dataset to evaluate the ability of models to perform logical inference. The dataset focuses on inference using propositional logic and a small subset of first-order logic, represented both in semi-formal logical notation, as well as in natural language. We also report initial results using a collection of machine learning models to establish an initial baseline in this dataset.
{ "abstract": "Machine learning models such as Transformers or LSTMs struggle with tasks\nthat are compositional in nature such as those involving reasoning/inference.\nAlthough many datasets exist to evaluate compositional generalization, when it\ncomes to evaluating inference abilities, options are more limited. This paper\npresents LogicInference, a new dataset to evaluate the ability of models to\nperform logical inference. The dataset focuses on inference using propositional\nlogic and a small subset of first-order logic, represented both in semi-formal\nlogical notation, as well as in natural language. We also report initial\nresults using a collection of machine learning models to establish an initial\nbaseline in this dataset.", "title": "LogicInference: A New Dataset for Teaching Logical Inference to seq2seq Models", "url": "http://arxiv.org/abs/2203.15099v3" }
null
null
new_dataset
admin
null
false
null
72758d2e-55af-43bb-ae45-cd9957b2f182
null
Validated
2023-10-04 15:19:51.887358
{ "text_length": 833 }
0new_dataset
TITLE: MN-DS: A Multilabeled News Dataset for News Articles Hierarchical Classification ABSTRACT: This article presents a dataset of 10,917 news articles with hierarchical news categories collected between 1 January 2019 and 31 December 2019. We manually labeled the articles based on a hierarchical taxonomy with 17 first-level and 109 second-level categories. This dataset can be used to train machine learning models for automatically classifying news articles by topic. This dataset can be helpful for researchers working on news structuring, classification, and predicting future events based on released news.
{ "abstract": "This article presents a dataset of 10,917 news articles with hierarchical\nnews categories collected between 1 January 2019 and 31 December 2019. We\nmanually labeled the articles based on a hierarchical taxonomy with 17\nfirst-level and 109 second-level categories. This dataset can be used to train\nmachine learning models for automatically classifying news articles by topic.\nThis dataset can be helpful for researchers working on news structuring,\nclassification, and predicting future events based on released news.", "title": "MN-DS: A Multilabeled News Dataset for News Articles Hierarchical Classification", "url": "http://arxiv.org/abs/2212.12061v3" }
null
null
new_dataset
admin
null
false
null
665b1a9e-4874-4e55-bf79-5a5f60551958
null
Validated
2023-10-04 15:19:51.881870
{ "text_length": 632 }
0new_dataset
TITLE: SC2EGSet: StarCraft II Esport Replay and Game-state Dataset ABSTRACT: As a relatively new form of sport, esports offers unparalleled data availability. Despite the vast amounts of data that are generated by game engines, it can be challenging to extract them and verify their integrity for the purposes of practical and scientific use. Our work aims to open esports to a broader scientific community by supplying raw and pre-processed files from StarCraft II esports tournaments. These files can be used in statistical and machine learning modeling tasks and related to various laboratory-based measurements (e.g., behavioral tests, brain imaging). We have gathered publicly available game-engine generated "replays" of tournament matches and performed data extraction and cleanup using a low-level application programming interface (API) parser library. Additionally, we open-sourced and published all the custom tools that were developed in the process of creating our dataset. These tools include PyTorch and PyTorch Lightning API abstractions to load and model the data. Our dataset contains replays from major and premiere StarCraft II tournaments since 2016. To prepare the dataset, we processed 55 tournament "replaypacks" that contained 17930 files with game-state information. Based on initial investigation of available StarCraft II datasets, we observed that our dataset is the largest publicly available source of StarCraft II esports data upon its publication. Analysis of the extracted data holds promise for further Artificial Intelligence (AI), Machine Learning (ML), psychological, Human-Computer Interaction (HCI), and sports-related studies in a variety of supervised and self-supervised tasks.
{ "abstract": "As a relatively new form of sport, esports offers unparalleled data\navailability. Despite the vast amounts of data that are generated by game\nengines, it can be challenging to extract them and verify their integrity for\nthe purposes of practical and scientific use.\n Our work aims to open esports to a broader scientific community by supplying\nraw and pre-processed files from StarCraft II esports tournaments. These files\ncan be used in statistical and machine learning modeling tasks and related to\nvarious laboratory-based measurements (e.g., behavioral tests, brain imaging).\nWe have gathered publicly available game-engine generated \"replays\" of\ntournament matches and performed data extraction and cleanup using a low-level\napplication programming interface (API) parser library.\n Additionally, we open-sourced and published all the custom tools that were\ndeveloped in the process of creating our dataset. These tools include PyTorch\nand PyTorch Lightning API abstractions to load and model the data.\n Our dataset contains replays from major and premiere StarCraft II tournaments\nsince 2016. To prepare the dataset, we processed 55 tournament \"replaypacks\"\nthat contained 17930 files with game-state information. Based on initial\ninvestigation of available StarCraft II datasets, we observed that our dataset\nis the largest publicly available source of StarCraft II esports data upon its\npublication.\n Analysis of the extracted data holds promise for further Artificial\nIntelligence (AI), Machine Learning (ML), psychological, Human-Computer\nInteraction (HCI), and sports-related studies in a variety of supervised and\nself-supervised tasks.", "title": "SC2EGSet: StarCraft II Esport Replay and Game-state Dataset", "url": "http://arxiv.org/abs/2207.03428v2" }
null
null
new_dataset
admin
null
false
null
d19a263f-324e-451a-8241-d4bbde0e9f3d
null
Validated
2023-10-04 15:19:51.885415
{ "text_length": 1745 }
0new_dataset
TITLE: COVIDx CT-3: A Large-scale, Multinational, Open-Source Benchmark Dataset for Computer-aided COVID-19 Screening from Chest CT Images ABSTRACT: Computed tomography (CT) has been widely explored as a COVID-19 screening and assessment tool to complement RT-PCR testing. To assist radiologists with CT-based COVID-19 screening, a number of computer-aided systems have been proposed. However, many proposed systems are built using CT data which is limited in both quantity and diversity. Motivated to support efforts in the development of machine learning-driven screening systems, we introduce COVIDx CT-3, a large-scale multinational benchmark dataset for detection of COVID-19 cases from chest CT images. COVIDx CT-3 includes 431,205 CT slices from 6,068 patients across at least 17 countries, which to the best of our knowledge represents the largest, most diverse dataset of COVID-19 CT images in open-access form. Additionally, we examine the data diversity and potential biases of the COVIDx CT-3 dataset, finding that significant geographic and class imbalances remain despite efforts to curate data from a wide variety of sources.
{ "abstract": "Computed tomography (CT) has been widely explored as a COVID-19 screening and\nassessment tool to complement RT-PCR testing. To assist radiologists with\nCT-based COVID-19 screening, a number of computer-aided systems have been\nproposed. However, many proposed systems are built using CT data which is\nlimited in both quantity and diversity. Motivated to support efforts in the\ndevelopment of machine learning-driven screening systems, we introduce COVIDx\nCT-3, a large-scale multinational benchmark dataset for detection of COVID-19\ncases from chest CT images. COVIDx CT-3 includes 431,205 CT slices from 6,068\npatients across at least 17 countries, which to the best of our knowledge\nrepresents the largest, most diverse dataset of COVID-19 CT images in\nopen-access form. Additionally, we examine the data diversity and potential\nbiases of the COVIDx CT-3 dataset, finding that significant geographic and\nclass imbalances remain despite efforts to curate data from a wide variety of\nsources.", "title": "COVIDx CT-3: A Large-scale, Multinational, Open-Source Benchmark Dataset for Computer-aided COVID-19 Screening from Chest CT Images", "url": "http://arxiv.org/abs/2206.03043v3" }
null
null
new_dataset
admin
null
false
null
14f90c95-ff42-402d-8bb3-ccd3ece8cc0d
null
Validated
2023-10-04 15:19:51.885966
{ "text_length": 1157 }
0new_dataset