| text | inputs | prediction | prediction_agent | annotation | annotation_agent | vectors | multi_label | explanation | id | metadata | status | event_timestamp | metrics | label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| stringlengths 493-2.1k | dict | null | null | stringclasses (2 values) | stringclasses (1 value) | null | bool (1 class) | null | stringlengths 36 | null | stringclasses (2 values) | stringlengths 26 | dict | class label (2 classes) |
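The table above is the column schema; the records follow below. As a minimal sketch, assuming the dump is hosted on the Hugging Face Hub, rows with this schema could be loaded and inspected with the `datasets` library. The hub identifier here is a placeholder, not a confirmed ID for this dataset.

```python
# Minimal sketch: loading and inspecting this annotation dump with the
# Hugging Face `datasets` library. The hub ID below is a placeholder
# (assumption); substitute the real identifier for this dataset.
from datasets import load_dataset

ds = load_dataset("your-org/arxiv-new-dataset-annotations", split="train")  # hypothetical ID

label_names = ds.features["label"].names  # e.g., ["new_dataset", "no_new_dataset"]
for row in ds.select(range(3)):
    print(row["inputs"]["title"])            # paper title from the `inputs` dict
    print(row["annotation"], row["status"])  # human annotation and validation status
    print(label_names[row["label"]])         # decode the integer class label
```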
TITLE: MIMII DG: Sound Dataset for Malfunctioning Industrial Machine Investigation and Inspection for Domain Generalization Task
ABSTRACT: We present a machine sound dataset to benchmark domain generalization
techniques for anomalous sound detection (ASD). Domain shifts are differences
in data distributions that can degrade the detection performance, and handling
them is a major issue for the application of ASD systems. While currently
available datasets for ASD tasks assume that occurrences of domain shifts are
known, in practice, they can be difficult to detect. To handle such domain
shifts, domain generalization techniques that perform well regardless of the
domains should be investigated. In this paper, we present the first ASD dataset
for the domain generalization techniques, called MIMII DG. The dataset consists
of five machine types and three domain shift scenarios for each machine type.
The dataset is dedicated to the domain generalization task with features such
as multiple different values for parameters that cause domain shifts and
introduction of domain shifts that can be difficult to detect, such as shifts
in the background noise. Experimental results using two baseline systems
indicate that the dataset reproduces domain shift scenarios and is useful for
benchmarking domain generalization techniques. | {
"abstract": "We present a machine sound dataset to benchmark domain generalization\ntechniques for anomalous sound detection (ASD). Domain shifts are differences\nin data distributions that can degrade the detection performance, and handling\nthem is a major issue for the application of ASD systems. While currently\navailable datasets for ASD tasks assume that occurrences of domain shifts are\nknown, in practice, they can be difficult to detect. To handle such domain\nshifts, domain generalization techniques that perform well regardless of the\ndomains should be investigated. In this paper, we present the first ASD dataset\nfor the domain generalization techniques, called MIMII DG. The dataset consists\nof five machine types and three domain shift scenarios for each machine type.\nThe dataset is dedicated to the domain generalization task with features such\nas multiple different values for parameters that cause domain shifts and\nintroduction of domain shifts that can be difficult to detect, such as shifts\nin the background noise. Experimental results using two baseline systems\nindicate that the dataset reproduces domain shift scenarios and is useful for\nbenchmarking domain generalization techniques.",
"title": "MIMII DG: Sound Dataset for Malfunctioning Industrial Machine Investigation and Inspection for Domain Generalization Task",
"url": "http://arxiv.org/abs/2205.13879v2"
} | null | null | new_dataset | admin | null | false | null | df147356-9cb2-4184-836a-da7f9eb9ffee | null | Validated | 2023-10-04 15:19:51.886183 | {
"text_length": 1351
} | 0 (new_dataset)
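For reference, a sketch of a single record as a typed structure, with field meanings inferred from the schema and the row above; names and comments reflect my reading of this dump, not an official class definition.

```python
# Sketch of one record's structure, inferred from the schema and the first
# row above; this mirrors the dump, it is not an official definition.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Record:
    text: str                        # "TITLE: ... ABSTRACT: ..." rendering (493-2.1k chars)
    inputs: dict                     # {"abstract": ..., "title": ..., "url": ...}
    prediction: Optional[str]        # null throughout this dump
    prediction_agent: Optional[str]  # null throughout this dump
    annotation: str                  # "new_dataset" or "no_new_dataset"
    annotation_agent: str            # always "admin" in the rows shown
    vectors: Optional[dict]          # null throughout this dump
    multi_label: bool                # false for every row shown
    explanation: Optional[str]       # null throughout this dump
    id: str                          # 36-character UUID
    metadata: Optional[dict]         # null throughout this dump
    status: str                      # e.g., "Validated"
    event_timestamp: str             # 26-character timestamp
    metrics: dict                    # {"text_length": int}
    label: int                       # 0 = new_dataset, 1 = no_new_dataset
```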
|
TITLE: FGraDA: A Dataset and Benchmark for Fine-Grained Domain Adaptation in Machine Translation
ABSTRACT: Previous research for adapting a general neural machine translation (NMT)
model into a specific domain usually neglects the diversity in translation
within the same domain, which is a core problem for domain adaptation in
real-world scenarios. One representative of such challenging scenarios is to
deploy a translation system for a conference with a specific topic, e.g.,
global warming or coronavirus, where there are usually extremely less resources
due to the limited schedule. To motivate wider investigation in such a
scenario, we present a real-world fine-grained domain adaptation task in
machine translation (FGraDA). The FGraDA dataset consists of Chinese-English
translation task for four sub-domains of information technology: autonomous
vehicles, AI education, real-time networks, and smart phone. Each sub-domain is
equipped with a development set and test set for evaluation purposes. To be
closer to reality, FGraDA does not employ any in-domain bilingual training data
but provides bilingual dictionaries and wiki knowledge base, which can be
easier obtained within a short time. We benchmark the fine-grained domain
adaptation task and present in-depth analyses showing that there are still
challenging problems to further improve the performance with heterogeneous
resources. | {
"abstract": "Previous research for adapting a general neural machine translation (NMT)\nmodel into a specific domain usually neglects the diversity in translation\nwithin the same domain, which is a core problem for domain adaptation in\nreal-world scenarios. One representative of such challenging scenarios is to\ndeploy a translation system for a conference with a specific topic, e.g.,\nglobal warming or coronavirus, where there are usually extremely less resources\ndue to the limited schedule. To motivate wider investigation in such a\nscenario, we present a real-world fine-grained domain adaptation task in\nmachine translation (FGraDA). The FGraDA dataset consists of Chinese-English\ntranslation task for four sub-domains of information technology: autonomous\nvehicles, AI education, real-time networks, and smart phone. Each sub-domain is\nequipped with a development set and test set for evaluation purposes. To be\ncloser to reality, FGraDA does not employ any in-domain bilingual training data\nbut provides bilingual dictionaries and wiki knowledge base, which can be\neasier obtained within a short time. We benchmark the fine-grained domain\nadaptation task and present in-depth analyses showing that there are still\nchallenging problems to further improve the performance with heterogeneous\nresources.",
"title": "FGraDA: A Dataset and Benchmark for Fine-Grained Domain Adaptation in Machine Translation",
"url": "http://arxiv.org/abs/2012.15717v2"
} | null | null | new_dataset | admin | null | false | null | 093eec3f-f3c0-4a6e-90a1-5da75de79b0e | null | Validated | 2023-10-04 15:19:51.896427 | {
"text_length": 1418
} | 0 (new_dataset)
|
TITLE: Commander's Intent: A Dataset and Modeling Approach for Human-AI Task Specification in Strategic Play
ABSTRACT: Effective Human-AI teaming requires the ability to communicate the goals of
the team and constraints under which you need the agent to operate. Providing
the ability to specify the shared intent or operation criteria of the team can
enable an AI agent to perform its primary function while still being able to
cater to the specific desires of the current team. While significant work has
been conducted to instruct an agent to perform a task, via language or
demonstrations, prior work lacks a focus on building agents which can operate
within the parameters specified by a team. Worse yet, there is a dearth of
research pertaining to enabling humans to provide their specifications through
unstructured, naturalist language. In this paper, we propose the use of goals
and constraints as a scaffold to modulate and evaluate autonomous agents. We
contribute to this field by presenting a novel dataset, and an associated data
collection protocol, which maps language descriptions to goals and constraints
corresponding to specific strategies developed by human participants for the
board game Risk. Leveraging state-of-the-art language models and augmentation
procedures, we develop a machine learning framework which can be used to
identify goals and constraints from unstructured strategy descriptions. To
empirically validate our approach we conduct a human-subjects study to
establish a human-baseline for our dataset. Our results show that our machine
learning architecture is better able to interpret unstructured language
descriptions into strategy specifications than human raters tasked with
performing the same machine translation task (F(1,272.53) = 17.025, p < 0.001). | {
"abstract": "Effective Human-AI teaming requires the ability to communicate the goals of\nthe team and constraints under which you need the agent to operate. Providing\nthe ability to specify the shared intent or operation criteria of the team can\nenable an AI agent to perform its primary function while still being able to\ncater to the specific desires of the current team. While significant work has\nbeen conducted to instruct an agent to perform a task, via language or\ndemonstrations, prior work lacks a focus on building agents which can operate\nwithin the parameters specified by a team. Worse yet, there is a dearth of\nresearch pertaining to enabling humans to provide their specifications through\nunstructured, naturalist language. In this paper, we propose the use of goals\nand constraints as a scaffold to modulate and evaluate autonomous agents. We\ncontribute to this field by presenting a novel dataset, and an associated data\ncollection protocol, which maps language descriptions to goals and constraints\ncorresponding to specific strategies developed by human participants for the\nboard game Risk. Leveraging state-of-the-art language models and augmentation\nprocedures, we develop a machine learning framework which can be used to\nidentify goals and constraints from unstructured strategy descriptions. To\nempirically validate our approach we conduct a human-subjects study to\nestablish a human-baseline for our dataset. Our results show that our machine\nlearning architecture is better able to interpret unstructured language\ndescriptions into strategy specifications than human raters tasked with\nperforming the same machine translation task (F(1,272.53) = 17.025, p < 0.001).",
"title": "Commander's Intent: A Dataset and Modeling Approach for Human-AI Task Specification in Strategic Play",
"url": "http://arxiv.org/abs/2208.08374v1"
} | null | null | new_dataset | admin | null | false | null | f59375fa-83ae-406a-b541-3df9881bafce | null | Validated | 2023-10-04 15:19:51.884721 | {
"text_length": 1815
} | 0 (new_dataset)
|
TITLE: Massive MIMO Channel Prediction Via Meta-Learning and Deep Denoising: Is a Small Dataset Enough?
ABSTRACT: Accurate channel knowledge is critical in massive multiple-input
multiple-output (MIMO), which motivates the use of channel prediction. Machine
learning techniques for channel prediction hold much promise, but current
schemes are limited in their ability to adapt to changes in the environment
because they require large training overheads. To accurately predict wireless
channels for new environments with reduced training overhead, we propose a fast
adaptive channel prediction technique based on a meta-learning algorithm for
massive MIMO communications. We exploit the model-agnostic meta-learning (MAML)
algorithm to achieve quick adaptation with a small amount of labeled data.
Also, to improve the prediction accuracy, we adopt the denoising process for
the training data by using deep image prior (DIP). Numerical results show that
the proposed MAML-based channel predictor can improve the prediction accuracy
with only a few fine-tuning samples. The DIP-based denoising process gives an
additional gain in channel prediction, especially in low signal-to-noise ratio
regimes. | {
"abstract": "Accurate channel knowledge is critical in massive multiple-input\nmultiple-output (MIMO), which motivates the use of channel prediction. Machine\nlearning techniques for channel prediction hold much promise, but current\nschemes are limited in their ability to adapt to changes in the environment\nbecause they require large training overheads. To accurately predict wireless\nchannels for new environments with reduced training overhead, we propose a fast\nadaptive channel prediction technique based on a meta-learning algorithm for\nmassive MIMO communications. We exploit the model-agnostic meta-learning (MAML)\nalgorithm to achieve quick adaptation with a small amount of labeled data.\nAlso, to improve the prediction accuracy, we adopt the denoising process for\nthe training data by using deep image prior (DIP). Numerical results show that\nthe proposed MAML-based channel predictor can improve the prediction accuracy\nwith only a few fine-tuning samples. The DIP-based denoising process gives an\nadditional gain in channel prediction, especially in low signal-to-noise ratio\nregimes.",
"title": "Massive MIMO Channel Prediction Via Meta-Learning and Deep Denoising: Is a Small Dataset Enough?",
"url": "http://arxiv.org/abs/2210.08770v1"
} | null | null | no_new_dataset | admin | null | false | null | 60b9cb2d-7bc6-46ad-bc59-6a260bc9a069 | null | Validated | 2023-10-04 15:19:51.883475 | {
"text_length": 1214
} | 1 (no_new_dataset)
|
TITLE: The first large scale collection of diverse Hausa language datasets
ABSTRACT: Hausa language belongs to the Afroasiatic phylum, and with more
first-language speakers than any other sub-Saharan African language. With a
majority of its speakers residing in the Northern and Southern areas of Nigeria
and the Republic of Niger, respectively, it is estimated that over 100 million
people speak the language. Hence, making it one of the most spoken Chadic
language. While Hausa is considered well-studied and documented language among
the sub-Saharan African languages, it is viewed as a low resource language from
the perspective of natural language processing (NLP) due to limited resources
to utilise in NLP-related tasks. This is common to most languages in Africa;
thus, it is crucial to enrich such languages with resources that will support
and speed the pace of conducting various downstream tasks to meet the demand of
the modern society. While there exist useful datasets, notably from news sites
and religious texts, more diversity is needed in the corpus.
We provide an expansive collection of curated datasets consisting of both
formal and informal forms of the language from refutable websites and online
social media networks, respectively. The collection is large and more diverse
than the existing corpora by providing the first and largest set of Hausa
social media data posts to capture the peculiarities in the language. The
collection also consists of a parallel dataset, which can be used for tasks
such as machine translation with applications in areas such as the detection of
spurious or inciteful online content. We describe the curation process -- from
the collection, preprocessing and how to obtain the data -- and proffer some
research problems that could be addressed using the data. | {
"abstract": "Hausa language belongs to the Afroasiatic phylum, and with more\nfirst-language speakers than any other sub-Saharan African language. With a\nmajority of its speakers residing in the Northern and Southern areas of Nigeria\nand the Republic of Niger, respectively, it is estimated that over 100 million\npeople speak the language. Hence, making it one of the most spoken Chadic\nlanguage. While Hausa is considered well-studied and documented language among\nthe sub-Saharan African languages, it is viewed as a low resource language from\nthe perspective of natural language processing (NLP) due to limited resources\nto utilise in NLP-related tasks. This is common to most languages in Africa;\nthus, it is crucial to enrich such languages with resources that will support\nand speed the pace of conducting various downstream tasks to meet the demand of\nthe modern society. While there exist useful datasets, notably from news sites\nand religious texts, more diversity is needed in the corpus.\n We provide an expansive collection of curated datasets consisting of both\nformal and informal forms of the language from refutable websites and online\nsocial media networks, respectively. The collection is large and more diverse\nthan the existing corpora by providing the first and largest set of Hausa\nsocial media data posts to capture the peculiarities in the language. The\ncollection also consists of a parallel dataset, which can be used for tasks\nsuch as machine translation with applications in areas such as the detection of\nspurious or inciteful online content. We describe the curation process -- from\nthe collection, preprocessing and how to obtain the data -- and proffer some\nresearch problems that could be addressed using the data.",
"title": "The first large scale collection of diverse Hausa language datasets",
"url": "http://arxiv.org/abs/2102.06991v2"
} | null | null | new_dataset | admin | null | false | null | 67e5a5bc-1dd6-4a04-a495-5edcad1124a9 | null | Validated | 2023-10-04 15:19:51.895828 | {
"text_length": 1835
} | 0 (new_dataset)
|
TITLE: AI4D -- African Language Dataset Challenge
ABSTRACT: As language and speech technologies become more advanced, the lack of
fundamental digital resources for African languages, such as data, spell
checkers and Part of Speech taggers, means that the digital divide between
these languages and others keeps growing. This work details the organisation of
the AI4D - African Language Dataset Challenge, an effort to incentivize the
creation, organization and discovery of African language datasets through a
competitive challenge. We particularly encouraged the submission of annotated
datasets which can be used for training task-specific supervised machine
learning models. | {
"abstract": "As language and speech technologies become more advanced, the lack of\nfundamental digital resources for African languages, such as data, spell\ncheckers and Part of Speech taggers, means that the digital divide between\nthese languages and others keeps growing. This work details the organisation of\nthe AI4D - African Language Dataset Challenge, an effort to incentivize the\ncreation, organization and discovery of African language datasets through a\ncompetitive challenge. We particularly encouraged the submission of annotated\ndatasets which can be used for training task-specific supervised machine\nlearning models.",
"title": "AI4D -- African Language Dataset Challenge",
"url": "http://arxiv.org/abs/2007.11865v1"
} | null | null | no_new_dataset | admin | null | false | null | 67f9a208-7b03-4f42-8fe2-9b9b2e41fc39 | null | Validated | 2023-10-04 15:19:51.899069 | {
"text_length": 694
} | 1 (no_new_dataset)
|
TITLE: Dataset Inference for Self-Supervised Models
ABSTRACT: Self-supervised models are increasingly prevalent in machine learning (ML)
since they reduce the need for expensively labeled data. Because of their
versatility in downstream applications, they are increasingly used as a service
exposed via public APIs. At the same time, these encoder models are
particularly vulnerable to model stealing attacks due to the high
dimensionality of vector representations they output. Yet, encoders remain
undefended: existing mitigation strategies for stealing attacks focus on
supervised learning. We introduce a new dataset inference defense, which uses
the private training set of the victim encoder model to attribute its ownership
in the event of stealing. The intuition is that the log-likelihood of an
encoder's output representations is higher on the victim's training data than
on test data if it is stolen from the victim, but not if it is independently
trained. We compute this log-likelihood using density estimation models. As
part of our evaluation, we also propose measuring the fidelity of stolen
encoders and quantifying the effectiveness of the theft detection without
involving downstream tasks; instead, we leverage mutual information and
distance measurements. Our extensive empirical results in the vision domain
demonstrate that dataset inference is a promising direction for defending
self-supervised models against model stealing. | {
"abstract": "Self-supervised models are increasingly prevalent in machine learning (ML)\nsince they reduce the need for expensively labeled data. Because of their\nversatility in downstream applications, they are increasingly used as a service\nexposed via public APIs. At the same time, these encoder models are\nparticularly vulnerable to model stealing attacks due to the high\ndimensionality of vector representations they output. Yet, encoders remain\nundefended: existing mitigation strategies for stealing attacks focus on\nsupervised learning. We introduce a new dataset inference defense, which uses\nthe private training set of the victim encoder model to attribute its ownership\nin the event of stealing. The intuition is that the log-likelihood of an\nencoder's output representations is higher on the victim's training data than\non test data if it is stolen from the victim, but not if it is independently\ntrained. We compute this log-likelihood using density estimation models. As\npart of our evaluation, we also propose measuring the fidelity of stolen\nencoders and quantifying the effectiveness of the theft detection without\ninvolving downstream tasks; instead, we leverage mutual information and\ndistance measurements. Our extensive empirical results in the vision domain\ndemonstrate that dataset inference is a promising direction for defending\nself-supervised models against model stealing.",
"title": "Dataset Inference for Self-Supervised Models",
"url": "http://arxiv.org/abs/2209.09024v3"
} | null | null | new_dataset | admin | null | false | null | a25da927-354a-4a13-abe1-5a0df043c8e7 | null | Validated | 2023-10-04 15:19:51.883979 | {
"text_length": 1467
} | 0 (new_dataset)
|
TITLE: Mitigating Dataset Harms Requires Stewardship: Lessons from 1000 Papers
ABSTRACT: Machine learning datasets have elicited concerns about privacy, bias, and
unethical applications, leading to the retraction of prominent datasets such as
DukeMTMC, MS-Celeb-1M, and Tiny Images. In response, the machine learning
community has called for higher ethical standards in dataset creation. To help
inform these efforts, we studied three influential but ethically problematic
face and person recognition datasets -- Labeled Faces in the Wild (LFW),
MS-Celeb-1M, and DukeMTM -- by analyzing nearly 1000 papers that cite them. We
found that the creation of derivative datasets and models, broader
technological and social change, the lack of clarity of licenses, and dataset
management practices can introduce a wide range of ethical concerns. We
conclude by suggesting a distributed approach to harm mitigation that considers
the entire life cycle of a dataset. | {
"abstract": "Machine learning datasets have elicited concerns about privacy, bias, and\nunethical applications, leading to the retraction of prominent datasets such as\nDukeMTMC, MS-Celeb-1M, and Tiny Images. In response, the machine learning\ncommunity has called for higher ethical standards in dataset creation. To help\ninform these efforts, we studied three influential but ethically problematic\nface and person recognition datasets -- Labeled Faces in the Wild (LFW),\nMS-Celeb-1M, and DukeMTM -- by analyzing nearly 1000 papers that cite them. We\nfound that the creation of derivative datasets and models, broader\ntechnological and social change, the lack of clarity of licenses, and dataset\nmanagement practices can introduce a wide range of ethical concerns. We\nconclude by suggesting a distributed approach to harm mitigation that considers\nthe entire life cycle of a dataset.",
"title": "Mitigating Dataset Harms Requires Stewardship: Lessons from 1000 Papers",
"url": "http://arxiv.org/abs/2108.02922v2"
} | null | null | no_new_dataset | admin | null | false | null | 3f3fa599-cb03-4fc3-8f64-dd0bee62afc1 | null | Validated | 2023-10-04 15:19:51.893075 | {
"text_length": 974
} | 1 (no_new_dataset)
|
TITLE: An annotated instance segmentation XXL-CT dataset from a historic airplane
ABSTRACT: The Me 163 was a Second World War fighter airplane and a result of the German
air force secret developments. One of these airplanes is currently owned and
displayed in the historic aircraft exhibition of the Deutsches Museum in
Munich, Germany. To gain insights with respect to its history, design and state
of preservation, a complete CT scan was obtained using an industrial
XXL-computer tomography scanner.
Using the CT data from the Me 163, all its details can visually be examined
at various levels, ranging from the complete hull down to single sprockets and
rivets. However, while a trained human observer can identify and interpret the
volumetric data with all its parts and connections, a virtual dissection of the
airplane and all its different parts would be quite desirable. Nevertheless,
this means, that an instance segmentation of all components and objects of
interest into disjoint entities from the CT data is necessary.
As of currently, no adequate computer-assisted tools for automated or
semi-automated segmentation of such XXL-airplane data are available, in a first
step, an interactive data annotation and object labeling process has been
established. So far, seven 512 x 512 x 512 voxel sub-volumes from the Me 163
airplane have been annotated and labeled, whose results can potentially be used
for various new applications in the field of digital heritage, non-destructive
testing, or machine-learning.
This work describes the data acquisition process of the airplane using an
industrial XXL-CT scanner, outlines the interactive segmentation and labeling
scheme to annotate sub-volumes of the airplane's CT data, describes and
discusses various challenges with respect to interpreting and handling the
annotated and labeled data. | {
"abstract": "The Me 163 was a Second World War fighter airplane and a result of the German\nair force secret developments. One of these airplanes is currently owned and\ndisplayed in the historic aircraft exhibition of the Deutsches Museum in\nMunich, Germany. To gain insights with respect to its history, design and state\nof preservation, a complete CT scan was obtained using an industrial\nXXL-computer tomography scanner.\n Using the CT data from the Me 163, all its details can visually be examined\nat various levels, ranging from the complete hull down to single sprockets and\nrivets. However, while a trained human observer can identify and interpret the\nvolumetric data with all its parts and connections, a virtual dissection of the\nairplane and all its different parts would be quite desirable. Nevertheless,\nthis means, that an instance segmentation of all components and objects of\ninterest into disjoint entities from the CT data is necessary.\n As of currently, no adequate computer-assisted tools for automated or\nsemi-automated segmentation of such XXL-airplane data are available, in a first\nstep, an interactive data annotation and object labeling process has been\nestablished. So far, seven 512 x 512 x 512 voxel sub-volumes from the Me 163\nairplane have been annotated and labeled, whose results can potentially be used\nfor various new applications in the field of digital heritage, non-destructive\ntesting, or machine-learning.\n This work describes the data acquisition process of the airplane using an\nindustrial XXL-CT scanner, outlines the interactive segmentation and labeling\nscheme to annotate sub-volumes of the airplane's CT data, describes and\ndiscusses various challenges with respect to interpreting and handling the\nannotated and labeled data.",
"title": "An annotated instance segmentation XXL-CT dataset from a historic airplane",
"url": "http://arxiv.org/abs/2212.08639v1"
} | null | null | new_dataset | admin | null | false | null | 226c7620-2e37-4e2b-8168-ee779d5451c9 | null | Validated | 2023-10-04 15:19:51.882096 | {
"text_length": 1870
} | 0 (new_dataset)
|
TITLE: Healthsheet: Development of a Transparency Artifact for Health Datasets
ABSTRACT: Machine learning (ML) approaches have demonstrated promising results in a
wide range of healthcare applications. Data plays a crucial role in developing
ML-based healthcare systems that directly affect people's lives. Many of the
ethical issues surrounding the use of ML in healthcare stem from structural
inequalities underlying the way we collect, use, and handle data. Developing
guidelines to improve documentation practices regarding the creation, use, and
maintenance of ML healthcare datasets is therefore of critical importance. In
this work, we introduce Healthsheet, a contextualized adaptation of the
original datasheet questionnaire ~\cite{gebru2018datasheets} for
health-specific applications. Through a series of semi-structured interviews,
we adapt the datasheets for healthcare data documentation. As part of the
Healthsheet development process and to understand the obstacles researchers
face in creating datasheets, we worked with three publicly-available healthcare
datasets as our case studies, each with different types of structured data:
Electronic health Records (EHR), clinical trial study data, and
smartphone-based performance outcome measures. Our findings from the
interviewee study and case studies show 1) that datasheets should be
contextualized for healthcare, 2) that despite incentives to adopt
accountability practices such as datasheets, there is a lack of consistency in
the broader use of these practices 3) how the ML for health community views
datasheets and particularly \textit{Healthsheets} as diagnostic tool to surface
the limitations and strength of datasets and 4) the relative importance of
different fields in the datasheet to healthcare concerns. | {
"abstract": "Machine learning (ML) approaches have demonstrated promising results in a\nwide range of healthcare applications. Data plays a crucial role in developing\nML-based healthcare systems that directly affect people's lives. Many of the\nethical issues surrounding the use of ML in healthcare stem from structural\ninequalities underlying the way we collect, use, and handle data. Developing\nguidelines to improve documentation practices regarding the creation, use, and\nmaintenance of ML healthcare datasets is therefore of critical importance. In\nthis work, we introduce Healthsheet, a contextualized adaptation of the\noriginal datasheet questionnaire ~\\cite{gebru2018datasheets} for\nhealth-specific applications. Through a series of semi-structured interviews,\nwe adapt the datasheets for healthcare data documentation. As part of the\nHealthsheet development process and to understand the obstacles researchers\nface in creating datasheets, we worked with three publicly-available healthcare\ndatasets as our case studies, each with different types of structured data:\nElectronic health Records (EHR), clinical trial study data, and\nsmartphone-based performance outcome measures. Our findings from the\ninterviewee study and case studies show 1) that datasheets should be\ncontextualized for healthcare, 2) that despite incentives to adopt\naccountability practices such as datasheets, there is a lack of consistency in\nthe broader use of these practices 3) how the ML for health community views\ndatasheets and particularly \\textit{Healthsheets} as diagnostic tool to surface\nthe limitations and strength of datasets and 4) the relative importance of\ndifferent fields in the datasheet to healthcare concerns.",
"title": "Healthsheet: Development of a Transparency Artifact for Health Datasets",
"url": "http://arxiv.org/abs/2202.13028v1"
} | null | null | no_new_dataset | admin | null | false | null | 11460e75-a3e1-47e4-b475-05f92c8cf564 | null | Validated | 2023-10-04 15:19:51.888075 | {
"text_length": 1803
} | 1 (no_new_dataset)
|
TITLE: The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants
ABSTRACT: We present Belebele, a multiple-choice machine reading comprehension (MRC)
dataset spanning 122 language variants. Significantly expanding the language
coverage of natural language understanding (NLU) benchmarks, this dataset
enables the evaluation of text models in high-, medium-, and low-resource
languages. Each question is based on a short passage from the Flores-200
dataset and has four multiple-choice answers. The questions were carefully
curated to discriminate between models with different levels of general
language comprehension. The English dataset on its own proves difficult enough
to challenge state-of-the-art language models. Being fully parallel, this
dataset enables direct comparison of model performance across all languages. We
use this dataset to evaluate the capabilities of multilingual masked language
models (MLMs) and large language models (LLMs). We present extensive results
and find that despite significant cross-lingual transfer in English-centric
LLMs, much smaller MLMs pretrained on balanced multilingual data still
understand far more languages. We also observe that larger vocabulary size and
conscious vocabulary construction correlate with better performance on
low-resource languages. Overall, Belebele opens up new avenues for evaluating
and analyzing the multilingual capabilities of NLP systems. | {
"abstract": "We present Belebele, a multiple-choice machine reading comprehension (MRC)\ndataset spanning 122 language variants. Significantly expanding the language\ncoverage of natural language understanding (NLU) benchmarks, this dataset\nenables the evaluation of text models in high-, medium-, and low-resource\nlanguages. Each question is based on a short passage from the Flores-200\ndataset and has four multiple-choice answers. The questions were carefully\ncurated to discriminate between models with different levels of general\nlanguage comprehension. The English dataset on its own proves difficult enough\nto challenge state-of-the-art language models. Being fully parallel, this\ndataset enables direct comparison of model performance across all languages. We\nuse this dataset to evaluate the capabilities of multilingual masked language\nmodels (MLMs) and large language models (LLMs). We present extensive results\nand find that despite significant cross-lingual transfer in English-centric\nLLMs, much smaller MLMs pretrained on balanced multilingual data still\nunderstand far more languages. We also observe that larger vocabulary size and\nconscious vocabulary construction correlate with better performance on\nlow-resource languages. Overall, Belebele opens up new avenues for evaluating\nand analyzing the multilingual capabilities of NLP systems.",
"title": "The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants",
"url": "http://arxiv.org/abs/2308.16884v1"
} | null | null | new_dataset | admin | null | false | null | 17559566-dcdb-4a3a-98ff-470dd30b9926 | null | Validated | 2023-10-04 15:19:51.863888 | {
"text_length": 1466
} | 0 (new_dataset)
|
TITLE: PhoMT: A High-Quality and Large-Scale Benchmark Dataset for Vietnamese-English Machine Translation
ABSTRACT: We introduce a high-quality and large-scale Vietnamese-English parallel
dataset of 3.02M sentence pairs, which is 2.9M pairs larger than the benchmark
Vietnamese-English machine translation corpus IWSLT15. We conduct experiments
comparing strong neural baselines and well-known automatic translation engines
on our dataset and find that in both automatic and human evaluations: the best
performance is obtained by fine-tuning the pre-trained sequence-to-sequence
denoising auto-encoder mBART. To our best knowledge, this is the first
large-scale Vietnamese-English machine translation study. We hope our publicly
available dataset and study can serve as a starting point for future research
and applications on Vietnamese-English machine translation. | {
"abstract": "We introduce a high-quality and large-scale Vietnamese-English parallel\ndataset of 3.02M sentence pairs, which is 2.9M pairs larger than the benchmark\nVietnamese-English machine translation corpus IWSLT15. We conduct experiments\ncomparing strong neural baselines and well-known automatic translation engines\non our dataset and find that in both automatic and human evaluations: the best\nperformance is obtained by fine-tuning the pre-trained sequence-to-sequence\ndenoising auto-encoder mBART. To our best knowledge, this is the first\nlarge-scale Vietnamese-English machine translation study. We hope our publicly\navailable dataset and study can serve as a starting point for future research\nand applications on Vietnamese-English machine translation.",
"title": "PhoMT: A High-Quality and Large-Scale Benchmark Dataset for Vietnamese-English Machine Translation",
"url": "http://arxiv.org/abs/2110.12199v1"
} | null | null | new_dataset | admin | null | false | null | 085dc921-4f42-481f-b8b9-cfa30fecde31 | null | Validated | 2023-10-04 15:19:51.890247 | {
"text_length": 883
} | 0 (new_dataset)
|
TITLE: Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation
ABSTRACT: Recent works have found evidence of gender bias in models of machine
translation and coreference resolution using mostly synthetic diagnostic
datasets. While these quantify bias in a controlled experiment, they often do
so on a small scale and consist mostly of artificial, out-of-distribution
sentences. In this work, we find grammatical patterns indicating stereotypical
and non-stereotypical gender-role assignments (e.g., female nurses versus male
dancers) in corpora from three domains, resulting in a first large-scale gender
bias dataset of 108K diverse real-world English sentences. We manually verify
the quality of our corpus and use it to evaluate gender bias in various
coreference resolution and machine translation models. We find that all tested
models tend to over-rely on gender stereotypes when presented with natural
inputs, which may be especially harmful when deployed in commercial systems.
Finally, we show that our dataset lends itself to finetuning a coreference
resolution model, finding it mitigates bias on a held out set. Our dataset and
models are publicly available at www.github.com/SLAB-NLP/BUG. We hope they will
spur future research into gender bias evaluation mitigation techniques in
realistic settings. | {
"abstract": "Recent works have found evidence of gender bias in models of machine\ntranslation and coreference resolution using mostly synthetic diagnostic\ndatasets. While these quantify bias in a controlled experiment, they often do\nso on a small scale and consist mostly of artificial, out-of-distribution\nsentences. In this work, we find grammatical patterns indicating stereotypical\nand non-stereotypical gender-role assignments (e.g., female nurses versus male\ndancers) in corpora from three domains, resulting in a first large-scale gender\nbias dataset of 108K diverse real-world English sentences. We manually verify\nthe quality of our corpus and use it to evaluate gender bias in various\ncoreference resolution and machine translation models. We find that all tested\nmodels tend to over-rely on gender stereotypes when presented with natural\ninputs, which may be especially harmful when deployed in commercial systems.\nFinally, we show that our dataset lends itself to finetuning a coreference\nresolution model, finding it mitigates bias on a held out set. Our dataset and\nmodels are publicly available at www.github.com/SLAB-NLP/BUG. We hope they will\nspur future research into gender bias evaluation mitigation techniques in\nrealistic settings.",
"title": "Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation",
"url": "http://arxiv.org/abs/2109.03858v2"
} | null | null | new_dataset | admin | null | false | null | 10fbc559-79e6-4fc6-8480-915d311b510a | null | Validated | 2023-10-04 15:19:51.892219 | {
"text_length": 1370
} | 0 (new_dataset)
|
TITLE: Benchmark tests of atom segmentation deep learning models with a consistent dataset
ABSTRACT: The information content of atomic resolution scanning transmission electron
microscopy (STEM) images can often be reduced to a handful of parameters
describing each atomic column, chief amongst which is the column position.
Neural networks (NNs) are a high performance, computationally efficient method
to automatically locate atomic columns in images, which has led to a profusion
of NN models and associated training datasets. We have developed a benchmark
dataset of simulated and experimental STEM images and used it to evaluate the
performance of two sets of recent NN models for atom location in STEM images.
Both models exhibit high performance for images of varying quality from several
different crystal lattices. However, there are important differences in
performance as a function of image quality, and both models perform poorly for
images outside the training data, such as interfaces with large difference in
background intensity. Both the benchmark dataset and the models are available
using the Foundry service for dissemination, discovery, and reuse of machine
learning models. | {
"abstract": "The information content of atomic resolution scanning transmission electron\nmicroscopy (STEM) images can often be reduced to a handful of parameters\ndescribing each atomic column, chief amongst which is the column position.\nNeural networks (NNs) are a high performance, computationally efficient method\nto automatically locate atomic columns in images, which has led to a profusion\nof NN models and associated training datasets. We have developed a benchmark\ndataset of simulated and experimental STEM images and used it to evaluate the\nperformance of two sets of recent NN models for atom location in STEM images.\nBoth models exhibit high performance for images of varying quality from several\ndifferent crystal lattices. However, there are important differences in\nperformance as a function of image quality, and both models perform poorly for\nimages outside the training data, such as interfaces with large difference in\nbackground intensity. Both the benchmark dataset and the models are available\nusing the Foundry service for dissemination, discovery, and reuse of machine\nlearning models.",
"title": "Benchmark tests of atom segmentation deep learning models with a consistent dataset",
"url": "http://arxiv.org/abs/2207.10173v1"
} | null | null | no_new_dataset | admin | null | false | null | ab7f850f-32e2-4af5-943a-e0a0ce1ede50 | null | Validated | 2023-10-04 15:19:51.885199 | {
"text_length": 1213
} | 1 (no_new_dataset)
|
TITLE: BrazilDAM: A Benchmark dataset for Tailings Dam Detection
ABSTRACT: In this work we present BrazilDAM, a novel public dataset based on Sentinel-2
and Landsat-8 satellite images covering all tailings dams cataloged by the
Brazilian National Mining Agency (ANM). The dataset was built using
georeferenced images from 769 dams, recorded between 2016 and 2019. The time
series were processed in order to produce cloud free images. The dams contain
mining waste from different ore categories and have highly varying shapes,
areas and volumes, making BrazilDAM particularly interesting and challenging to
be used in machine learning benchmarks. The original catalog contains, besides
the dam coordinates, information about: the main ore, constructive method, risk
category, and associated potential damage. To evaluate BrazilDAM's predictive
potential we performed classification essays using state-of-the-art deep
Convolutional Neural Network (CNNs). In the experiments, we achieved an average
classification accuracy of 94.11% in tailing dam binary classification task. In
addition, others four setups of experiments were made using the complementary
information from the original catalog, exhaustively exploiting the capacity of
the proposed dataset. | {
"abstract": "In this work we present BrazilDAM, a novel public dataset based on Sentinel-2\nand Landsat-8 satellite images covering all tailings dams cataloged by the\nBrazilian National Mining Agency (ANM). The dataset was built using\ngeoreferenced images from 769 dams, recorded between 2016 and 2019. The time\nseries were processed in order to produce cloud free images. The dams contain\nmining waste from different ore categories and have highly varying shapes,\nareas and volumes, making BrazilDAM particularly interesting and challenging to\nbe used in machine learning benchmarks. The original catalog contains, besides\nthe dam coordinates, information about: the main ore, constructive method, risk\ncategory, and associated potential damage. To evaluate BrazilDAM's predictive\npotential we performed classification essays using state-of-the-art deep\nConvolutional Neural Network (CNNs). In the experiments, we achieved an average\nclassification accuracy of 94.11% in tailing dam binary classification task. In\naddition, others four setups of experiments were made using the complementary\ninformation from the original catalog, exhaustively exploiting the capacity of\nthe proposed dataset.",
"title": "BrazilDAM: A Benchmark dataset for Tailings Dam Detection",
"url": "http://arxiv.org/abs/2003.07948v2"
} | null | null | new_dataset | admin | null | false | null | df34d82e-ae49-42db-b2cc-daf65dee09d6 | null | Validated | 2023-10-04 15:19:51.901374 | {
"text_length": 1271
} | 0 (new_dataset)
|
TITLE: MMASD: A Multimodal Dataset for Autism Intervention Analysis
ABSTRACT: Autism spectrum disorder (ASD) is a developmental disorder characterized by
significant social communication impairments and difficulties perceiving and
presenting communication cues. Machine learning techniques have been broadly
adopted to facilitate autism studies and assessments. However, computational
models are primarily concentrated on specific analysis and validated on private
datasets in the autism community, which limits comparisons across models due to
privacy-preserving data sharing complications. This work presents a novel
privacy-preserving open-source dataset, MMASD as a MultiModal ASD benchmark
dataset, collected from play therapy interventions of children with Autism.
MMASD includes data from 32 children with ASD, and 1,315 data samples segmented
from over 100 hours of intervention recordings. To promote public access, each
data sample consists of four privacy-preserving modalities of data; some of
which are derived from original videos: (1) optical flow, (2) 2D skeleton, (3)
3D skeleton, and (4) clinician ASD evaluation scores of children, e.g., ADOS
scores. MMASD aims to assist researchers and therapists in understanding
children's cognitive status, monitoring their progress during therapy, and
customizing the treatment plan accordingly. It also has inspiration for
downstream tasks such as action quality assessment and interpersonal synchrony
estimation. MMASD dataset can be easily accessed at
https://github.com/Li-Jicheng/MMASD-A-Multimodal-Dataset-for-Autism-Intervention-Analysis. | {
"abstract": "Autism spectrum disorder (ASD) is a developmental disorder characterized by\nsignificant social communication impairments and difficulties perceiving and\npresenting communication cues. Machine learning techniques have been broadly\nadopted to facilitate autism studies and assessments. However, computational\nmodels are primarily concentrated on specific analysis and validated on private\ndatasets in the autism community, which limits comparisons across models due to\nprivacy-preserving data sharing complications. This work presents a novel\nprivacy-preserving open-source dataset, MMASD as a MultiModal ASD benchmark\ndataset, collected from play therapy interventions of children with Autism.\nMMASD includes data from 32 children with ASD, and 1,315 data samples segmented\nfrom over 100 hours of intervention recordings. To promote public access, each\ndata sample consists of four privacy-preserving modalities of data; some of\nwhich are derived from original videos: (1) optical flow, (2) 2D skeleton, (3)\n3D skeleton, and (4) clinician ASD evaluation scores of children, e.g., ADOS\nscores. MMASD aims to assist researchers and therapists in understanding\nchildren's cognitive status, monitoring their progress during therapy, and\ncustomizing the treatment plan accordingly. It also has inspiration for\ndownstream tasks such as action quality assessment and interpersonal synchrony\nestimation. MMASD dataset can be easily accessed at\nhttps://github.com/Li-Jicheng/MMASD-A-Multimodal-Dataset-for-Autism-Intervention-Analysis.",
"title": "MMASD: A Multimodal Dataset for Autism Intervention Analysis",
"url": "http://arxiv.org/abs/2306.08243v3"
} | null | null | new_dataset | admin | null | false | null | 02df4219-2477-44e6-9c43-32d640964a80 | null | Validated | 2023-10-04 15:19:51.871076 | {
"text_length": 1620
} | 0 (new_dataset)
|
TITLE: PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels
ABSTRACT: Allowing organizations to share their data for training of machine learning
(ML) models without unintended information leakage is an open problem in
practice. A promising technique for this still-open problem is to train models
on the encoded data. Our approach, called Privately Encoded Open Datasets with
Public Labels (PEOPL), uses a certain class of randomly constructed transforms
to encode sensitive data. Organizations publish their randomly encoded data and
associated raw labels for ML training, where training is done without knowledge
of the encoding realization. We investigate several important aspects of this
problem: We introduce information-theoretic scores for privacy and utility,
which quantify the average performance of an unfaithful user (e.g., adversary)
and a faithful user (e.g., model developer) that have access to the published
encoded data. We then theoretically characterize primitives in building
families of encoding schemes that motivate the use of random deep neural
networks. Empirically, we compare the performance of our randomized encoding
scheme and a linear scheme to a suite of computational attacks, and we also
show that our scheme achieves competitive prediction accuracy to raw-sample
baselines. Moreover, we demonstrate that multiple institutions, using
independent random encoders, can collaborate to train improved ML models. | {
"abstract": "Allowing organizations to share their data for training of machine learning\n(ML) models without unintended information leakage is an open problem in\npractice. A promising technique for this still-open problem is to train models\non the encoded data. Our approach, called Privately Encoded Open Datasets with\nPublic Labels (PEOPL), uses a certain class of randomly constructed transforms\nto encode sensitive data. Organizations publish their randomly encoded data and\nassociated raw labels for ML training, where training is done without knowledge\nof the encoding realization. We investigate several important aspects of this\nproblem: We introduce information-theoretic scores for privacy and utility,\nwhich quantify the average performance of an unfaithful user (e.g., adversary)\nand a faithful user (e.g., model developer) that have access to the published\nencoded data. We then theoretically characterize primitives in building\nfamilies of encoding schemes that motivate the use of random deep neural\nnetworks. Empirically, we compare the performance of our randomized encoding\nscheme and a linear scheme to a suite of computational attacks, and we also\nshow that our scheme achieves competitive prediction accuracy to raw-sample\nbaselines. Moreover, we demonstrate that multiple institutions, using\nindependent random encoders, can collaborate to train improved ML models.",
"title": "PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels",
"url": "http://arxiv.org/abs/2304.00047v1"
} | null | null | no_new_dataset | admin | null | false | null | 159e2249-b5f9-4f5c-bced-d3c8106f4bc3 | null | Validated | 2023-10-04 15:19:51.880122 | {
"text_length": 1481
} | 1 (no_new_dataset)
|
TITLE: Design and Development of Rule-based open-domain Question-Answering System on SQuAD v2.0 Dataset
ABSTRACT: Human mind is the palace of curious questions that seek answers.
Computational resolution of this challenge is possible through Natural Language
Processing techniques. Statistical techniques like machine learning and deep
learning require a lot of data to train and despite that they fail to tap into
the nuances of language. Such systems usually perform best on close-domain
datasets. We have proposed development of a rule-based open-domain
question-answering system which is capable of answering questions of any domain
from a corresponding context passage. We have used 1000 questions from SQuAD
2.0 dataset for testing the developed system and it gives satisfactory results.
In this paper, we have described the structure of the developed system and have
analyzed the performance. | {
"abstract": "Human mind is the palace of curious questions that seek answers.\nComputational resolution of this challenge is possible through Natural Language\nProcessing techniques. Statistical techniques like machine learning and deep\nlearning require a lot of data to train and despite that they fail to tap into\nthe nuances of language. Such systems usually perform best on close-domain\ndatasets. We have proposed development of a rule-based open-domain\nquestion-answering system which is capable of answering questions of any domain\nfrom a corresponding context passage. We have used 1000 questions from SQuAD\n2.0 dataset for testing the developed system and it gives satisfactory results.\nIn this paper, we have described the structure of the developed system and have\nanalyzed the performance.",
"title": "Design and Development of Rule-based open-domain Question-Answering System on SQuAD v2.0 Dataset",
"url": "http://arxiv.org/abs/2204.09659v1"
} | null | null | no_new_dataset | admin | null | false | null | 1d73d664-2fb8-406d-a3df-5a5e29cbd5ce | null | Validated | 2023-10-04 15:19:51.887406 | {
"text_length": 916
} | 1 (no_new_dataset)
|
TITLE: Dataset Condensation with Gradient Matching
ABSTRACT: As the state-of-the-art machine learning methods in many fields rely on
larger datasets, storing datasets and training models on them become
significantly more expensive. This paper proposes a training set synthesis
technique for data-efficient learning, called Dataset Condensation, that learns
to condense large dataset into a small set of informative synthetic samples for
training deep neural networks from scratch. We formulate this goal as a
gradient matching problem between the gradients of deep neural network weights
that are trained on the original and our synthetic data. We rigorously evaluate
its performance in several computer vision benchmarks and demonstrate that it
significantly outperforms the state-of-the-art methods. Finally we explore the
use of our method in continual learning and neural architecture search and
report promising gains when limited memory and computations are available. | {
"abstract": "As the state-of-the-art machine learning methods in many fields rely on\nlarger datasets, storing datasets and training models on them become\nsignificantly more expensive. This paper proposes a training set synthesis\ntechnique for data-efficient learning, called Dataset Condensation, that learns\nto condense large dataset into a small set of informative synthetic samples for\ntraining deep neural networks from scratch. We formulate this goal as a\ngradient matching problem between the gradients of deep neural network weights\nthat are trained on the original and our synthetic data. We rigorously evaluate\nits performance in several computer vision benchmarks and demonstrate that it\nsignificantly outperforms the state-of-the-art methods. Finally we explore the\nuse of our method in continual learning and neural architecture search and\nreport promising gains when limited memory and computations are available.",
"title": "Dataset Condensation with Gradient Matching",
"url": "http://arxiv.org/abs/2006.05929v3"
} | null | null | no_new_dataset | admin | null | false | null | 64690501-3341-418f-b282-da7220d9880d | null | Validated | 2023-10-04 15:19:51.899731 | {
"text_length": 991
} | 1 (no_new_dataset)
|
TITLE: Dataset Bias in the Natural Sciences: A Case Study in Chemical Reaction Prediction and Synthesis Design
ABSTRACT: Datasets in the Natural Sciences are often curated with the goal of aiding
scientific understanding and hence may not always be in a form that facilitates
the application of machine learning. In this paper, we identify three trends
within the fields of chemical reaction prediction and synthesis design that
require a change in direction. First, the manner in which reaction datasets are
split into reactants and reagents encourages testing models in an
unrealistically generous manner. Second, we highlight the prevalence of
mislabelled data, and suggest that the focus should be on outlier removal
rather than data fitting only. Lastly, we discuss the problem of reagent
prediction, in addition to reactant prediction, in order to solve the full
synthesis design problem, highlighting the mismatch between what machine
learning solves and what a lab chemist would need. Our critiques are also
relevant to the burgeoning field of using machine learning to accelerate
progress in experimental Natural Sciences, where datasets are often split in a
biased way, are highly noisy, and contextual variables that are not evident
from the data strongly influence the outcome of experiments. | {
"abstract": "Datasets in the Natural Sciences are often curated with the goal of aiding\nscientific understanding and hence may not always be in a form that facilitates\nthe application of machine learning. In this paper, we identify three trends\nwithin the fields of chemical reaction prediction and synthesis design that\nrequire a change in direction. First, the manner in which reaction datasets are\nsplit into reactants and reagents encourages testing models in an\nunrealistically generous manner. Second, we highlight the prevalence of\nmislabelled data, and suggest that the focus should be on outlier removal\nrather than data fitting only. Lastly, we discuss the problem of reagent\nprediction, in addition to reactant prediction, in order to solve the full\nsynthesis design problem, highlighting the mismatch between what machine\nlearning solves and what a lab chemist would need. Our critiques are also\nrelevant to the burgeoning field of using machine learning to accelerate\nprogress in experimental Natural Sciences, where datasets are often split in a\nbiased way, are highly noisy, and contextual variables that are not evident\nfrom the data strongly influence the outcome of experiments.",
"title": "Dataset Bias in the Natural Sciences: A Case Study in Chemical Reaction Prediction and Synthesis Design",
"url": "http://arxiv.org/abs/2105.02637v1"
} | null | null | no_new_dataset | admin | null | false | null | 339ffb47-739d-4717-b6c9-d3ddf45f62c7 | null | Validated | 2023-10-04 15:19:51.894665 | {
"text_length": 1321
} | 1no_new_dataset
|
TITLE: Monant Medical Misinformation Dataset: Mapping Articles to Fact-Checked Claims
ABSTRACT: False information has a significant negative influence on individuals as well
as on the whole society. Especially in the current COVID-19 era, we witness an
unprecedented growth of medical misinformation. To help tackle this problem
with machine learning approaches, we are publishing a feature-rich dataset of
approx. 317k medical news articles/blogs and 3.5k fact-checked claims. It also
contains 573 manually and more than 51k automatically labelled mappings between
claims and articles. Mappings consist of claim presence, i.e., whether a claim
is contained in a given article, and article stance towards the claim. We
provide several baselines for these two tasks and evaluate them on the manually
labelled part of the dataset. The dataset enables a number of additional tasks
related to medical misinformation, such as misinformation characterisation
studies or studies of misinformation diffusion between sources. | {
"abstract": "False information has a significant negative influence on individuals as well\nas on the whole society. Especially in the current COVID-19 era, we witness an\nunprecedented growth of medical misinformation. To help tackle this problem\nwith machine learning approaches, we are publishing a feature-rich dataset of\napprox. 317k medical news articles/blogs and 3.5k fact-checked claims. It also\ncontains 573 manually and more than 51k automatically labelled mappings between\nclaims and articles. Mappings consist of claim presence, i.e., whether a claim\nis contained in a given article, and article stance towards the claim. We\nprovide several baselines for these two tasks and evaluate them on the manually\nlabelled part of the dataset. The dataset enables a number of additional tasks\nrelated to medical misinformation, such as misinformation characterisation\nstudies or studies of misinformation diffusion between sources.",
"title": "Monant Medical Misinformation Dataset: Mapping Articles to Fact-Checked Claims",
"url": "http://arxiv.org/abs/2204.12294v1"
} | null | null | new_dataset | admin | null | false | null | 9c10bba4-0302-4a4b-ba29-f59f25d38f86 | null | Validated | 2023-10-04 15:19:51.886869 | {
"text_length": 1033
} | 0new_dataset
|
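A claim-presence baseline of the kind the Monant record above benchmarks can be sketched with TF-IDF cosine similarity. The texts and the 0.2 threshold below are invented for illustration; this is not one of the paper's actual baselines or its data.
```python
# Toy claim-presence baseline: score each (claim, article) pair by TF-IDF
# cosine similarity and threshold it. Texts and threshold are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

claims = ["Vitamin C cures the common cold."]
articles = [
    "A new study finds no evidence that vitamin C cures the common cold.",
    "Local hospital expands its cardiology department.",
]

vec = TfidfVectorizer().fit(claims + articles)
sims = cosine_similarity(vec.transform(claims), vec.transform(articles))

for claim_idx, row in enumerate(sims):
    for art_idx, s in enumerate(row):
        print(claim_idx, art_idx, round(float(s), 2), "present" if s > 0.2 else "absent")
```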
TITLE: Evaluation of Chinese-English Machine Translation of Emotion-Loaded Microblog Texts: A Human Annotated Dataset for the Quality Assessment of Emotion Translation
ABSTRACT: In this paper, we focus on how current Machine Translation (MT) tools perform
on the translation of emotion-loaded texts by evaluating outputs from Google
Translate according to a framework proposed in this paper. We propose this
evaluation framework based on the Multidimensional Quality Metrics (MQM) and
perform a detailed error analysis of the MT outputs. From our analysis, we
observe that about 50% of the MT outputs fail to preserve the original emotion.
After further analysis of the errors, we find that emotion-carrying words and
linguistic phenomena such as polysemous words, negation, and abbreviation are
common causes of these translation errors.
"abstract": "In this paper, we focus on how current Machine Translation (MT) tools perform\non the translation of emotion-loaded texts by evaluating outputs from Google\nTranslate according to a framework proposed in this paper. We propose this\nevaluation framework based on the Multidimensional Quality Metrics (MQM) and\nperform a detailed error analysis of the MT outputs. From our analysis, we\nobserve that about 50% of the MT outputs fail to preserve the original emotion.\nAfter further analysis of the errors, we find that emotion carrying words and\nlinguistic phenomena such as polysemous words, negation, abbreviation etc., are\ncommon causes for these translation errors.",
"title": "Evaluation of Chinese-English Machine Translation of Emotion-Loaded Microblog Texts: A Human Annotated Dataset for the Quality Assessment of Emotion Translation",
"url": "http://arxiv.org/abs/2306.11900v1"
} | null | null | new_dataset | admin | null | false | null | 17992f71-bba4-4950-880a-8c6c1295bde9 | null | Validated | 2023-10-04 15:19:51.870234 | {
"text_length": 858
} | 0new_dataset
|
TITLE: A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering
ABSTRACT: Legislation can be viewed as a body of prescriptive rules expressed in
natural language. The application of legislation to facts of a case we refer to
as statutory reasoning, where those facts are also expressed in natural
language. Computational statutory reasoning is distinct from most existing work
in machine reading, in that much of the information needed for deciding a case
is declared exactly once (a law), while the information needed in much of
machine reading tends to be learned through distributional language statistics.
To investigate the performance of natural language understanding approaches on
statutory reasoning, we introduce a dataset, together with a legal-domain text
corpus. Straightforward application of machine reading models exhibits low
out-of-the-box performance on our questions, whether or not they have been
fine-tuned to the legal domain. We contrast this with a hand-constructed
Prolog-based system, designed to fully solve the task. These experiments
support a discussion of the challenges facing statutory reasoning moving
forward, which we argue is an interesting real-world task that can motivate the
development of models able to utilize prescriptive rules specified in natural
language. | {
"abstract": "Legislation can be viewed as a body of prescriptive rules expressed in\nnatural language. The application of legislation to facts of a case we refer to\nas statutory reasoning, where those facts are also expressed in natural\nlanguage. Computational statutory reasoning is distinct from most existing work\nin machine reading, in that much of the information needed for deciding a case\nis declared exactly once (a law), while the information needed in much of\nmachine reading tends to be learned through distributional language statistics.\nTo investigate the performance of natural language understanding approaches on\nstatutory reasoning, we introduce a dataset, together with a legal-domain text\ncorpus. Straightforward application of machine reading models exhibits low\nout-of-the-box performance on our questions, whether or not they have been\nfine-tuned to the legal domain. We contrast this with a hand-constructed\nProlog-based system, designed to fully solve the task. These experiments\nsupport a discussion of the challenges facing statutory reasoning moving\nforward, which we argue is an interesting real-world task that can motivate the\ndevelopment of models able to utilize prescriptive rules specified in natural\nlanguage.",
"title": "A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering",
"url": "http://arxiv.org/abs/2005.05257v3"
} | null | null | new_dataset | admin | null | false | null | aab60b27-f2f1-4c7f-abed-f9b8651b53a0 | null | Validated | 2023-10-04 15:19:51.899998 | {
"text_length": 1343
} | 0new_dataset
|
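The contrast the statutory-reasoning record above draws with its hand-constructed Prolog system can be made concrete with a toy rule-application sketch. The statute, figures, and flat rate below are invented for illustration and are not the paper's actual tax provisions.
```python
# Toy illustration of statutory reasoning as explicit rule application, in
# the spirit of the hand-built system the abstract contrasts with machine
# reading. All rules and numbers are hypothetical.
facts = {"income": 60_000, "filing_status": "single", "dependents": 0}

def standard_deduction(f):
    # Hypothetical statute: single filers deduct 12,000; all others 24,000.
    return 12_000 if f["filing_status"] == "single" else 24_000

def tax_owed(f):
    taxable = max(0, f["income"] - standard_deduction(f))
    return round(taxable * 0.10, 2)  # hypothetical flat 10% rate

print(tax_owed(facts))  # 4800.0
```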
TITLE: A Dataset-Level Geometric Framework for Ensemble Classifiers
ABSTRACT: Ensemble classifiers have been investigated by many in the artificial
intelligence and machine learning community. Majority voting and weighted
majority voting are two commonly used combination schemes in ensemble learning.
However, understanding of them is incomplete at best, with some properties even
misunderstood. In this paper, we present a group of properties of these two
schemes formally under a dataset-level geometric framework. Two key factors,
every component base classifier's performance and dissimilarity between each
pair of component classifiers are evaluated by the same metric - the Euclidean
distance. Consequently, ensembling becomes a deterministic problem and the
performance of an ensemble can be calculated directly by a formula. We prove
several theorems of interest and explain their implications for ensembles. In
particular, we compare and contrast the effect of the number of component
classifiers on these two types of ensemble schemes. Empirical investigation is
also conducted to verify the theoretical results when other metrics such as
accuracy are used. We believe that the results from this paper are very useful
for us to understand the fundamental properties of these two combination
schemes and the principles of ensemble classifiers in general. The results are
also helpful for us to investigate some issues in ensemble classifiers, such as
ensemble performance prediction, selecting a small number of base classifiers
to obtain efficient and effective ensembles. | {
"abstract": "Ensemble classifiers have been investigated by many in the artificial\nintelligence and machine learning community. Majority voting and weighted\nmajority voting are two commonly used combination schemes in ensemble learning.\nHowever, understanding of them is incomplete at best, with some properties even\nmisunderstood. In this paper, we present a group of properties of these two\nschemes formally under a dataset-level geometric framework. Two key factors,\nevery component base classifier's performance and dissimilarity between each\npair of component classifiers are evaluated by the same metric - the Euclidean\ndistance. Consequently, ensembling becomes a deterministic problem and the\nperformance of an ensemble can be calculated directly by a formula. We prove\nseveral theorems of interest and explain their implications for ensembles. In\nparticular, we compare and contrast the effect of the number of component\nclassifiers on these two types of ensemble schemes. Empirical investigation is\nalso conducted to verify the theoretical results when other metrics such as\naccuracy are used. We believe that the results from this paper are very useful\nfor us to understand the fundamental properties of these two combination\nschemes and the principles of ensemble classifiers in general. The results are\nalso helpful for us to investigate some issues in ensemble classifiers, such as\nensemble performance prediction, selecting a small number of base classifiers\nto obtain efficient and effective ensembles.",
"title": "A Dataset-Level Geometric Framework for Ensemble Classifiers",
"url": "http://arxiv.org/abs/2106.08658v1"
} | null | null | no_new_dataset | admin | null | false | null | 359c0495-400f-4ad8-b7e1-efae79e4d313 | null | Validated | 2023-10-04 15:19:51.894192 | {
"text_length": 1600
} | 1no_new_dataset
|
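The dataset-level geometric view in the record above can be illustrated directly: each classifier becomes the vector of its predictions over the whole dataset, and both base classifiers and voting ensembles are scored by Euclidean distance to the ground-truth vector. The labels, predictions, and weights below are assumed toy values.
```python
# Sketch of the dataset-level geometric framing: predictions as vectors,
# performance as Euclidean distance to the ground truth (smaller is better).
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
preds = np.array([
    [1, 0, 1, 0, 0, 1, 1, 0],   # classifier A
    [1, 1, 1, 1, 0, 0, 0, 0],   # classifier B
    [0, 0, 1, 1, 0, 1, 0, 1],   # classifier C
])

def distance(p):
    return np.linalg.norm(p - y_true)

majority = (preds.mean(axis=0) > 0.5).astype(int)      # majority voting

w = np.array([0.5, 0.3, 0.2])                          # assumed voting weights
weighted = (w @ preds > 0.5).astype(int)               # weighted majority voting

for name, p in [("A", preds[0]), ("B", preds[1]), ("C", preds[2]),
                ("majority", majority), ("weighted", weighted)]:
    print(name, distance(p))
```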
TITLE: BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation
ABSTRACT: Recent advances in deep learning techniques have enabled machines to generate
cohesive open-ended text when prompted with a sequence of words as context.
While these models now empower many downstream applications from conversation
bots to automatic storytelling, they have been shown to generate texts that
exhibit social biases. To systematically study and benchmark social biases in
open-ended language generation, we introduce the Bias in Open-Ended Language
Generation Dataset (BOLD), a large-scale dataset that consists of 23,679
English text generation prompts for bias benchmarking across five domains:
profession, gender, race, religion, and political ideology. We also propose new
automated metrics for toxicity, psycholinguistic norms, and text gender
polarity to measure social biases in open-ended text generation from multiple
angles. An examination of text generated from three popular language models
reveals that the majority of these models exhibit a larger social bias than
human-written Wikipedia text across all domains. With these results we
highlight the need to benchmark biases in open-ended language generation and
caution users of language generation models on downstream tasks to be cognizant
of these embedded prejudices. | {
"abstract": "Recent advances in deep learning techniques have enabled machines to generate\ncohesive open-ended text when prompted with a sequence of words as context.\nWhile these models now empower many downstream applications from conversation\nbots to automatic storytelling, they have been shown to generate texts that\nexhibit social biases. To systematically study and benchmark social biases in\nopen-ended language generation, we introduce the Bias in Open-Ended Language\nGeneration Dataset (BOLD), a large-scale dataset that consists of 23,679\nEnglish text generation prompts for bias benchmarking across five domains:\nprofession, gender, race, religion, and political ideology. We also propose new\nautomated metrics for toxicity, psycholinguistic norms, and text gender\npolarity to measure social biases in open-ended text generation from multiple\nangles. An examination of text generated from three popular language models\nreveals that the majority of these models exhibit a larger social bias than\nhuman-written Wikipedia text across all domains. With these results we\nhighlight the need to benchmark biases in open-ended language generation and\ncaution users of language generation models on downstream tasks to be cognizant\nof these embedded prejudices.",
"title": "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation",
"url": "http://arxiv.org/abs/2101.11718v1"
} | null | null | new_dataset | admin | null | false | null | 454b438c-a6dd-43f3-9eeb-77caa589922f | null | Validated | 2023-10-04 15:19:51.896133 | {
"text_length": 1365
} | 0new_dataset
|
TITLE: Enhancing Mortality Prediction in Heart Failure Patients: Exploring Preprocessing Methods for Imbalanced Clinical Datasets
ABSTRACT: Heart failure (HF) is a critical condition in which the accurate prediction
of mortality plays a vital role in guiding patient management decisions.
However, clinical datasets used for mortality prediction in HF often suffer
from an imbalanced distribution of classes, posing significant challenges. In
this paper, we explore preprocessing methods for enhancing one-month mortality
prediction in HF patients. We present a comprehensive preprocessing framework
including scaling, outliers processing and resampling as key techniques. We
also employed an aware encoding approach to effectively handle missing values
in clinical datasets. Our study utilizes a comprehensive dataset from the
Persian Registry Of cardio Vascular disease (PROVE) with a significant class
imbalance. By leveraging appropriate preprocessing techniques and Machine
Learning (ML) algorithms, we aim to improve mortality prediction performance
for HF patients. The results reveal an average enhancement of approximately
3.6% in F1 score and 2.7% in MCC for tree-based models, specifically Random
Forest (RF) and XGBoost (XGB). This demonstrates the efficiency of our
preprocessing approach in effectively handling Imbalanced Clinical Datasets
(ICD). Our findings hold promise in guiding healthcare professionals to make
informed decisions and improve patient outcomes in HF management. | {
"abstract": "Heart failure (HF) is a critical condition in which the accurate prediction\nof mortality plays a vital role in guiding patient management decisions.\nHowever, clinical datasets used for mortality prediction in HF often suffer\nfrom an imbalanced distribution of classes, posing significant challenges. In\nthis paper, we explore preprocessing methods for enhancing one-month mortality\nprediction in HF patients. We present a comprehensive preprocessing framework\nincluding scaling, outliers processing and resampling as key techniques. We\nalso employed an aware encoding approach to effectively handle missing values\nin clinical datasets. Our study utilizes a comprehensive dataset from the\nPersian Registry Of cardio Vascular disease (PROVE) with a significant class\nimbalance. By leveraging appropriate preprocessing techniques and Machine\nLearning (ML) algorithms, we aim to improve mortality prediction performance\nfor HF patients. The results reveal an average enhancement of approximately\n3.6% in F1 score and 2.7% in MCC for tree-based models, specifically Random\nForest (RF) and XGBoost (XGB). This demonstrates the efficiency of our\npreprocessing approach in effectively handling Imbalanced Clinical Datasets\n(ICD). Our findings hold promise in guiding healthcare professionals to make\ninformed decisions and improve patient outcomes in HF management.",
"title": "Enhancing Mortality Prediction in Heart Failure Patients: Exploring Preprocessing Methods for Imbalanced Clinical Datasets",
"url": "http://arxiv.org/abs/2310.00457v1"
} | null | null | no_new_dataset | admin | null | false | null | 49bfecdd-c83c-4347-b136-72300d5ee06c | null | Validated | 2023-10-04 15:19:51.862912 | {
"text_length": 1514
} | 1no_new_dataset
|
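The preprocessing pipeline sketched in the heart-failure record above (scaling, resampling, a tree-based model, F1/MCC evaluation) can be assembled with scikit-learn and imbalanced-learn. Synthetic data stands in for the non-public PROVE registry, and SMOTE is one assumed choice of resampler.
```python
# Minimal sketch of preprocessing for an imbalanced clinical dataset:
# scaling, training-fold-only oversampling, random forest, F1 and MCC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)  # ~5% positives
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("resample", SMOTE(random_state=0)),          # applied only when fitting
    ("clf", RandomForestClassifier(random_state=0)),
])
pipe.fit(X_tr, y_tr)
pred = pipe.predict(X_te)
print("F1:", f1_score(y_te, pred), "MCC:", matthews_corrcoef(y_te, pred))
```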
TITLE: Multi-feature Dataset for Windows PE Malware Classification
ABSTRACT: This paper describes a multi-feature dataset for training machine learning
classifiers for detecting malicious Windows Portable Executable (PE) files. The
dataset includes four feature sets from 18,551 binary samples belonging to five
malware families including Spyware, Ransomware, Downloader, Backdoor and
Generic Malware. The feature sets include the list of DLLs and their functions,
values of different fields of PE Header and Sections. First, we explain the
data collection and creation phase, and then we explain how we labelled the
samples using VirusTotal's services. Finally, we explore the dataset to
describe how this dataset can benefit researchers working on static malware
analysis. The dataset is made public in the hope that it will help inspire
machine learning research for malware detection. | {
"abstract": "This paper describes a multi-feature dataset for training machine learning\nclassifiers for detecting malicious Windows Portable Executable (PE) files. The\ndataset includes four feature sets from 18,551 binary samples belonging to five\nmalware families including Spyware, Ransomware, Downloader, Backdoor and\nGeneric Malware. The feature sets include the list of DLLs and their functions,\nvalues of different fields of PE Header and Sections. First, we explain the\ndata collection and creation phase and then we explain how did we label the\nsamples in it using VirusTotal's services. Finally, we explore the dataset to\ndescribe how this dataset can benefit the researchers for static malware\nanalysis. The dataset is made public in the hope that it will help inspire\nmachine learning research for malware detection.",
"title": "Multi-feature Dataset for Windows PE Malware Classification",
"url": "http://arxiv.org/abs/2210.16285v1"
} | null | null | new_dataset | admin | null | false | null | 59ca26fc-a4b8-4f78-94dd-16d31625ab97 | null | Validated | 2023-10-04 15:19:51.883255 | {
"text_length": 908
} | 0new_dataset
|
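Features of the kind the Windows PE record above describes (imported DLLs and functions, PE header fields, section fields) can be pulled from a binary with the third-party pefile library. The file path below is a placeholder and the chosen fields are an assumed subset, not the dataset's exact feature schema.
```python
# Sketch of static PE feature extraction with `pefile`; needs a real PE file.
import pefile

pe = pefile.PE("sample.exe")  # placeholder path

features = {
    "machine": pe.FILE_HEADER.Machine,
    "num_sections": pe.FILE_HEADER.NumberOfSections,
    "entry_point": pe.OPTIONAL_HEADER.AddressOfEntryPoint,
    "size_of_image": pe.OPTIONAL_HEADER.SizeOfImage,
    "sections": [(s.Name.rstrip(b"\x00").decode(errors="replace"),
                  s.SizeOfRawData) for s in pe.sections],
}
# Imports as (dll, function) pairs; function names can be None for ordinals.
features["imports"] = [
    (entry.dll.decode(errors="replace"),
     imp.name.decode(errors="replace") if imp.name else f"ord{imp.ordinal}")
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", [])
    for imp in entry.imports
]
print({k: features[k] for k in ("machine", "num_sections", "entry_point")})
```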
TITLE: METS-CoV: A Dataset of Medical Entity and Targeted Sentiment on COVID-19 Related Tweets
ABSTRACT: The COVID-19 pandemic continues to bring up various topics discussed or
debated on social media. In order to explore the impact of pandemics on
people's lives, it is crucial to understand the public's concerns and attitudes
towards pandemic-related entities (e.g., drugs, vaccines) on social media.
However, models trained on existing named entity recognition (NER) or targeted
sentiment analysis (TSA) datasets have limited ability to understand
COVID-19-related social media texts because these datasets are not designed or
annotated from a medical perspective. This paper releases METS-CoV, a dataset
containing medical entities and targeted sentiments from COVID-19-related
tweets. METS-CoV contains 10,000 tweets with 7 types of entities, including 4
medical entity types (Disease, Drug, Symptom, and Vaccine) and 3 general entity
types (Person, Location, and Organization). To further investigate tweet users'
attitudes toward specific entities, 4 types of entities (Person, Organization,
Drug, and Vaccine) are selected and annotated with user sentiments, resulting
in a targeted sentiment dataset with 9,101 entities (in 5,278 tweets). To the
best of our knowledge, METS-CoV is the first dataset to collect medical
entities and corresponding sentiments of COVID-19-related tweets. We benchmark
the performance of classical machine learning models and state-of-the-art deep
learning models on NER and TSA tasks with extensive experiments. Results show
that the dataset has vast room for improvement for both NER and TSA tasks.
METS-CoV is an important resource for developing better medical social media
tools and facilitating computational social science research, especially in
epidemiology. Our data, annotation guidelines, benchmark models, and source
code are publicly available (https://github.com/YLab-Open/METS-CoV) to ensure
reproducibility. | {
"abstract": "The COVID-19 pandemic continues to bring up various topics discussed or\ndebated on social media. In order to explore the impact of pandemics on\npeople's lives, it is crucial to understand the public's concerns and attitudes\ntowards pandemic-related entities (e.g., drugs, vaccines) on social media.\nHowever, models trained on existing named entity recognition (NER) or targeted\nsentiment analysis (TSA) datasets have limited ability to understand\nCOVID-19-related social media texts because these datasets are not designed or\nannotated from a medical perspective. This paper releases METS-CoV, a dataset\ncontaining medical entities and targeted sentiments from COVID-19-related\ntweets. METS-CoV contains 10,000 tweets with 7 types of entities, including 4\nmedical entity types (Disease, Drug, Symptom, and Vaccine) and 3 general entity\ntypes (Person, Location, and Organization). To further investigate tweet users'\nattitudes toward specific entities, 4 types of entities (Person, Organization,\nDrug, and Vaccine) are selected and annotated with user sentiments, resulting\nin a targeted sentiment dataset with 9,101 entities (in 5,278 tweets). To the\nbest of our knowledge, METS-CoV is the first dataset to collect medical\nentities and corresponding sentiments of COVID-19-related tweets. We benchmark\nthe performance of classical machine learning models and state-of-the-art deep\nlearning models on NER and TSA tasks with extensive experiments. Results show\nthat the dataset has vast room for improvement for both NER and TSA tasks.\nMETS-CoV is an important resource for developing better medical social media\ntools and facilitating computational social science research, especially in\nepidemiology. Our data, annotation guidelines, benchmark models, and source\ncode are publicly available (https://github.com/YLab-Open/METS-CoV) to ensure\nreproducibility.",
"title": "METS-CoV: A Dataset of Medical Entity and Targeted Sentiment on COVID-19 Related Tweets",
"url": "http://arxiv.org/abs/2209.13773v1"
} | null | null | new_dataset | admin | null | false | null | 401077d3-09e9-4b72-914e-7e0d65b3e45d | null | Validated | 2023-10-04 15:19:51.883760 | {
"text_length": 1979
} | 0new_dataset
|
TITLE: An Extensive Study on Cross-Dataset Bias and Evaluation Metrics Interpretation for Machine Learning applied to Gastrointestinal Tract Abnormality Classification
ABSTRACT: Precise and efficient automated identification of Gastrointestinal (GI) tract
diseases can help doctors treat more patients and improve the rate of disease
detection and identification. Currently, automatic analysis of diseases in the
GI tract is a hot topic in both computer science and medical-related journals.
Nevertheless, the evaluation of such an automatic analysis is often incomplete
or simply wrong. Algorithms are often only tested on small and biased datasets,
and cross-dataset evaluations are rarely performed. A clear understanding of
evaluation metrics and machine learning models with cross datasets is crucial
to bring research in the field to a new quality level. Towards this goal, we
present comprehensive evaluations of five distinct machine learning models
using Global Features and Deep Neural Networks that can classify 16 different
key types of GI tract conditions, including pathological findings, anatomical
landmarks, polyp removal conditions, and normal findings from images captured
by common GI tract examination instruments. In our evaluation, we introduce
performance hexagons using six performance metrics such as recall, precision,
specificity, accuracy, F1-score, and Matthews Correlation Coefficient to
demonstrate how to determine the real capabilities of models rather than
evaluating them shallowly. Furthermore, we perform cross-dataset evaluations
using different datasets for training and testing. With these cross-dataset
evaluations, we demonstrate the challenge of actually building a generalizable
model that could be used across different hospitals. Our experiments clearly
show that more sophisticated performance metrics and evaluation methods need to
be applied to get reliable models rather than depending on evaluations of the
splits of the same dataset, i.e., the performance metrics should always be
interpreted together rather than relying on a single metric. | {
"abstract": "Precise and efficient automated identification of Gastrointestinal (GI) tract\ndiseases can help doctors treat more patients and improve the rate of disease\ndetection and identification. Currently, automatic analysis of diseases in the\nGI tract is a hot topic in both computer science and medical-related journals.\nNevertheless, the evaluation of such an automatic analysis is often incomplete\nor simply wrong. Algorithms are often only tested on small and biased datasets,\nand cross-dataset evaluations are rarely performed. A clear understanding of\nevaluation metrics and machine learning models with cross datasets is crucial\nto bring research in the field to a new quality level. Towards this goal, we\npresent comprehensive evaluations of five distinct machine learning models\nusing Global Features and Deep Neural Networks that can classify 16 different\nkey types of GI tract conditions, including pathological findings, anatomical\nlandmarks, polyp removal conditions, and normal findings from images captured\nby common GI tract examination instruments. In our evaluation, we introduce\nperformance hexagons using six performance metrics such as recall, precision,\nspecificity, accuracy, F1-score, and Matthews Correlation Coefficient to\ndemonstrate how to determine the real capabilities of models rather than\nevaluating them shallowly. Furthermore, we perform cross-dataset evaluations\nusing different datasets for training and testing. With these cross-dataset\nevaluations, we demonstrate the challenge of actually building a generalizable\nmodel that could be used across different hospitals. Our experiments clearly\nshow that more sophisticated performance metrics and evaluation methods need to\nbe applied to get reliable models rather than depending on evaluations of the\nsplits of the same dataset, i.e., the performance metrics should always be\ninterpreted together rather than relying on a single metric.",
"title": "An Extensive Study on Cross-Dataset Bias and Evaluation Metrics Interpretation for Machine Learning applied to Gastrointestinal Tract Abnormality Classification",
"url": "http://arxiv.org/abs/2005.03912v1"
} | null | null | no_new_dataset | admin | null | false | null | a21c3561-2dd3-4a4c-9790-b694c72641e3 | null | Validated | 2023-10-04 15:19:51.900094 | {
"text_length": 2111
} | 1no_new_dataset
|
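The six metrics behind the "performance hexagons" in the record above are all standard and can be computed from a confusion matrix; specificity is simply the recall of the negative class. The labels below are toy values.
```python
# The six hexagon metrics on a toy binary example.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             matthews_corrcoef, precision_score, recall_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
hexagon = {
    "recall":      recall_score(y_true, y_pred),
    "precision":   precision_score(y_true, y_pred),
    "specificity": tn / (tn + fp),        # recall of the negative class
    "accuracy":    accuracy_score(y_true, y_pred),
    "f1":          f1_score(y_true, y_pred),
    "mcc":         matthews_corrcoef(y_true, y_pred),
}
print(hexagon)
```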
TITLE: Intrinsic Bias Identification on Medical Image Datasets
ABSTRACT: Machine learning based medical image analysis highly depends on datasets.
Biases in the dataset can be learned by the model and degrade the
generalizability of the applications. There are studies on debiased models.
However, it is difficult for scientists and practitioners to identify implicit
biases in datasets, which leads to a lack of reliable unbiased test datasets for
validating models. To tackle this issue, we first define the data intrinsic bias
attribute, and then propose a novel bias identification framework for medical
image datasets. The framework contains two major components, KlotskiNet and
Bias Discriminant Direction Analysis (bdda): KlotskiNet builds a mapping under
which backgrounds distinguish positive from negative samples, and bdda provides
a theoretical solution for determining bias attributes.
Experimental results on three datasets show the effectiveness of the bias
attributes discovered by the framework. | {
"abstract": "Machine learning based medical image analysis highly depends on datasets.\nBiases in the dataset can be learned by the model and degrade the\ngeneralizability of the applications. There are studies on debiased models.\nHowever, scientists and practitioners are difficult to identify implicit biases\nin the datasets, which causes lack of reliable unbias test datasets to valid\nmodels. To tackle this issue, we first define the data intrinsic bias\nattribute, and then propose a novel bias identification framework for medical\nimage datasets. The framework contains two major components, KlotskiNet and\nBias Discriminant Direction Analysis(bdda), where KlostkiNet is to build the\nmapping which makes backgrounds to distinguish positive and negative samples\nand bdda provides a theoretical solution on determining bias attributes.\nExperimental results on three datasets show the effectiveness of the bias\nattributes discovered by the framework.",
"title": "Intrinsic Bias Identification on Medical Image Datasets",
"url": "http://arxiv.org/abs/2203.12872v2"
} | null | null | no_new_dataset | admin | null | false | null | 55e2a4bf-c990-4bbd-8bdd-7ec9dca77f04 | null | Validated | 2023-10-04 15:19:51.887499 | {
"text_length": 1027
} | 1no_new_dataset
|
TITLE: Space, Time, and Interaction: A Taxonomy of Corner Cases in Trajectory Datasets for Automated Driving
ABSTRACT: Trajectory data analysis is an essential component for highly automated
driving. Complex models developed with these data predict other road users'
movement and behavior patterns. Based on these predictions - and additional
contextual information such as the course of the road, (traffic) rules, and
interaction with other road users - the highly automated vehicle (HAV) must be
able to reliably and safely perform the task assigned to it, e.g., moving from
point A to B. Ideally, the HAV moves safely through its environment, just as we
would expect a human driver to do. However, if unusual trajectories occur,
so-called trajectory corner cases, a human driver can usually cope well, but an
HAV can quickly get into trouble. In the definition of trajectory corner cases,
which we provide in this work, we will consider the relevance of unusual
trajectories with respect to the task at hand. Based on this, we will also
present a taxonomy of different trajectory corner cases. The categorization of
corner cases into the taxonomy will be shown with examples and is done by cause
and required data sources. To illustrate the complexity between the machine
learning (ML) model and the corner case cause, we present a general processing
chain underlying the taxonomy. | {
"abstract": "Trajectory data analysis is an essential component for highly automated\ndriving. Complex models developed with these data predict other road users'\nmovement and behavior patterns. Based on these predictions - and additional\ncontextual information such as the course of the road, (traffic) rules, and\ninteraction with other road users - the highly automated vehicle (HAV) must be\nable to reliably and safely perform the task assigned to it, e.g., moving from\npoint A to B. Ideally, the HAV moves safely through its environment, just as we\nwould expect a human driver to do. However, if unusual trajectories occur,\nso-called trajectory corner cases, a human driver can usually cope well, but an\nHAV can quickly get into trouble. In the definition of trajectory corner cases,\nwhich we provide in this work, we will consider the relevance of unusual\ntrajectories with respect to the task at hand. Based on this, we will also\npresent a taxonomy of different trajectory corner cases. The categorization of\ncorner cases into the taxonomy will be shown with examples and is done by cause\nand required data sources. To illustrate the complexity between the machine\nlearning (ML) model and the corner case cause, we present a general processing\nchain underlying the taxonomy.",
"title": "Space, Time, and Interaction: A Taxonomy of Corner Cases in Trajectory Datasets for Automated Driving",
"url": "http://arxiv.org/abs/2210.08885v1"
} | null | null | no_new_dataset | admin | null | false | null | 62c45eb5-fb7e-4161-9927-0f3104cea3d9 | null | Validated | 2023-10-04 15:19:51.883452 | {
"text_length": 1401
} | 1no_new_dataset
|
TITLE: Sentiment Analysis of Persian Language: Review of Algorithms, Approaches and Datasets
ABSTRACT: Sentiment analysis aims to extract people's emotions and opinions from their
comments on the web. It is widely used in businesses to detect sentiment in social
data, gauge brand reputation, and understand customers. Most articles in this
area have concentrated on the English language, whereas resources for the
Persian language are limited. In this review paper, articles published between
2018 and 2022 on sentiment analysis in the Persian language are collected, and
their methods, approaches, and datasets are explained and analyzed. Almost all
the methods used to solve sentiment analysis are machine learning and deep
learning. The purpose of this paper is to examine 40 different approaches to
sentiment analysis in the Persian language, to analyze the datasets along with
the accuracy of the algorithms applied to them, and to review the strengths and
weaknesses of each. Among all the methods, transformers such as BERT and
recurrent neural networks such as LSTM and Bi-LSTM have achieved higher
accuracy in sentiment analysis. In addition to the methods and approaches, the
datasets reviewed between 2018 and 2022 are listed, and information about each
dataset and its details is provided.
"abstract": "Sentiment analysis aims to extract people's emotions and opinion from their\ncomments on the web. It widely used in businesses to detect sentiment in social\ndata, gauge brand reputation, and understand customers. Most of articles in\nthis area have concentrated on the English language whereas there are limited\nresources for Persian language. In this review paper, recent published articles\nbetween 2018 and 2022 in sentiment analysis in Persian Language have been\ncollected and their methods, approach and dataset will be explained and\nanalyzed. Almost all the methods used to solve sentiment analysis are machine\nlearning and deep learning. The purpose of this paper is to examine 40\ndifferent approach sentiment analysis in the Persian Language, analysis\ndatasets along with the accuracy of the algorithms applied to them and also\nreview strengths and weaknesses of each. Among all the methods, transformers\nsuch as BERT and RNN Neural Networks such as LSTM and Bi-LSTM have achieved\nhigher accuracy in the sentiment analysis. In addition to the methods and\napproaches, the datasets reviewed are listed between 2018 and 2022 and\ninformation about each dataset and its details are provided.",
"title": "Sentiment Analysis of Persian Language: Review of Algorithms, Approaches and Datasets",
"url": "http://arxiv.org/abs/2212.06041v1"
} | null | null | no_new_dataset | admin | null | false | null | 28a03d50-7c85-4a8b-aea1-a81c38f5012b | null | Validated | 2023-10-04 15:19:51.882940 | {
"text_length": 1311
} | 1no_new_dataset
|
TITLE: Efficient Large Scale Medical Image Dataset Preparation for Machine Learning Applications
ABSTRACT: In the rapidly evolving field of medical imaging, machine learning algorithms
have become indispensable for enhancing diagnostic accuracy. However, the
effectiveness of these algorithms is contingent upon the availability and
organization of high-quality medical imaging datasets. Traditional Digital
Imaging and Communications in Medicine (DICOM) data management systems are
inadequate for handling the scale and complexity of the data required by
machine learning algorithms. This paper introduces an innovative
data curation tool, developed as part of the Kaapana open-source toolkit, aimed
at streamlining the organization, management, and processing of large-scale
medical imaging datasets. The tool is specifically tailored to meet the needs
of radiologists and machine learning researchers. It incorporates advanced
search, auto-annotation and efficient tagging functionalities for improved data
curation. Additionally, the tool facilitates quality control and review,
enabling researchers to validate image and segmentation quality in large
datasets. It also plays a critical role in uncovering potential biases in
datasets by aggregating and visualizing metadata, which is essential for
developing robust machine learning models. Furthermore, Kaapana is integrated
within the Radiological Cooperative Network (RACOON), a pioneering initiative
aimed at creating a comprehensive national infrastructure for the aggregation,
transmission, and consolidation of radiological data across all university
clinics throughout Germany. A supplementary video showcasing the tool's
functionalities can be accessed at https://bit.ly/MICCAI-DEMI2023. | {
"abstract": "In the rapidly evolving field of medical imaging, machine learning algorithms\nhave become indispensable for enhancing diagnostic accuracy. However, the\neffectiveness of these algorithms is contingent upon the availability and\norganization of high-quality medical imaging datasets. Traditional Digital\nImaging and Communications in Medicine (DICOM) data management systems are\ninadequate for handling the scale and complexity of data required to be\nfacilitated in machine learning algorithms. This paper introduces an innovative\ndata curation tool, developed as part of the Kaapana open-source toolkit, aimed\nat streamlining the organization, management, and processing of large-scale\nmedical imaging datasets. The tool is specifically tailored to meet the needs\nof radiologists and machine learning researchers. It incorporates advanced\nsearch, auto-annotation and efficient tagging functionalities for improved data\ncuration. Additionally, the tool facilitates quality control and review,\nenabling researchers to validate image and segmentation quality in large\ndatasets. It also plays a critical role in uncovering potential biases in\ndatasets by aggregating and visualizing metadata, which is essential for\ndeveloping robust machine learning models. Furthermore, Kaapana is integrated\nwithin the Radiological Cooperative Network (RACOON), a pioneering initiative\naimed at creating a comprehensive national infrastructure for the aggregation,\ntransmission, and consolidation of radiological data across all university\nclinics throughout Germany. A supplementary video showcasing the tool's\nfunctionalities can be accessed at https://bit.ly/MICCAI-DEMI2023.",
"title": "Efficient Large Scale Medical Image Dataset Preparation for Machine Learning Applications",
"url": "http://arxiv.org/abs/2309.17285v1"
} | null | null | no_new_dataset | admin | null | false | null | 804eb9d6-26b6-4a2f-bf61-c8edbaa10136 | null | Validated | 2023-10-04 15:19:51.862964 | {
"text_length": 1782
} | 1no_new_dataset
|
TITLE: Composite Score for Anomaly Detection in Imbalanced Real-World Industrial Dataset
ABSTRACT: In recent years, the industrial sector has evolved towards its fourth
revolution. The quality control domain is particularly interested in advanced
machine learning for computer vision anomaly detection. Nevertheless, several
challenges have to be faced, including imbalanced datasets, the image
complexity, and the zero-false-negative (ZFN) constraint to guarantee the
high-quality requirement. This paper illustrates a use case for an industrial
partner, where Printed Circuit Board Assembly (PCBA) images are first
reconstructed with a Vector Quantized Generative Adversarial Network (VQGAN)
trained on normal products. Then, several multi-level metrics are extracted on
a few normal and abnormal images, highlighting anomalies through reconstruction
differences. Finally, a classifier is trained to build a composite anomaly score
from the extracted metrics. This three-step approach is performed on the
public MVTec-AD datasets and on the partner PCBA dataset, where it achieves a
regular accuracy of 95.69% and 87.93% under the ZFN constraint. | {
"abstract": "In recent years, the industrial sector has evolved towards its fourth\nrevolution. The quality control domain is particularly interested in advanced\nmachine learning for computer vision anomaly detection. Nevertheless, several\nchallenges have to be faced, including imbalanced datasets, the image\ncomplexity, and the zero-false-negative (ZFN) constraint to guarantee the\nhigh-quality requirement. This paper illustrates a use case for an industrial\npartner, where Printed Circuit Board Assembly (PCBA) images are first\nreconstructed with a Vector Quantized Generative Adversarial Network (VQGAN)\ntrained on normal products. Then, several multi-level metrics are extracted on\na few normal and abnormal images, highlighting anomalies through reconstruction\ndifferences. Finally, a classifer is trained to build a composite anomaly score\nthanks to the metrics extracted. This three-step approach is performed on the\npublic MVTec-AD datasets and on the partner PCBA dataset, where it achieves a\nregular accuracy of 95.69% and 87.93% under the ZFN constraint.",
"title": "Composite Score for Anomaly Detection in Imbalanced Real-World Industrial Dataset",
"url": "http://arxiv.org/abs/2211.15513v1"
} | null | null | no_new_dataset | admin | null | false | null | e6b2a322-86eb-4355-8402-7dd4c71442ba | null | Validated | 2023-10-04 15:19:51.882652 | {
"text_length": 1169
} | 1no_new_dataset
|
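The three-step approach in the composite-score record above can be sketched end to end: reconstruct with a model trained only on normal products, extract multi-level reconstruction-difference metrics, and train a classifier whose probability output is the composite score. The VQGAN is stubbed with a per-pixel normal prior and the images are random arrays, so every concrete detail below is an assumption.
```python
# Hedged sketch: reconstruction residuals -> multi-level metrics -> learned
# composite anomaly score. Random arrays stand in for PCBA images.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
normal = rng.random((40, 64, 64))
abnormal = normal + 0.5 * (rng.random((40, 64, 64)) > 0.995)  # sparse defects

normal_prior = normal.mean(axis=0)

def reconstruct(img):
    # Stub for the VQGAN trained on normal products: it can only produce
    # "normal-looking" output, so defects survive in the residual.
    return normal_prior

def metrics(img):
    diff = np.abs(img - reconstruct(img))
    return [diff.mean(), diff.max(), np.percentile(diff, 99.9)]  # multi-level stats

X = np.array([metrics(x) for x in np.concatenate([normal, abnormal])])
y = np.array([0] * 40 + [1] * 40)

scorer = LogisticRegression().fit(X, y)
composite = scorer.predict_proba(X)[:, 1]   # composite anomaly score in [0, 1]
print(composite[:3], composite[-3:])
```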
TITLE: Synthetic Dataset Generation for Privacy-Preserving Machine Learning
ABSTRACT: Machine Learning (ML) has achieved enormous success in solving a variety of
problems in computer vision, speech recognition, object detection, to name a
few. The principal reason for this success is the availability of huge datasets
for training deep neural networks (DNNs). However, datasets cannot be publicly
released if they contain sensitive information such as medical or financial
records. In such cases, data privacy becomes a major concern. Encryption
methods offer a possible solution to this issue; however, their deployment on ML
applications is non-trivial, as they seriously impact the classification
accuracy and result in substantial computational overhead. Alternatively,
obfuscation techniques can be used, but maintaining a good balance between
visual privacy and accuracy is challenging. In this work, we propose a method
to generate secure synthetic datasets from the original private datasets. In
our method, given a network with Batch Normalization (BN) layers pre-trained on
the original dataset, we first record the layer-wise BN statistics. Next, using
the BN statistics and the pre-trained model, we generate the synthetic dataset
by optimizing random noises such that the synthetic data match the layer-wise
statistical distribution of the original model. We evaluate our method on an image
classification dataset (CIFAR10) and show that our synthetic data can be used
for training networks from scratch, producing reasonable classification
performance. | {
"abstract": "Machine Learning (ML) has achieved enormous success in solving a variety of\nproblems in computer vision, speech recognition, object detection, to name a\nfew. The principal reason for this success is the availability of huge datasets\nfor training deep neural networks (DNNs). However, datasets can not be publicly\nreleased if they contain sensitive information such as medical or financial\nrecords. In such cases, data privacy becomes a major concern. Encryption\nmethods offer a possible solution to this issue, however their deployment on ML\napplications is non-trivial, as they seriously impact the classification\naccuracy and result in substantial computational overhead.Alternatively,\nobfuscation techniques can be used, but maintaining a good balance between\nvisual privacy and accuracy is challenging. In this work, we propose a method\nto generate secure synthetic datasets from the original private datasets. In\nour method, given a network with Batch Normalization (BN) layers pre-trained on\nthe original dataset, we first record the layer-wise BN statistics. Next, using\nthe BN statistics and the pre-trained model, we generate the synthetic dataset\nby optimizing random noises such that the synthetic data match the layer-wise\nstatistical distribution of the original model. We evaluate our method on image\nclassification dataset (CIFAR10) and show that our synthetic data can be used\nfor training networks from scratch, producing reasonable classification\nperformance.",
"title": "Synthetic Dataset Generation for Privacy-Preserving Machine Learning",
"url": "http://arxiv.org/abs/2210.03205v5"
} | null | null | no_new_dataset | admin | null | false | null | 7aea2fea-96db-4732-8496-7f4da428882c | null | Validated | 2023-10-04 15:19:51.883665 | {
"text_length": 1580
} | 1no_new_dataset
|
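The BN-statistics matching in the record above can be sketched with PyTorch forward hooks: compare the batch statistics of the synthetic inputs at each BatchNorm layer with the recorded running statistics, and optimize the noise to close the gap. The tiny untrained network and plain squared-error gap are assumptions; the paper starts from a model pre-trained on the private data.
```python
# Hedged sketch of synthesizing data by matching BatchNorm statistics.
import torch
import torch.nn as nn

net = nn.Sequential(                      # stand-in for a pretrained model
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
)
net.eval()                                # freeze running statistics

bn_losses = []
def bn_hook(module, inputs, output):
    x = inputs[0]
    mean = x.mean(dim=(0, 2, 3))
    var = x.var(dim=(0, 2, 3), unbiased=False)
    bn_losses.append(((mean - module.running_mean) ** 2).sum()
                     + ((var - module.running_var) ** 2).sum())

for m in net.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.register_forward_hook(bn_hook)

synthetic = torch.randn(16, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([synthetic], lr=0.05)

for step in range(200):
    bn_losses.clear()
    net(synthetic)                        # hooks collect per-layer stat gaps
    loss = torch.stack(bn_losses).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```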
TITLE: A Large-Scale Annotated Multivariate Time Series Aviation Maintenance Dataset from the NGAFID
ABSTRACT: This paper presents the largest publicly available, non-simulated, fleet-wide
aircraft flight recording and maintenance log data for use in predicting part
failure and maintenance need. We present 31,177 hours of flight data across
28,935 flights, which occur relative to 2,111 unplanned maintenance events
clustered into 36 types of maintenance issues. Flights are annotated as before
or after maintenance, with some flights occurring on the day of maintenance.
Collecting data to evaluate predictive maintenance systems is challenging
because it is difficult, dangerous, and unethical to generate data from
compromised aircraft. To overcome this, we use the National General Aviation
Flight Information Database (NGAFID), which contains flights recorded during
regular operation of aircraft, and maintenance logs to construct a part failure
dataset. We use a novel framing of Remaining Useful Life (RUL) prediction and
consider the probability that the RUL of a part is greater than 2 days. Unlike
previous datasets generated with simulations or in laboratory settings, the
NGAFID Aviation Maintenance Dataset contains real flight records and
maintenance logs from different seasons, weather conditions, pilots, and flight
patterns. Additionally, we provide Python code to easily download the dataset
and a Colab environment to reproduce our benchmarks on three different models.
Our dataset presents a difficult challenge for machine learning researchers and
a valuable opportunity to test and develop prognostic health management methods. | {
"abstract": "This paper presents the largest publicly available, non-simulated, fleet-wide\naircraft flight recording and maintenance log data for use in predicting part\nfailure and maintenance need. We present 31,177 hours of flight data across\n28,935 flights, which occur relative to 2,111 unplanned maintenance events\nclustered into 36 types of maintenance issues. Flights are annotated as before\nor after maintenance, with some flights occurring on the day of maintenance.\nCollecting data to evaluate predictive maintenance systems is challenging\nbecause it is difficult, dangerous, and unethical to generate data from\ncompromised aircraft. To overcome this, we use the National General Aviation\nFlight Information Database (NGAFID), which contains flights recorded during\nregular operation of aircraft, and maintenance logs to construct a part failure\ndataset. We use a novel framing of Remaining Useful Life (RUL) prediction and\nconsider the probability that the RUL of a part is greater than 2 days. Unlike\nprevious datasets generated with simulations or in laboratory settings, the\nNGAFID Aviation Maintenance Dataset contains real flight records and\nmaintenance logs from different seasons, weather conditions, pilots, and flight\npatterns. Additionally, we provide Python code to easily download the dataset\nand a Colab environment to reproduce our benchmarks on three different models.\nOur dataset presents a difficult challenge for machine learning researchers and\na valuable opportunity to test and develop prognostic health management methods",
"title": "A Large-Scale Annotated Multivariate Time Series Aviation Maintenance Dataset from the NGAFID",
"url": "http://arxiv.org/abs/2210.07317v1"
} | null | null | new_dataset | admin | null | false | null | 30c6cddb-32c6-4d1c-b1a7-c489d11229df | null | Validated | 2023-10-04 15:19:51.883498 | {
"text_length": 1669
} | 0new_dataset
|
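The RUL framing in the NGAFID record above reduces to a simple labeling rule: compute each flight's days until the next unplanned maintenance event, then binarize at 2 days. The column names below are illustrative, not the NGAFID schema.
```python
# Sketch of the "RUL > 2 days" target construction on toy flight dates.
import pandas as pd

flights = pd.DataFrame({
    "flight_date": pd.to_datetime(["2020-01-01", "2020-01-03", "2020-01-04"]),
})
next_maintenance = pd.Timestamp("2020-01-05")  # next unplanned event (assumed)

flights["rul_days"] = (next_maintenance - flights["flight_date"]).dt.days
flights["rul_gt_2_days"] = flights["rul_days"] > 2   # classification target
print(flights)
```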
TITLE: MSCTD: A Multimodal Sentiment Chat Translation Dataset
ABSTRACT: Multimodal machine translation and textual chat translation have received
considerable attention in recent years. Although the conversation in its
natural form is usually multimodal, there is still a lack of work on multimodal
machine translation in conversations. In this work, we introduce a new task
named Multimodal Chat Translation (MCT), aiming to generate more accurate
translations with the help of the associated dialogue history and visual
context. To this end, we firstly construct a Multimodal Sentiment Chat
Translation Dataset (MSCTD) containing 142,871 English-Chinese utterance pairs
in 14,762 bilingual dialogues and 30,370 English-German utterance pairs in
3,079 bilingual dialogues. Each utterance pair, corresponding to the visual
context that reflects the current conversational scene, is annotated with a
sentiment label. Then, we benchmark the task by establishing multiple baseline
systems that incorporate multimodal and sentiment features for MCT. Preliminary
experiments on four language directions (English-Chinese and English-German)
verify the potential of contextual and multimodal information fusion and the
positive impact of sentiment on the MCT task. Additionally, as a by-product of
the MSCTD, it also provides two new benchmarks on multimodal dialogue sentiment
analysis. Our work can facilitate research on both multimodal chat translation
and multimodal dialogue sentiment analysis. | {
"abstract": "Multimodal machine translation and textual chat translation have received\nconsiderable attention in recent years. Although the conversation in its\nnatural form is usually multimodal, there still lacks work on multimodal\nmachine translation in conversations. In this work, we introduce a new task\nnamed Multimodal Chat Translation (MCT), aiming to generate more accurate\ntranslations with the help of the associated dialogue history and visual\ncontext. To this end, we firstly construct a Multimodal Sentiment Chat\nTranslation Dataset (MSCTD) containing 142,871 English-Chinese utterance pairs\nin 14,762 bilingual dialogues and 30,370 English-German utterance pairs in\n3,079 bilingual dialogues. Each utterance pair, corresponding to the visual\ncontext that reflects the current conversational scene, is annotated with a\nsentiment label. Then, we benchmark the task by establishing multiple baseline\nsystems that incorporate multimodal and sentiment features for MCT. Preliminary\nexperiments on four language directions (English-Chinese and English-German)\nverify the potential of contextual and multimodal information fusion and the\npositive impact of sentiment on the MCT task. Additionally, as a by-product of\nthe MSCTD, it also provides two new benchmarks on multimodal dialogue sentiment\nanalysis. Our work can facilitate research on both multimodal chat translation\nand multimodal dialogue sentiment analysis.",
"title": "MSCTD: A Multimodal Sentiment Chat Translation Dataset",
"url": "http://arxiv.org/abs/2202.13645v1"
} | null | null | new_dataset | admin | null | false | null | 3645a88e-a862-4859-9224-ee3afba8a77e | null | Validated | 2023-10-04 15:19:51.887997 | {
"text_length": 1503
} | 0new_dataset
|
TITLE: A Framework for Deprecating Datasets: Standardizing Documentation, Identification, and Communication
ABSTRACT: Datasets are central to training machine learning (ML) models. The ML
community has recently made significant improvements to data stewardship and
documentation practices across the model development life cycle. However, the
act of deprecating, or deleting, datasets has been largely overlooked, and
there are currently no standardized approaches for structuring this stage of
the dataset life cycle. In this paper, we study the practice of dataset
deprecation in ML, identify several cases of datasets that continued to
circulate despite having been deprecated, and describe the different technical,
legal, ethical, and organizational issues raised by such continuations. We then
propose a Dataset Deprecation Framework that includes considerations of risk,
mitigation of impact, appeal mechanisms, timeline, post-deprecation protocols,
and publication checks that can be adapted and implemented by the ML community.
Finally, we propose creating a centralized, sustainable repository system for
archiving datasets, tracking dataset modifications or deprecations, and
facilitating practices of care and stewardship that can be integrated into
research and publication processes. | {
"abstract": "Datasets are central to training machine learning (ML) models. The ML\ncommunity has recently made significant improvements to data stewardship and\ndocumentation practices across the model development life cycle. However, the\nact of deprecating, or deleting, datasets has been largely overlooked, and\nthere are currently no standardized approaches for structuring this stage of\nthe dataset life cycle. In this paper, we study the practice of dataset\ndeprecation in ML, identify several cases of datasets that continued to\ncirculate despite having been deprecated, and describe the different technical,\nlegal, ethical, and organizational issues raised by such continuations. We then\npropose a Dataset Deprecation Framework that includes considerations of risk,\nmitigation of impact, appeal mechanisms, timeline, post-deprecation protocols,\nand publication checks that can be adapted and implemented by the ML community.\nFinally, we propose creating a centralized, sustainable repository system for\narchiving datasets, tracking dataset modifications or deprecations, and\nfacilitating practices of care and stewardship that can be integrated into\nresearch and publication processes.",
"title": "A Framework for Deprecating Datasets: Standardizing Documentation, Identification, and Communication",
"url": "http://arxiv.org/abs/2111.04424v2"
} | null | null | no_new_dataset | admin | null | false | null | 488a7ea5-1bd8-441c-947b-e4bb82531a6d | null | Validated | 2023-10-04 15:19:51.890320 | {
"text_length": 1313
} | 1no_new_dataset
|
TITLE: Analyzing Dataset Annotation Quality Management in the Wild
ABSTRACT: Data quality is crucial for training accurate, unbiased, and trustworthy
machine learning models and their correct evaluation. Recent works, however,
have shown that even popular datasets used to train and evaluate
state-of-the-art models contain a non-negligible amount of erroneous
annotations, bias or annotation artifacts. There exist best practices and
guidelines regarding annotation projects. But to the best of our knowledge, no
large-scale analysis has been performed as of yet on how quality management is
actually conducted when creating natural language datasets and whether these
recommendations are followed. Therefore, we first survey and summarize
recommended quality management practices for dataset creation as described in
the literature and provide suggestions on how to apply them. Then, we compile a
corpus of 591 scientific publications introducing text datasets and annotate it
for quality-related aspects, such as annotator management, agreement,
adjudication or data validation. Using these annotations, we then analyze how
quality management is conducted in practice. We find that a majority of the
annotated publications apply good or very good quality management. However, we
deem the effort of 30% of the works as only subpar. Our analysis also shows
common errors, especially with using inter-annotator agreement and computing
annotation error rates. | {
"abstract": "Data quality is crucial for training accurate, unbiased, and trustworthy\nmachine learning models and their correct evaluation. Recent works, however,\nhave shown that even popular datasets used to train and evaluate\nstate-of-the-art models contain a non-negligible amount of erroneous\nannotations, bias or annotation artifacts. There exist best practices and\nguidelines regarding annotation projects. But to the best of our knowledge, no\nlarge-scale analysis has been performed as of yet on how quality management is\nactually conducted when creating natural language datasets and whether these\nrecommendations are followed. Therefore, we first survey and summarize\nrecommended quality management practices for dataset creation as described in\nthe literature and provide suggestions on how to apply them. Then, we compile a\ncorpus of 591 scientific publications introducing text datasets and annotate it\nfor quality-related aspects, such as annotator management, agreement,\nadjudication or data validation. Using these annotations, we then analyze how\nquality management is conducted in practice. We find that a majority of the\nannotated publications apply good or very good quality management. However, we\ndeem the effort of 30% of the works as only subpar. Our analysis also shows\ncommon errors, especially with using inter-annotator agreement and computing\nannotation error rates.",
"title": "Analyzing Dataset Annotation Quality Management in the Wild",
"url": "http://arxiv.org/abs/2307.08153v2"
} | null | null | no_new_dataset | admin | null | false | null | e8ea3607-9223-4b42-8dd7-16daef1cefd2 | null | Validated | 2023-10-04 15:19:51.867080 | {
"text_length": 1475
} | 1no_new_dataset
|
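The record above flags inter-annotator agreement as a common source of errors. One frequent mistake is reporting raw percent agreement, which ignores agreement expected by chance; a chance-corrected coefficient such as Cohen's kappa is the usual remedy. A minimal sketch with illustrative labels:

```python
# Raw percent agreement vs. chance-corrected agreement (Cohen's kappa)
# for two annotators; the label sequences are illustrative.
from sklearn.metrics import cohen_kappa_score

ann_a = ["pos", "pos", "neg", "neg", "pos", "neg"]
ann_b = ["pos", "neg", "neg", "neg", "pos", "pos"]

raw = sum(a == b for a, b in zip(ann_a, ann_b)) / len(ann_a)
kappa = cohen_kappa_score(ann_a, ann_b)
print(f"raw agreement={raw:.2f}, kappa={kappa:.2f}")  # kappa < raw after chance correction
```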
TITLE: Elements of effective machine learning datasets in astronomy
ABSTRACT: In this work, we identify elements of effective machine learning datasets in
astronomy and present suggestions for their design and creation. Machine
learning has become an increasingly important tool for analyzing and
understanding the large-scale flood of data in astronomy. To take advantage of
these tools, datasets are required for training and testing. However, building
machine learning datasets for astronomy can be challenging. Astronomical data
is collected from instruments built to explore science questions in a
traditional fashion rather than to conduct machine learning. Thus, it is often
the case that raw data, or even downstream processed data is not in a form
amenable to machine learning. We explore the construction of machine learning
datasets and we ask: what elements define effective machine learning datasets?
We define effective machine learning datasets in astronomy to be formed with
well-defined data points, structure, and metadata. We discuss why these
elements are important for astronomical applications and ways to put them in
practice. We posit that these qualities not only make the data suitable for
machine learning, they also help to foster usable, reusable, and replicable
science practices. | {
"abstract": "In this work, we identify elements of effective machine learning datasets in\nastronomy and present suggestions for their design and creation. Machine\nlearning has become an increasingly important tool for analyzing and\nunderstanding the large-scale flood of data in astronomy. To take advantage of\nthese tools, datasets are required for training and testing. However, building\nmachine learning datasets for astronomy can be challenging. Astronomical data\nis collected from instruments built to explore science questions in a\ntraditional fashion rather than to conduct machine learning. Thus, it is often\nthe case that raw data, or even downstream processed data is not in a form\namenable to machine learning. We explore the construction of machine learning\ndatasets and we ask: what elements define effective machine learning datasets?\nWe define effective machine learning datasets in astronomy to be formed with\nwell-defined data points, structure, and metadata. We discuss why these\nelements are important for astronomical applications and ways to put them in\npractice. We posit that these qualities not only make the data suitable for\nmachine learning, they also help to foster usable, reusable, and replicable\nscience practices.",
"title": "Elements of effective machine learning datasets in astronomy",
"url": "http://arxiv.org/abs/2211.14401v2"
} | null | null | no_new_dataset | admin | null | false | null | 10f51720-501d-4239-a327-860125199b8a | null | Validated | 2023-10-04 15:19:51.882629 | {
"text_length": 1327
} | 1no_new_dataset
|
TITLE: MyDigitalFootprint: an extensive context dataset for pervasive computing applications at the edge
ABSTRACT: The widespread diffusion of connected smart devices has contributed to the
rapid expansion and evolution of the Internet at its edge. Personal mobile
devices interact with other smart objects in their surroundings, adapting
behavior based on rapidly changing user context. The ability of mobile devices
to process this data locally is crucial for quick adaptation. This can be
achieved through a single elaboration process integrated into user applications
or a middleware platform for context processing. However, the lack of public
datasets considering user context complexity in the mobile environment hinders
research progress. We introduce MyDigitalFootprint, a large-scale dataset
comprising smartphone sensor data, physical proximity information, and Online
Social Networks interactions. This dataset supports multimodal context
recognition and social relationship modeling. It spans two months of
measurements from 31 volunteer users in their natural environment, allowing for
unrestricted behavior. Existing public datasets focus on limited context data
for specific applications, while ours offers comprehensive information on the
user context in the mobile environment. To demonstrate the dataset's
effectiveness, we present three context-aware applications utilizing various
machine learning tasks: (i) a social link prediction algorithm based on
physical proximity data, (ii) daily-life activity recognition using
smartphone-embedded sensors data, and (iii) a pervasive context-aware
recommender system. Our dataset, with its heterogeneity of information, serves
as a valuable resource to validate new research in mobile and edge computing. | {
"abstract": "The widespread diffusion of connected smart devices has contributed to the\nrapid expansion and evolution of the Internet at its edge. Personal mobile\ndevices interact with other smart objects in their surroundings, adapting\nbehavior based on rapidly changing user context. The ability of mobile devices\nto process this data locally is crucial for quick adaptation. This can be\nachieved through a single elaboration process integrated into user applications\nor a middleware platform for context processing. However, the lack of public\ndatasets considering user context complexity in the mobile environment hinders\nresearch progress. We introduce MyDigitalFootprint, a large-scale dataset\ncomprising smartphone sensor data, physical proximity information, and Online\nSocial Networks interactions. This dataset supports multimodal context\nrecognition and social relationship modeling. It spans two months of\nmeasurements from 31 volunteer users in their natural environment, allowing for\nunrestricted behavior. Existing public datasets focus on limited context data\nfor specific applications, while ours offers comprehensive information on the\nuser context in the mobile environment. To demonstrate the dataset's\neffectiveness, we present three context-aware applications utilizing various\nmachine learning tasks: (i) a social link prediction algorithm based on\nphysical proximity data, (ii) daily-life activity recognition using\nsmartphone-embedded sensors data, and (iii) a pervasive context-aware\nrecommender system. Our dataset, with its heterogeneity of information, serves\nas a valuable resource to validate new research in mobile and edge computing.",
"title": "MyDigitalFootprint: an extensive context dataset for pervasive computing applications at the edge",
"url": "http://arxiv.org/abs/2306.15990v1"
} | null | null | new_dataset | admin | null | false | null | 838c9801-0d70-4eb5-a7ca-3dd90d2bd5d3 | null | Validated | 2023-10-04 15:19:51.869311 | {
"text_length": 1785
} | 0new_dataset
|
TITLE: Grain and Grain Boundary Segmentation using Machine Learning with Real and Generated Datasets
ABSTRACT: We report significantly improved accuracy of grain boundary segmentation
using Convolutional Neural Networks (CNN) trained on a combination of real and
generated data. Manual segmentation is accurate but time-consuming, and
existing computational methods are faster but often inaccurate. To combat this
dilemma, machine learning models can be used to achieve the accuracy of manual
segmentation and have the efficiency of a computational method. An extensive
dataset was built from 316L stainless steel samples that were additively
manufactured, prepared, polished, and etched; microstructure grain images were
then systematically collected. Grain segmentation via existing computational
methods and manual (by-hand) annotation was conducted to create "real" training
data. A Voronoi tessellation pattern combined with random synthetic noise and
simulated defects is developed to create a novel artificial grain image
fabrication method. This provided training data supplementation for
data-intensive machine learning methods. The accuracy of the grain measurements
from microstructure images segmented via the computational methods and machine
learning methods proposed in this work is calculated and compared to provide
benchmarks in grain segmentation. Over 400 images of the microstructure of
stainless steel samples were manually segmented for machine learning training
applications. This data and the artificial data are available on Kaggle. | {
"abstract": "We report significantly improved accuracy of grain boundary segmentation\nusing Convolutional Neural Networks (CNN) trained on a combination of real and\ngenerated data. Manual segmentation is accurate but time-consuming, and\nexisting computational methods are faster but often inaccurate. To combat this\ndilemma, machine learning models can be used to achieve the accuracy of manual\nsegmentation and have the efficiency of a computational method. An extensive\ndataset of from 316L stainless steel samples is additively manufactured,\nprepared, polished, etched, and then microstructure grain images were\nsystematically collected. Grain segmentation via existing computational methods\nand manual (by-hand) were conducted, to create \"real\" training data. A Voronoi\ntessellation pattern combined with random synthetic noise and simulated\ndefects, is developed to create a novel artificial grain image fabrication\nmethod. This provided training data supplementation for data-intensive machine\nlearning methods. The accuracy of the grain measurements from microstructure\nimages segmented via computational methods and machine learning methods\nproposed in this work are calculated and compared to provide much benchmarks in\ngrain segmentation. Over 400 images of the microstructure of stainless steel\nsamples were manually segmented for machine learning training applications.\nThis data and the artificial data is available on Kaggle.",
"title": "Grain and Grain Boundary Segmentation using Machine Learning with Real and Generated Datasets",
"url": "http://arxiv.org/abs/2307.05911v1"
} | null | null | new_dataset | admin | null | false | null | 4c7dc54c-eb35-4d8d-92ff-a20b7733d2c9 | null | Validated | 2023-10-04 15:19:51.867523 | {
"text_length": 1554
} | 0new_dataset
|
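The record above fabricates artificial grain images from a Voronoi tessellation plus synthetic noise. A minimal sketch of that idea, using nearest-seed assignment to realize the tessellation; the image size, grain count, and noise level are assumptions, not the paper's settings:

```python
# Synthetic "grain" label image: each pixel takes the id of the nearest of
# n_grains seeds (a Voronoi tessellation); Gaussian noise roughens the render.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
H = W = 256
n_grains = 40  # illustrative grain count

seeds = rng.uniform(0, [H, W], size=(n_grains, 2))
yy, xx = np.mgrid[0:H, 0:W]
pix = np.column_stack([yy.ravel(), xx.ravel()])

_, grain_id = cKDTree(seeds).query(pix)   # nearest seed = Voronoi cell
labels = grain_id.reshape(H, W)

# Boundaries are where the label changes between 4-neighbours; add noise.
edges = (np.diff(labels, axis=0, prepend=labels[:1]) != 0) | \
        (np.diff(labels, axis=1, prepend=labels[:, :1]) != 0)
image = 0.8 * (~edges) + rng.normal(0, 0.05, (H, W))  # noisy grain micrograph
```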
TITLE: EPIE Dataset: A Corpus For Possible Idiomatic Expressions
ABSTRACT: Idiomatic expressions have always been a bottleneck for language
comprehension and natural language understanding, specifically for tasks like
Machine Translation(MT). MT systems predominantly produce literal translations
of idiomatic expressions as they do not exhibit generic and linguistically
deterministic patterns which can be exploited for comprehension of the
non-compositional meaning of the expressions. These expressions occur in
parallel corpora used for training, but due to the comparatively high
occurrences of the constituent words of idiomatic expressions in literal
context, the idiomatic meaning gets overpowered by the compositional meaning of
the expression. State of the art Metaphor Detection Systems are able to detect
non-compositional usage at word level but miss out on idiosyncratic phrasal
idiomatic expressions. This creates a dire need for a dataset with a wider
coverage and higher occurrence of commonly occurring idiomatic expressions, the
spans of which can be used for Metaphor Detection. With this in mind, we
present our English Possible Idiomatic Expressions(EPIE) corpus containing
25206 sentences labelled with lexical instances of 717 idiomatic expressions.
These spans also cover literal usages for the given set of idiomatic
expressions. We also present the utility of our dataset by using it to train a
sequence labelling module and testing on three independent datasets with high
accuracy, precision and recall scores. | {
"abstract": "Idiomatic expressions have always been a bottleneck for language\ncomprehension and natural language understanding, specifically for tasks like\nMachine Translation(MT). MT systems predominantly produce literal translations\nof idiomatic expressions as they do not exhibit generic and linguistically\ndeterministic patterns which can be exploited for comprehension of the\nnon-compositional meaning of the expressions. These expressions occur in\nparallel corpora used for training, but due to the comparatively high\noccurrences of the constituent words of idiomatic expressions in literal\ncontext, the idiomatic meaning gets overpowered by the compositional meaning of\nthe expression. State of the art Metaphor Detection Systems are able to detect\nnon-compositional usage at word level but miss out on idiosyncratic phrasal\nidiomatic expressions. This creates a dire need for a dataset with a wider\ncoverage and higher occurrence of commonly occurring idiomatic expressions, the\nspans of which can be used for Metaphor Detection. With this in mind, we\npresent our English Possible Idiomatic Expressions(EPIE) corpus containing\n25206 sentences labelled with lexical instances of 717 idiomatic expressions.\nThese spans also cover literal usages for the given set of idiomatic\nexpressions. We also present the utility of our dataset by using it to train a\nsequence labelling module and testing on three independent datasets with high\naccuracy, precision and recall scores.",
"title": "EPIE Dataset: A Corpus For Possible Idiomatic Expressions",
"url": "http://arxiv.org/abs/2006.09479v1"
} | null | null | new_dataset | admin | null | false | null | b472efe6-92a7-4ee1-a3dc-fe47bada5979 | null | Validated | 2023-10-04 15:19:51.899564 | {
"text_length": 1556
} | 0new_dataset
|
TITLE: A Comprehensive Review of Sign Language Recognition: Different Types, Modalities, and Datasets
ABSTRACT: A machine can understand human activities, and the meaning of signs can help
overcome the communication barriers between hearing-impaired and hearing people.
Sign Language Recognition (SLR) is a fascinating research area and a crucial
task concerning computer vision and pattern recognition. Recently, SLR usage
has increased in many applications, but the environment, background image
resolution, modalities, and datasets strongly affect performance. Many
researchers have been striving to carry out generic real-time SLR models. This
review paper facilitates a comprehensive overview of SLR and discusses the
needs, challenges, and problems associated with SLR. We study related works
about manual and non-manual, various modalities, and datasets. Research
progress and existing state-of-the-art SLR models over the past decade have
been reviewed. Finally, we find the research gap and limitations in this domain
and suggest future directions. This review paper will be helpful for readers
and researchers to get complete guidance about SLR and the progressive design
of the state-of-the-art SLR model. | {
"abstract": "A machine can understand human activities, and the meaning of signs can help\novercome the communication barriers between the inaudible and ordinary people.\nSign Language Recognition (SLR) is a fascinating research area and a crucial\ntask concerning computer vision and pattern recognition. Recently, SLR usage\nhas increased in many applications, but the environment, background image\nresolution, modalities, and datasets affect the performance a lot. Many\nresearchers have been striving to carry out generic real-time SLR models. This\nreview paper facilitates a comprehensive overview of SLR and discusses the\nneeds, challenges, and problems associated with SLR. We study related works\nabout manual and non-manual, various modalities, and datasets. Research\nprogress and existing state-of-the-art SLR models over the past decade have\nbeen reviewed. Finally, we find the research gap and limitations in this domain\nand suggest future directions. This review paper will be helpful for readers\nand researchers to get complete guidance about SLR and the progressive design\nof the state-of-the-art SLR model",
"title": "A Comprehensive Review of Sign Language Recognition: Different Types, Modalities, and Datasets",
"url": "http://arxiv.org/abs/2204.03328v1"
} | null | null | no_new_dataset | admin | null | false | null | be84e115-3d65-404e-b8a7-ac6c5e79679c | null | Validated | 2023-10-04 15:19:51.887188 | {
"text_length": 1231
} | 1no_new_dataset
|
TITLE: A universal synthetic dataset for machine learning on spectroscopic data
ABSTRACT: To assist in the development of machine learning methods for automated
classification of spectroscopic data, we have generated a universal synthetic
dataset that can be used for model validation. This dataset contains artificial
spectra designed to represent experimental measurements from techniques
including X-ray diffraction, nuclear magnetic resonance, and Raman
spectroscopy. The dataset generation process features customizable parameters,
such as scan length and peak count, which can be adjusted to fit the problem at
hand. As an initial benchmark, we simulated a dataset containing 35,000 spectra
based on 500 unique classes. To automate the classification of this data, eight
different machine learning architectures were evaluated. From the results, we
shed light on which factors are most critical to achieve optimal performance
for the classification task. The scripts used to generate synthetic spectra, as
well as our benchmark dataset and evaluation routines, are made publicly
available to aid in the development of improved machine learning models for
spectroscopic analysis. | {
"abstract": "To assist in the development of machine learning methods for automated\nclassification of spectroscopic data, we have generated a universal synthetic\ndataset that can be used for model validation. This dataset contains artificial\nspectra designed to represent experimental measurements from techniques\nincluding X-ray diffraction, nuclear magnetic resonance, and Raman\nspectroscopy. The dataset generation process features customizable parameters,\nsuch as scan length and peak count, which can be adjusted to fit the problem at\nhand. As an initial benchmark, we simulated a dataset containing 35,000 spectra\nbased on 500 unique classes. To automate the classification of this data, eight\ndifferent machine learning architectures were evaluated. From the results, we\nshed light on which factors are most critical to achieve optimal performance\nfor the classification task. The scripts used to generate synthetic spectra, as\nwell as our benchmark dataset and evaluation routines, are made publicly\navailable to aid in the development of improved machine learning models for\nspectroscopic analysis.",
"title": "A universal synthetic dataset for machine learning on spectroscopic data",
"url": "http://arxiv.org/abs/2206.06031v2"
} | null | null | new_dataset | admin | null | false | null | 3b5aa46c-1600-414e-96b9-4f0a87a2e7ba | null | Validated | 2023-10-04 15:19:51.885819 | {
"text_length": 1201
} | 0new_dataset
|
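The record above generates spectra with customizable scan length and peak count. A minimal generator in that spirit, built from Gaussian peaks plus noise; the functional form and parameter ranges are assumptions, since the paper's exact generator is not reproduced here:

```python
# Synthetic 1-D "spectrum": a configurable number of Gaussian peaks plus noise.
# scan_length and n_peaks mirror the adjustable parameters described above.
import numpy as np

def make_spectrum(scan_length=2000, n_peaks=8, noise=0.02, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, scan_length)
    y = np.zeros_like(x)
    for _ in range(n_peaks):
        center = rng.uniform(0.05, 0.95)   # peak position
        width = rng.uniform(0.002, 0.02)   # peak width
        height = rng.uniform(0.2, 1.0)     # peak intensity
        y += height * np.exp(-0.5 * ((x - center) / width) ** 2)
    return x, y + rng.normal(0.0, noise, scan_length)

x, y = make_spectrum(n_peaks=5)  # e.g. one class with five characteristic peaks
```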
TITLE: t-METASET: Tailoring Property Bias of Large-Scale Metamaterial Datasets through Active Learning
ABSTRACT: Inspired by the recent achievements of machine learning in diverse domains,
data-driven metamaterials design has emerged as a compelling paradigm that can
unlock the potential of multiscale architectures. The model-centric research
trend, however, lacks principled frameworks dedicated to data acquisition,
whose quality propagates into the downstream tasks. Often built by naive
space-filling design in shape descriptor space, metamaterial datasets suffer
from property distributions that are either highly imbalanced or at odds with
design tasks of interest. To this end, we present t-METASET: an
active-learning-based data acquisition framework aiming to guide both diverse
and task-aware data generation. Distinctly, we seek a solution to a commonplace
yet frequently overlooked scenario at early stages of data-driven design of
metamaterials: when a massive (~O(10^4 )) shape-only library has been prepared
with no properties evaluated. The key idea is to harness a data-driven shape
descriptor learned from generative models, fit a sparse regressor as a start-up
agent, and leverage metrics related to diversity to drive data acquisition to
areas that help designers fulfill design goals. We validate the proposed
framework in three deployment cases, which encompass general use, task-specific
use, and tailorable use. Two large-scale mechanical metamaterial datasets are
used to demonstrate the efficacy. Applicable to general image-based design
representations, t-METASET could boost future advancements in data-driven
design. | {
"abstract": "Inspired by the recent achievements of machine learning in diverse domains,\ndata-driven metamaterials design has emerged as a compelling paradigm that can\nunlock the potential of multiscale architectures. The model-centric research\ntrend, however, lacks principled frameworks dedicated to data acquisition,\nwhose quality propagates into the downstream tasks. Often built by naive\nspace-filling design in shape descriptor space, metamaterial datasets suffer\nfrom property distributions that are either highly imbalanced or at odds with\ndesign tasks of interest. To this end, we present t-METASET: an\nactive-learning-based data acquisition framework aiming to guide both diverse\nand task-aware data generation. Distinctly, we seek a solution to a commonplace\nyet frequently overlooked scenario at early stages of data-driven design of\nmetamaterials: when a massive (~O(10^4 )) shape-only library has been prepared\nwith no properties evaluated. The key idea is to harness a data-driven shape\ndescriptor learned from generative models, fit a sparse regressor as a start-up\nagent, and leverage metrics related to diversity to drive data acquisition to\nareas that help designers fulfill design goals. We validate the proposed\nframework in three deployment cases, which encompass general use, task-specific\nuse, and tailorable use. Two large-scale mechanical metamaterial datasets are\nused to demonstrate the efficacy. Applicable to general image-based design\nrepresentations, t-METASET could boost future advancements in data-driven\ndesign.",
"title": "t-METASET: Tailoring Property Bias of Large-Scale Metamaterial Datasets through Active Learning",
"url": "http://arxiv.org/abs/2202.10565v2"
} | null | null | no_new_dataset | admin | null | false | null | fc0df2fa-78eb-41e5-8564-42530e2b3306 | null | Validated | 2023-10-04 15:19:51.888171 | {
"text_length": 1664
} | 1no_new_dataset
|
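The record above steers data acquisition toward diverse regions of a learned shape-descriptor space. Greedy farthest-point sampling is one simple way to realize such a diversity criterion; the sketch below is a stand-in for illustration, not the paper's t-METASET algorithm, and `descriptors` is assumed to come from a pretrained generative model:

```python
# Greedy farthest-point selection in descriptor space: repeatedly pick the
# shape farthest from everything selected so far (a simple diversity driver).
import numpy as np

def diverse_subset(descriptors, k, seed=0):
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(descriptors)))]        # random start
    d_min = np.linalg.norm(descriptors - descriptors[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d_min))                        # farthest from chosen set
        chosen.append(nxt)
        d_new = np.linalg.norm(descriptors - descriptors[nxt], axis=1)
        d_min = np.minimum(d_min, d_new)                   # update nearest-chosen distance
    return chosen

picks = diverse_subset(np.random.default_rng(1).normal(size=(10_000, 16)), k=100)
```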
TITLE: Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets
ABSTRACT: Data is the engine of modern computer vision, which necessitates collecting
large-scale datasets. This is expensive, and guaranteeing the quality of the
labels is a major challenge. In this paper, we investigate efficient annotation
strategies for collecting multi-class classification labels for a large
collection of images. While methods that exploit learnt models for labeling
exist, a surprisingly prevalent approach is to query humans for a fixed number
of labels per datum and aggregate them, which is expensive. Building on prior
work on online joint probabilistic modeling of human annotations and
machine-generated beliefs, we propose modifications and best practices aimed at
minimizing human labeling effort. Specifically, we make use of advances in
self-supervised learning, view annotation as a semi-supervised learning
problem, identify and mitigate pitfalls and ablate several key design choices
to propose effective guidelines for labeling. Our analysis is done in a more
realistic simulation that involves querying human labelers, which uncovers
issues with evaluation using existing worker simulation methods. Simulated
experiments on a 125k image subset of the ImageNet100 show that it can be
annotated to 80% top-1 accuracy with 0.35 annotations per image on average, a
2.7x and 6.7x improvement over prior work and manual annotation, respectively.
Project page: https://fidler-lab.github.io/efficient-annotation-cookbook | {
"abstract": "Data is the engine of modern computer vision, which necessitates collecting\nlarge-scale datasets. This is expensive, and guaranteeing the quality of the\nlabels is a major challenge. In this paper, we investigate efficient annotation\nstrategies for collecting multi-class classification labels for a large\ncollection of images. While methods that exploit learnt models for labeling\nexist, a surprisingly prevalent approach is to query humans for a fixed number\nof labels per datum and aggregate them, which is expensive. Building on prior\nwork on online joint probabilistic modeling of human annotations and\nmachine-generated beliefs, we propose modifications and best practices aimed at\nminimizing human labeling effort. Specifically, we make use of advances in\nself-supervised learning, view annotation as a semi-supervised learning\nproblem, identify and mitigate pitfalls and ablate several key design choices\nto propose effective guidelines for labeling. Our analysis is done in a more\nrealistic simulation that involves querying human labelers, which uncovers\nissues with evaluation using existing worker simulation methods. Simulated\nexperiments on a 125k image subset of the ImageNet100 show that it can be\nannotated to 80% top-1 accuracy with 0.35 annotations per image on average, a\n2.7x and 6.7x improvement over prior work and manual annotation, respectively.\nProject page: https://fidler-lab.github.io/efficient-annotation-cookbook",
"title": "Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets",
"url": "http://arxiv.org/abs/2104.12690v1"
} | null | null | no_new_dataset | admin | null | false | null | 4893023b-7647-41ee-94e6-fe21bae5167c | null | Validated | 2023-10-04 15:19:51.894837 | {
"text_length": 1568
} | 1no_new_dataset
|
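The record above collects labels adaptively instead of querying a fixed number per image. A toy version of that stopping rule, using a Beta posterior over a binary label; the 80% worker accuracy and the confidence threshold are illustrative, and the paper's joint probabilistic model of workers and machine beliefs is far richer:

```python
# Adaptive label collection sketch: request one simulated worker label at a
# time and stop once a Beta posterior on the majority label is confident.
import numpy as np
from scipy.stats import beta

def annotate(true_label, max_labels=5, conf=0.95, seed=0):
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(max_labels):
        noisy = true_label if rng.random() < 0.8 else 1 - true_label  # 80%-accurate worker
        votes.append(noisy)
        pos = sum(votes)
        # P(label proportion > 0.5) under a Beta(1+pos, 1+neg) posterior
        p = 1 - beta.cdf(0.5, 1 + pos, 1 + len(votes) - pos)
        if max(p, 1 - p) >= conf:
            break
    return int(sum(votes) > len(votes) / 2), len(votes)

label, n_queries = annotate(true_label=1)  # fewer queries on easy items
```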
TITLE: MultiWOZ-DF -- A Dataflow implementation of the MultiWOZ dataset
ABSTRACT: Semantic Machines (SM) have introduced the use of the dataflow (DF) paradigm
to dialogue modelling, using computational graphs to hierarchically represent
user requests, data, and the dialogue history [Semantic Machines et al. 2020].
Although the main focus of that paper was the SMCalFlow dataset (to date, the
only dataset with "native" DF annotations), they also reported some results of
an experiment using a transformed version of the commonly used MultiWOZ dataset
[Budzianowski et al. 2018] into a DF format. In this paper, we expand the
experiments using DF for the MultiWOZ dataset, exploring some additional
experimental set-ups. The code and instructions to reproduce the experiments
reported here have been released. The contributions of this paper are: 1.) A DF
implementation capable of executing MultiWOZ dialogues; 2.) Several versions of
conversion of MultiWOZ into a DF format are presented; 3.) Experimental results
on state match and translation accuracy. | {
"abstract": "Semantic Machines (SM) have introduced the use of the dataflow (DF) paradigm\nto dialogue modelling, using computational graphs to hierarchically represent\nuser requests, data, and the dialogue history [Semantic Machines et al. 2020].\nAlthough the main focus of that paper was the SMCalFlow dataset (to date, the\nonly dataset with \"native\" DF annotations), they also reported some results of\nan experiment using a transformed version of the commonly used MultiWOZ dataset\n[Budzianowski et al. 2018] into a DF format. In this paper, we expand the\nexperiments using DF for the MultiWOZ dataset, exploring some additional\nexperimental set-ups. The code and instructions to reproduce the experiments\nreported here have been released. The contributions of this paper are: 1.) A DF\nimplementation capable of executing MultiWOZ dialogues; 2.) Several versions of\nconversion of MultiWOZ into a DF format are presented; 3.) Experimental results\non state match and translation accuracy.",
"title": "MultiWOZ-DF -- A Dataflow implementation of the MultiWOZ dataset",
"url": "http://arxiv.org/abs/2211.02303v1"
} | null | null | no_new_dataset | admin | null | false | null | 179b9cff-02d5-4dec-9f40-b2cd7e6d34eb | null | Validated | 2023-10-04 15:19:51.883091 | {
"text_length": 1074
} | 1no_new_dataset
|
TITLE: Deep Learning Analysis of Cardiac MRI in Legacy Datasets: Multi-Ethnic Study of Atherosclerosis
ABSTRACT: The shape and motion of the heart provide essential clues to understanding
the mechanisms of cardiovascular disease. With the advent of large-scale
cardiac imaging data, statistical atlases become a powerful tool to provide
automated and precise quantification of the status of patient-specific heart
geometry with respect to reference populations. The Multi-Ethnic Study of
Atherosclerosis (MESA), begun in 2000, was the first large cohort study to
incorporate cardiovascular MRI in over 5000 participants, and there is now a
wealth of follow-up data over 20 years. Building a machine learning based
automated analysis is necessary to extract the additional imaging information
needed to expand the original manual analyses. However, machine learning
tools trained on MRI datasets with different pulse sequences fail on such
legacy datasets. Here, we describe an automated atlas construction pipeline
using deep learning methods applied to the legacy cardiac MRI data in MESA. For
detection of anatomical cardiac landmark points, a modified VGGNet
convolutional neural network architecture was used in conjunction with a
transfer learning sequence between two-chamber, four-chamber, and short-axis
MRI views. A U-Net architecture was used for detection of the endocardial and
epicardial boundaries in short axis images. Both network architectures resulted
in good segmentation and landmark detection accuracies compared with
inter-observer variations. Statistical relationships with common risk factors
were similar between atlases derived from automated vs manual annotations. The
automated atlas can be employed in future studies to examine the relationships
between cardiac morphology and future events. | {
"abstract": "The shape and motion of the heart provide essential clues to understanding\nthe mechanisms of cardiovascular disease. With the advent of large-scale\ncardiac imaging data, statistical atlases become a powerful tool to provide\nautomated and precise quantification of the status of patient-specific heart\ngeometry with respect to reference populations. The Multi-Ethnic Study of\nAtherosclerosis (MESA), begun in 2000, was the first large cohort study to\nincorporate cardiovascular MRI in over 5000 participants, and there is now a\nwealth of follow-up data over 20 years. Building a machine learning based\nautomated analysis is necessary to extract the additional imaging information\nnecessary for expanding original manual analyses. However, machine learning\ntools trained on MRI datasets with different pulse sequences fail on such\nlegacy datasets. Here, we describe an automated atlas construction pipeline\nusing deep learning methods applied to the legacy cardiac MRI data in MESA. For\ndetection of anatomical cardiac landmark points, a modified VGGNet\nconvolutional neural network architecture was used in conjunction with a\ntransfer learning sequence between two-chamber, four-chamber, and short-axis\nMRI views. A U-Net architecture was used for detection of the endocardial and\nepicardial boundaries in short axis images. Both network architectures resulted\nin good segmentation and landmark detection accuracies compared with\ninter-observer variations. Statistical relationships with common risk factors\nwere similar between atlases derived from automated vs manual annotations. The\nautomated atlas can be employed in future studies to examine the relationships\nbetween cardiac morphology and future events.",
"title": "Deep Learning Analysis of Cardiac MRI in Legacy Datasets: Multi-Ethnic Study of Atherosclerosis",
"url": "http://arxiv.org/abs/2110.15144v1"
} | null | null | no_new_dataset | admin | null | false | null | 342c5e74-710d-4ba1-856e-60281de6d495 | null | Validated | 2023-10-04 15:19:51.890198 | {
"text_length": 1840
} | 1no_new_dataset
|
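The record above regresses cardiac landmark points with a modified VGGNet. A minimal sketch of that model shape: a VGG backbone with a coordinate-regression head. The head, input size, and landmark count are illustrative, not the paper's modified architecture:

```python
# Landmark-detection sketch: VGG-style backbone regressing (x, y) for K points.
import tensorflow as tf

K = 3  # illustrative number of landmark points per view
backbone = tf.keras.applications.VGG16(
    include_top=False, weights=None,
    input_shape=(224, 224, 3),  # 3 channels for simplicity; MRI frames could be replicated
)
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(2 * K),  # (x, y) per landmark, in image coordinates
])
model.compile(optimizer="adam", loss="mse")
# model.fit(images, landmark_xy, ...)  # then transfer across views as described above
```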
TITLE: SODA: Site Object Detection dAtaset for Deep Learning in Construction
ABSTRACT: Computer vision-based deep learning object detection algorithms have
become sufficiently powerful to recognize various objects. Although there are
currently general datasets for object detection, there is still a lack of
large-scale, open-source datasets for the construction industry, which limits
the development of object detection algorithms as they
tend to be data-hungry. Therefore, this paper develops a new large-scale image
dataset specifically collected and annotated for the construction site, called
Site Object Detection dAtaset (SODA), which contains 15 kinds of object classes
categorized by workers, materials, machines, and layout. Firstly, more than
20,000 images were collected from multiple construction sites in different site
conditions, weather conditions, and construction phases, which covered
different angles and perspectives. After careful screening and processing,
19,846 images including 286,201 objects were then obtained and annotated with
labels in accordance with predefined categories. Statistical analysis shows
that the developed dataset is advantageous in terms of diversity and volume.
Further evaluation with two widely-adopted object detection algorithms based on
deep learning (YOLO v3/ YOLO v4) also illustrates the feasibility of the
dataset for typical construction scenarios, achieving a maximum mAP of 81.47%.
In this manner, this research contributes a large-scale image dataset for the
development of deep learning-based object detection methods in the construction
industry and sets up a performance benchmark for further evaluation of
corresponding algorithms in this area. | {
"abstract": "Computer vision-based deep learning object detection algorithms have been\ndeveloped sufficiently powerful to support the ability to recognize various\nobjects. Although there are currently general datasets for object detection,\nthere is still a lack of large-scale, open-source dataset for the construction\nindustry, which limits the developments of object detection algorithms as they\ntend to be data-hungry. Therefore, this paper develops a new large-scale image\ndataset specifically collected and annotated for the construction site, called\nSite Object Detection dAtaset (SODA), which contains 15 kinds of object classes\ncategorized by workers, materials, machines, and layout. Firstly, more than\n20,000 images were collected from multiple construction sites in different site\nconditions, weather conditions, and construction phases, which covered\ndifferent angles and perspectives. After careful screening and processing,\n19,846 images including 286,201 objects were then obtained and annotated with\nlabels in accordance with predefined categories. Statistical analysis shows\nthat the developed dataset is advantageous in terms of diversity and volume.\nFurther evaluation with two widely-adopted object detection algorithms based on\ndeep learning (YOLO v3/ YOLO v4) also illustrates the feasibility of the\ndataset for typical construction scenarios, achieving a maximum mAP of 81.47%.\nIn this manner, this research contributes a large-scale image dataset for the\ndevelopment of deep learning-based object detection methods in the construction\nindustry and sets up a performance benchmark for further evaluation of\ncorresponding algorithms in this area.",
"title": "SODA: Site Object Detection dAtaset for Deep Learning in Construction",
"url": "http://arxiv.org/abs/2202.09554v1"
} | null | null | new_dataset | admin | null | false | null | 6f4cd0f4-268f-4df4-a1e0-0ca594115151 | null | Validated | 2023-10-04 15:19:51.888195 | {
"text_length": 1759
} | 0new_dataset
|
TITLE: Towards an AI-enabled Connected Industry: AGV Communication and Sensor Measurement Datasets
ABSTRACT: This paper presents two wireless measurement campaigns in industrial
testbeds: industrial Vehicle-to-vehicle (iV2V) and industrial
Vehicle-to-infrastructure plus Sensor (iV2I+), together with detailed
information about the two captured datasets. iV2V covers sidelink communication
scenarios between Automated Guided Vehicles (AGVs), while iV2I+ is conducted at
an industrial setting where an autonomous cleaning robot is connected to a
private cellular network. The combination of different communication
technologies within a common measurement methodology provides insights that can
be exploited by Machine Learning (ML) for tasks such as fingerprinting,
line-of-sight detection, prediction of quality of service or link selection.
Moreover, the datasets are publicly available, labelled and prefiltered for
fast on-boarding and applicability. | {
"abstract": "This paper presents two wireless measurement campaigns in industrial\ntestbeds: industrial Vehicle-to-vehicle (iV2V) and industrial\nVehicle-to-infrastructure plus Sensor (iV2I+), together with detailed\ninformation about the two captured datasets. iV2V covers sidelink communication\nscenarios between Automated Guided Vehicles (AGVs), while iV2I+ is conducted at\nan industrial setting where an autonomous cleaning robot is connected to a\nprivate cellular network. The combination of different communication\ntechnologies within a common measurement methodology provides insights that can\nbe exploited by Machine Learning (ML) for tasks such as fingerprinting,\nline-of-sight detection, prediction of quality of service or link selection.\nMoreover, the datasets are publicly available, labelled and prefiltered for\nfast on-boarding and applicability.",
"title": "Towards an AI-enabled Connected Industry: AGV Communication and Sensor Measurement Datasets",
"url": "http://arxiv.org/abs/2301.03364v4"
} | null | null | no_new_dataset | admin | null | false | null | 62494be6-a749-4668-a401-35235b795f1c | null | Validated | 2023-10-04 15:19:51.881972 | {
"text_length": 971
} | 1no_new_dataset
|
TITLE: Ensemble Classifier Design Tuned to Dataset Characteristics for Network Intrusion Detection
ABSTRACT: Machine Learning-based supervised approaches require highly customized and
fine-tuned methodologies to deliver outstanding performance. This paper
presents a dataset-driven design and performance evaluation of a machine
learning classifier for the network intrusion dataset UNSW-NB15. Analysis of
the dataset suggests that it suffers from class representation imbalance and
class overlap in the feature space. We employed ensemble methods using Balanced
Bagging (BB), eXtreme Gradient Boosting (XGBoost), and Random Forest empowered
by Hellinger Distance Decision Tree (RF-HDDT). BB and XGBoost are tuned to
handle the imbalanced data, and Random Forest (RF) classifier is supplemented
by the Hellinger metric to address the imbalance issue. Two new algorithms are
proposed to address the class overlap issue in the dataset. These two
algorithms are leveraged to help improve the performance of the testing dataset
by modifying the final classification decision made by three base classifiers
as part of the ensemble classifier which employs a majority vote combiner. The
proposed design is evaluated for both binary and multi-category classification.
Comparing the proposed model to those reported on the same dataset in the
literature demonstrates that the proposed model outperforms others by a
significant margin for both binary and multi-category classification cases. | {
"abstract": "Machine Learning-based supervised approaches require highly customized and\nfine-tuned methodologies to deliver outstanding performance. This paper\npresents a dataset-driven design and performance evaluation of a machine\nlearning classifier for the network intrusion dataset UNSW-NB15. Analysis of\nthe dataset suggests that it suffers from class representation imbalance and\nclass overlap in the feature space. We employed ensemble methods using Balanced\nBagging (BB), eXtreme Gradient Boosting (XGBoost), and Random Forest empowered\nby Hellinger Distance Decision Tree (RF-HDDT). BB and XGBoost are tuned to\nhandle the imbalanced data, and Random Forest (RF) classifier is supplemented\nby the Hellinger metric to address the imbalance issue. Two new algorithms are\nproposed to address the class overlap issue in the dataset. These two\nalgorithms are leveraged to help improve the performance of the testing dataset\nby modifying the final classification decision made by three base classifiers\nas part of the ensemble classifier which employs a majority vote combiner. The\nproposed design is evaluated for both binary and multi-category classification.\nComparing the proposed model to those reported on the same dataset in the\nliterature demonstrate that the proposed model outperforms others by a\nsignificant margin for both binary and multi-category classification cases.",
"title": "Ensemble Classifier Design Tuned to Dataset Characteristics for Network Intrusion Detection",
"url": "http://arxiv.org/abs/2205.06177v1"
} | null | null | no_new_dataset | admin | null | false | null | 294e3ef9-22f8-4432-b3ca-4adecd5eff00 | null | Validated | 2023-10-04 15:19:51.886562 | {
"text_length": 1498
} | 1no_new_dataset
|
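The record above combines Balanced Bagging, XGBoost, and a Hellinger-distance Random Forest under a majority-vote combiner. A minimal sketch of that ensemble shape: since scikit-learn ships no Hellinger Distance Decision Tree, a class-weighted Random Forest stands in for RF-HDDT here, and the hyperparameters are placeholders:

```python
# Majority-vote ensemble in the spirit of the record above.
from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from xgboost import XGBClassifier

ensemble = VotingClassifier(
    estimators=[
        ("bb", BalancedBaggingClassifier(n_estimators=50, random_state=0)),
        ("xgb", XGBClassifier(n_estimators=200, scale_pos_weight=10.0)),  # set to neg/pos ratio of the data
        ("rf", RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                      random_state=0)),  # stand-in for RF-HDDT
    ],
    voting="hard",  # majority vote combiner
)
# ensemble.fit(X_train, y_train); y_pred = ensemble.predict(X_test)
```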
TITLE: Interactive exploration of population scale pharmacoepidemiology datasets
ABSTRACT: Population-scale drug prescription data linked with adverse drug reaction
(ADR) data supports the fitting of models large enough to detect drug use and
ADR patterns that are not detectable using traditional methods on smaller
datasets. However, detecting ADR patterns in large datasets requires tools for
scalable data processing, machine learning for data analysis, and interactive
visualization. To our knowledge no existing pharmacoepidemiology tool supports
all three requirements. We have therefore created a tool for interactive
exploration of patterns in prescription datasets with millions of samples. We
use Spark to preprocess the data for machine learning and for analyses using
SQL queries. We have implemented models in Keras and the scikit-learn
framework. The model results are visualized and interpreted using live Python
coding in Jupyter. We apply our tool to explore a dataset of 384 million
prescriptions from the Norwegian Prescription Database combined with 62 million
prescriptions for elders who were hospitalized. We preprocess the data in two
minutes, train models in seconds, and plot the results in milliseconds. Our
results show the power of combining computational power, short computation
times, and ease of use for analysis of population scale pharmacoepidemiology
datasets. The code is open source and available at:
https://github.com/uit-hdl/norpd_prescription_analyses | {
"abstract": "Population-scale drug prescription data linked with adverse drug reaction\n(ADR) data supports the fitting of models large enough to detect drug use and\nADR patterns that are not detectable using traditional methods on smaller\ndatasets. However, detecting ADR patterns in large datasets requires tools for\nscalable data processing, machine learning for data analysis, and interactive\nvisualization. To our knowledge no existing pharmacoepidemiology tool supports\nall three requirements. We have therefore created a tool for interactive\nexploration of patterns in prescription datasets with millions of samples. We\nuse Spark to preprocess the data for machine learning and for analyses using\nSQL queries. We have implemented models in Keras and the scikit-learn\nframework. The model results are visualized and interpreted using live Python\ncoding in Jupyter. We apply our tool to explore a 384 million prescription data\nset from the Norwegian Prescription Database combined with a 62 million\nprescriptions for elders that were hospitalized. We preprocess the data in two\nminutes, train models in seconds, and plot the results in milliseconds. Our\nresults show the power of combining computational power, short computation\ntimes, and ease of use for analysis of population scale pharmacoepidemiology\ndatasets. The code is open source and available at:\nhttps://github.com/uit-hdl/norpd_prescription_analyses",
"title": "Interactive exploration of population scale pharmacoepidemiology datasets",
"url": "http://arxiv.org/abs/2005.09890v1"
} | null | null | no_new_dataset | admin | null | false | null | 79e087ae-beb5-4265-9a86-b2c8d9d5e6aa | null | Validated | 2023-10-04 15:19:51.899903 | {
"text_length": 1511
} | 1no_new_dataset
|
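The record above preprocesses prescription rows with Spark SQL before handing features to scikit-learn/Keras. A minimal sketch of that stage; the file path and column names are hypothetical, not those of the actual Norwegian Prescription Database extract:

```python
# Spark SQL preprocessing sketch: aggregate raw prescription rows into
# per-patient drug counts, then collect a small feature table for modelling.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("norpd-prep").getOrCreate()
rx = spark.read.parquet("prescriptions.parquet")  # hypothetical path
rx.createOrReplaceTempView("rx")

per_patient = spark.sql("""
    SELECT patient_id, atc_code, COUNT(*) AS n_prescriptions
    FROM rx
    GROUP BY patient_id, atc_code
""")
features = per_patient.toPandas()  # small enough to hand to scikit-learn/Keras
```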
TITLE: Is augmentation effective to improve prediction in imbalanced text datasets?
ABSTRACT: Imbalanced datasets present a significant challenge for machine learning
models, often leading to biased predictions. To address this issue, data
augmentation techniques are widely used in natural language processing (NLP) to
generate new samples for the minority class. However, in this paper, we
challenge the common assumption that data augmentation is always necessary to
improve predictions on imbalanced datasets. Instead, we argue that adjusting
the classifier cutoffs without data augmentation can produce similar results to
oversampling techniques. Our study provides theoretical and empirical evidence
to support this claim. Our findings contribute to a better understanding of the
strengths and limitations of different approaches to dealing with imbalanced
data, and help researchers and practitioners make informed decisions about
which methods to use for a given task. | {
"abstract": "Imbalanced datasets present a significant challenge for machine learning\nmodels, often leading to biased predictions. To address this issue, data\naugmentation techniques are widely used in natural language processing (NLP) to\ngenerate new samples for the minority class. However, in this paper, we\nchallenge the common assumption that data augmentation is always necessary to\nimprove predictions on imbalanced datasets. Instead, we argue that adjusting\nthe classifier cutoffs without data augmentation can produce similar results to\noversampling techniques. Our study provides theoretical and empirical evidence\nto support this claim. Our findings contribute to a better understanding of the\nstrengths and limitations of different approaches to dealing with imbalanced\ndata, and help researchers and practitioners make informed decisions about\nwhich methods to use for a given task.",
"title": "Is augmentation effective to improve prediction in imbalanced text datasets?",
"url": "http://arxiv.org/abs/2304.10283v1"
} | null | null | no_new_dataset | admin | null | false | null | 557d240f-f5e9-435f-9fa6-fce550c90c05 | null | Validated | 2023-10-04 15:19:51.879827 | {
"text_length": 993
} | 1no_new_dataset
|
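The record above argues that adjusting the classifier cutoff can match oversampling on imbalanced text. A minimal sketch of cutoff tuning: pick the probability threshold that maximizes F1 on validation data, then apply it at test time. Any probabilistic classifier works; `clf`, `X_val`, and `y_val` are assumed to already exist:

```python
# Cutoff adjustment instead of oversampling: choose the probability threshold
# that maximises F1 on held-out data, then apply it at prediction time.
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_threshold(y_val, proba_val):
    prec, rec, thr = precision_recall_curve(y_val, proba_val)
    f1 = 2 * prec * rec / np.clip(prec + rec, 1e-12, None)
    return thr[np.argmax(f1[:-1])]  # last (prec, rec) point has no threshold

# proba = clf.predict_proba(X_val)[:, 1]   # any fitted probabilistic classifier
# t = best_threshold(y_val, proba)
# y_pred = (clf.predict_proba(X_test)[:, 1] >= t).astype(int)
```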
TITLE: GECTurk: Grammatical Error Correction and Detection Dataset for Turkish
ABSTRACT: Grammatical Error Detection and Correction (GEC) tools have proven useful for
native speakers and second language learners. Developing such tools requires a
large amount of parallel, annotated data, which is unavailable for most
languages. Synthetic data generation is a common practice to overcome the
scarcity of such data. However, it is not straightforward for morphologically
rich languages like Turkish due to complex writing rules that require
phonological, morphological, and syntactic information. In this work, we
present a flexible and extensible synthetic data generation pipeline for
Turkish covering more than 20 expert-curated grammar and spelling rules
(a.k.a., writing rules) implemented through complex transformation functions.
Using this pipeline, we derive 130,000 high-quality parallel sentences from
professionally edited articles. Additionally, we create a more realistic test
set by manually annotating a set of movie reviews. We implement three baselines
formulating the task as i) neural machine translation, ii) sequence tagging,
and iii) prefix tuning with a pretrained decoder-only model, achieving strong
results. Furthermore, we perform exhaustive experiments on out-of-domain
datasets to gain insights on the transferability and robustness of the proposed
approaches. Our results suggest that our corpus, GECTurk, is high-quality and
allows knowledge transfer for the out-of-domain setting. To encourage further
research on Turkish GEC, we release our datasets, baseline models, and the
synthetic data generation pipeline at https://github.com/GGLAB-KU/gecturk. | {
"abstract": "Grammatical Error Detection and Correction (GEC) tools have proven useful for\nnative speakers and second language learners. Developing such tools requires a\nlarge amount of parallel, annotated data, which is unavailable for most\nlanguages. Synthetic data generation is a common practice to overcome the\nscarcity of such data. However, it is not straightforward for morphologically\nrich languages like Turkish due to complex writing rules that require\nphonological, morphological, and syntactic information. In this work, we\npresent a flexible and extensible synthetic data generation pipeline for\nTurkish covering more than 20 expert-curated grammar and spelling rules\n(a.k.a., writing rules) implemented through complex transformation functions.\nUsing this pipeline, we derive 130,000 high-quality parallel sentences from\nprofessionally edited articles. Additionally, we create a more realistic test\nset by manually annotating a set of movie reviews. We implement three baselines\nformulating the task as i) neural machine translation, ii) sequence tagging,\nand iii) prefix tuning with a pretrained decoder-only model, achieving strong\nresults. Furthermore, we perform exhaustive experiments on out-of-domain\ndatasets to gain insights on the transferability and robustness of the proposed\napproaches. Our results suggest that our corpus, GECTurk, is high-quality and\nallows knowledge transfer for the out-of-domain setting. To encourage further\nresearch on Turkish GEC, we release our datasets, baseline models, and the\nsynthetic data generation pipeline at https://github.com/GGLAB-KU/gecturk.",
"title": "GECTurk: Grammatical Error Correction and Detection Dataset for Turkish",
"url": "http://arxiv.org/abs/2309.11346v1"
} | null | null | new_dataset | admin | null | false | null | 3ebdf7a8-8877-444a-95ef-591a72f782ed | null | Validated | 2023-10-04 15:19:51.863395 | {
"text_length": 1700
} | 0new_dataset
|
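The record above derives parallel GEC data by applying expert-curated writing rules as transformation functions to clean text. A minimal sketch of one such rule, using the well-known Turkish convention that the conjunction "de/da" is written as a separate word; joining it onto the previous word yields a synthetic (incorrect, correct) pair. The paper's actual 20+ rules are richer and implemented differently:

```python
# Rule-based synthetic GEC pair: corrupt clean text with a writing-rule
# violation to obtain (source-with-error, corrected-target) training pairs.
import re

def corrupt_de_da(sentence: str) -> str:
    # "Ali de geldi" -> "Alide geldi": attach a standalone "de"/"da".
    return re.sub(r"(\w+) (de|da)\b", r"\1\2", sentence, count=1)

clean = "Ali de geldi ."
noisy = corrupt_de_da(clean)
pair = (noisy, clean)  # (incorrect source, correct target)
```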
TITLE: OpenEDS2020: Open Eyes Dataset
ABSTRACT: We present the second edition of OpenEDS dataset, OpenEDS2020, a novel
dataset of eye-image sequences captured at a frame rate of 100 Hz under
controlled illumination, using a virtual-reality head-mounted display mounted
with two synchronized eye-facing cameras. The dataset, which is anonymized to
remove any personally identifiable information on participants, consists of 80
participants of varied appearance performing several gaze-elicited tasks, and
is divided into two subsets: 1) Gaze Prediction Dataset, with up to 66,560
sequences containing 550,400 eye-images and respective gaze vectors, created to
foster research in spatio-temporal gaze estimation and prediction approaches;
and 2) Eye Segmentation Dataset, consisting of 200 sequences sampled at 5 Hz,
with up to 29,500 images, of which 5% contain a semantic segmentation label,
devised to encourage the use of temporal information to propagate labels to
contiguous frames. Baseline experiments have been evaluated on OpenEDS2020, one
for each task, with average angular error of 5.37 degrees when performing gaze
prediction on 1 to 5 frames into the future, and a mean intersection over union
score of 84.1% for semantic segmentation. As with its predecessor, the OpenEDS
dataset, we anticipate that this new dataset will continue creating opportunities for
researchers in eye tracking, machine learning and computer vision communities,
to advance the state of the art for virtual reality applications. The dataset
is available for download upon request at
http://research.fb.com/programs/openeds-2020-challenge/. | {
"abstract": "We present the second edition of OpenEDS dataset, OpenEDS2020, a novel\ndataset of eye-image sequences captured at a frame rate of 100 Hz under\ncontrolled illumination, using a virtual-reality head-mounted display mounted\nwith two synchronized eye-facing cameras. The dataset, which is anonymized to\nremove any personally identifiable information on participants, consists of 80\nparticipants of varied appearance performing several gaze-elicited tasks, and\nis divided in two subsets: 1) Gaze Prediction Dataset, with up to 66,560\nsequences containing 550,400 eye-images and respective gaze vectors, created to\nfoster research in spatio-temporal gaze estimation and prediction approaches;\nand 2) Eye Segmentation Dataset, consisting of 200 sequences sampled at 5 Hz,\nwith up to 29,500 images, of which 5% contain a semantic segmentation label,\ndevised to encourage the use of temporal information to propagate labels to\ncontiguous frames. Baseline experiments have been evaluated on OpenEDS2020, one\nfor each task, with average angular error of 5.37 degrees when performing gaze\nprediction on 1 to 5 frames into the future, and a mean intersection over union\nscore of 84.1% for semantic segmentation. As its predecessor, OpenEDS dataset,\nwe anticipate that this new dataset will continue creating opportunities to\nresearchers in eye tracking, machine learning and computer vision communities,\nto advance the state of the art for virtual reality applications. The dataset\nis available for download upon request at\nhttp://research.fb.com/programs/openeds-2020-challenge/.",
"title": "OpenEDS2020: Open Eyes Dataset",
"url": "http://arxiv.org/abs/2005.03876v1"
} | null | null | new_dataset | admin | null | false | null | 3c91fcb2-627f-4f30-888a-f6e3fa315c73 | null | Validated | 2023-10-04 15:19:51.900118 | {
"text_length": 1632
} | 0new_dataset
|
TITLE: More Than Reading Comprehension: A Survey on Datasets and Metrics of Textual Question Answering
ABSTRACT: Textual Question Answering (QA) aims to provide precise answers to user's
questions in natural language using unstructured data. One of the most popular
approaches to this goal is machine reading comprehension (MRC). In recent years,
many novel datasets and evaluation metrics based on classical MRC tasks have
been proposed for broader textual QA tasks. In this paper, we survey 47 recent
textual QA benchmark datasets and propose a new taxonomy from an application
point of view. In addition, we summarize 8 evaluation metrics of textual QA
tasks. Finally, we discuss current trends in constructing textual QA benchmarks
and suggest directions for future work. | {
"abstract": "Textual Question Answering (QA) aims to provide precise answers to user's\nquestions in natural language using unstructured data. One of the most popular\napproaches to this goal is machine reading comprehension(MRC). In recent years,\nmany novel datasets and evaluation metrics based on classical MRC tasks have\nbeen proposed for broader textual QA tasks. In this paper, we survey 47 recent\ntextual QA benchmark datasets and propose a new taxonomy from an application\npoint of view. In addition, We summarize 8 evaluation metrics of textual QA\ntasks. Finally, we discuss current trends in constructing textual QA benchmarks\nand suggest directions for future work.",
"title": "More Than Reading Comprehension: A Survey on Datasets and Metrics of Textual Question Answering",
"url": "http://arxiv.org/abs/2109.12264v2"
} | null | null | no_new_dataset | admin | null | false | null | 5df667ab-c5c7-4727-97c8-5b11f49ebcec | null | Validated | 2023-10-04 15:19:51.891948 | {
"text_length": 791
} | 1no_new_dataset
|
TITLE: A Dataset of Kurdish (Sorani) Named Entities -- An Amendment to Kurdish-BLARK Named Entities
ABSTRACT: Named Entity Recognition (NER) is one of the essential applications of
Natural Language Processing (NLP). It is also an instrument that plays a
significant role in many other NLP applications, such as Machine Translation
(MT), Information Retrieval (IR), and Part of Speech Tagging (POST). Kurdish is
an under-resourced language from the NLP perspective. Particularly, in all the
categories, the lack of NER resources hinders other aspects of Kurdish
processing. In this work, we present a data set that covers several categories
of NEs in Kurdish (Sorani). The dataset is a significant amendment to a
previously developed dataset in the Kurdish BLARK (Basic Language Resource
Kit). It covers 11 categories and 33261 entries in total. The dataset is
publicly available for non-commercial use under CC BY-NC-SA 4.0 license at
https://kurdishblark.github.io/. | {
"abstract": "Named Entity Recognition (NER) is one of the essential applications of\nNatural Language Processing (NLP). It is also an instrument that plays a\nsignificant role in many other NLP applications, such as Machine Translation\n(MT), Information Retrieval (IR), and Part of Speech Tagging (POST). Kurdish is\nan under-resourced language from the NLP perspective. Particularly, in all the\ncategories, the lack of NER resources hinders other aspects of Kurdish\nprocessing. In this work, we present a data set that covers several categories\nof NEs in Kurdish (Sorani). The dataset is a significant amendment to a\npreviously developed dataset in the Kurdish BLARK (Basic Language Resource\nKit). It covers 11 categories and 33261 entries in total. The dataset is\npublicly available for non-commercial use under CC BY-NC-SA 4.0 license at\nhttps://kurdishblark.github.io/.",
"title": "A Dataset of Kurdish (Sorani) Named Entities -- An Amendment to Kurdish-BLARK Named Entities",
"url": "http://arxiv.org/abs/2301.04962v1"
} | null | null | new_dataset | admin | null | false | null | a2e660ba-a2ed-4b52-96b4-c743cabb58ae | null | Validated | 2023-10-04 15:19:51.881533 | {
"text_length": 984
} | 0new_dataset
|
TITLE: Learning from Sparse Datasets: Predicting Concrete's Strength by Machine Learning
ABSTRACT: Despite enormous efforts over the last decades to establish the relationship
between concrete proportioning and strength, a robust knowledge-based model for
accurate concrete strength predictions is still lacking. As an alternative to
physical or chemical-based models, data-driven machine learning (ML) methods
offer a new solution to this problem. Although this approach is promising for
handling the complex, non-linear, non-additive relationship between concrete
mixture proportions and strength, a major limitation of ML lies in the fact
that large datasets are needed for model training. This is a concern as
reliable, consistent strength data is rather limited, especially for realistic
industrial concretes. Here, based on the analysis of a large dataset (>10,000
observations) of measured compressive strengths from industrially-produced
concretes, we compare the ability of select ML algorithms to "learn" how to
reliably predict concrete strength as a function of the size of the dataset.
Based on these results, we discuss the competition between how accurate a given
model can eventually be (when trained on a large dataset) and how much data is
actually required to train this model. | {
"abstract": "Despite enormous efforts over the last decades to establish the relationship\nbetween concrete proportioning and strength, a robust knowledge-based model for\naccurate concrete strength predictions is still lacking. As an alternative to\nphysical or chemical-based models, data-driven machine learning (ML) methods\noffer a new solution to this problem. Although this approach is promising for\nhandling the complex, non-linear, non-additive relationship between concrete\nmixture proportions and strength, a major limitation of ML lies in the fact\nthat large datasets are needed for model training. This is a concern as\nreliable, consistent strength data is rather limited, especially for realistic\nindustrial concretes. Here, based on the analysis of a large dataset (>10,000\nobservations) of measured compressive strengths from industrially-produced\nconcretes, we compare the ability of select ML algorithms to \"learn\" how to\nreliably predict concrete strength as a function of the size of the dataset.\nBased on these results, we discuss the competition between how accurate a given\nmodel can eventually be (when trained on a large dataset) and how much data is\nactually required to train this model.",
"title": "Learning from Sparse Datasets: Predicting Concrete's Strength by Machine Learning",
"url": "http://arxiv.org/abs/2004.14407v1"
} | null | null | no_new_dataset | admin | null | false | null | 515986bc-d6cc-4874-8bae-21b004a55284 | null | Validated | 2023-10-04 15:19:51.900422 | {
"text_length": 1313
} | 1no_new_dataset
|
TITLE: Retiring Adult: New Datasets for Fair Machine Learning
ABSTRACT: Although the fairness community has recognized the importance of data,
researchers in the area primarily rely on UCI Adult when it comes to tabular
data. Derived from a 1994 US Census survey, this dataset has appeared in
hundreds of research papers where it served as the basis for the development
and comparison of many algorithmic fairness interventions. We reconstruct a
superset of the UCI Adult data from available US Census sources and reveal
idiosyncrasies of the UCI Adult dataset that limit its external validity. Our
primary contribution is a suite of new datasets derived from US Census surveys
that extend the existing data ecosystem for research on fair machine learning.
We create prediction tasks relating to income, employment, health,
transportation, and housing. The data span multiple years and all states of the
United States, allowing researchers to study temporal shift and geographic
variation. We highlight a broad initial sweep of new empirical insights
relating to trade-offs between fairness criteria, performance of algorithmic
interventions, and the role of distribution shift based on our new datasets.
Our findings inform ongoing debates, challenge some existing narratives, and
point to future research directions. Our datasets are available at
https://github.com/zykls/folktables. | {
"abstract": "Although the fairness community has recognized the importance of data,\nresearchers in the area primarily rely on UCI Adult when it comes to tabular\ndata. Derived from a 1994 US Census survey, this dataset has appeared in\nhundreds of research papers where it served as the basis for the development\nand comparison of many algorithmic fairness interventions. We reconstruct a\nsuperset of the UCI Adult data from available US Census sources and reveal\nidiosyncrasies of the UCI Adult dataset that limit its external validity. Our\nprimary contribution is a suite of new datasets derived from US Census surveys\nthat extend the existing data ecosystem for research on fair machine learning.\nWe create prediction tasks relating to income, employment, health,\ntransportation, and housing. The data span multiple years and all states of the\nUnited States, allowing researchers to study temporal shift and geographic\nvariation. We highlight a broad initial sweep of new empirical insights\nrelating to trade-offs between fairness criteria, performance of algorithmic\ninterventions, and the role of distribution shift based on our new datasets.\nOur findings inform ongoing debates, challenge some existing narratives, and\npoint to future research directions. Our datasets are available at\nhttps://github.com/zykls/folktables.",
"title": "Retiring Adult: New Datasets for Fair Machine Learning",
"url": "http://arxiv.org/abs/2108.04884v3"
} | null | null | new_dataset | admin | null | false | null | 301f3eac-c801-4cda-87be-763a08ba9b20 | null | Validated | 2023-10-04 15:19:51.892977 | {
"text_length": 1402
} | 0new_dataset
|
TITLE: A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset
ABSTRACT: In an effort to catalog insect biodiversity, we propose a new large dataset
of hand-labelled insect images, the BIOSCAN-Insect Dataset. Each record is
taxonomically classified by an expert, and also has associated genetic
information including raw nucleotide barcode sequences and assigned barcode
index numbers, which are genetically-based proxies for species classification.
This paper presents a curated million-image dataset, primarily to train
computer-vision models capable of providing image-based taxonomic assessment,
however, the dataset also presents compelling characteristics, the study of
which would be of interest to the broader machine learning community. Driven by
the biological nature inherent to the dataset, a characteristic long-tailed
class-imbalance distribution is exhibited. Furthermore, taxonomic labelling is
a hierarchical classification scheme, presenting a highly fine-grained
classification problem at lower levels. Beyond spurring interest in
biodiversity research within the machine learning community, progress on
creating an image-based taxonomic classifier will also further the ultimate
goal of all BIOSCAN research: to lay the foundation for a comprehensive survey
of global biodiversity. This paper introduces the dataset and explores the
classification task through the implementation and analysis of a baseline
classifier. | {
"abstract": "In an effort to catalog insect biodiversity, we propose a new large dataset\nof hand-labelled insect images, the BIOSCAN-Insect Dataset. Each record is\ntaxonomically classified by an expert, and also has associated genetic\ninformation including raw nucleotide barcode sequences and assigned barcode\nindex numbers, which are genetically-based proxies for species classification.\nThis paper presents a curated million-image dataset, primarily to train\ncomputer-vision models capable of providing image-based taxonomic assessment,\nhowever, the dataset also presents compelling characteristics, the study of\nwhich would be of interest to the broader machine learning community. Driven by\nthe biological nature inherent to the dataset, a characteristic long-tailed\nclass-imbalance distribution is exhibited. Furthermore, taxonomic labelling is\na hierarchical classification scheme, presenting a highly fine-grained\nclassification problem at lower levels. Beyond spurring interest in\nbiodiversity research within the machine learning community, progress on\ncreating an image-based taxonomic classifier will also further the ultimate\ngoal of all BIOSCAN research: to lay the foundation for a comprehensive survey\nof global biodiversity. This paper introduces the dataset and explores the\nclassification task through the implementation and analysis of a baseline\nclassifier.",
"title": "A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset",
"url": "http://arxiv.org/abs/2307.10455v1"
} | null | null | new_dataset | admin | null | false | null | 4442b601-6f77-494e-8b70-54a97754b425 | null | Validated | 2023-10-04 15:19:51.865552 | {
"text_length": 1479
} | 0new_dataset
|
TITLE: Changes in European Solidarity Before and During COVID-19: Evidence from a Large Crowd- and Expert-Annotated Twitter Dataset
ABSTRACT: We introduce the well-established social scientific concept of social
solidarity and its contestation, anti-solidarity, as a new problem setting to
supervised machine learning in NLP to assess how European solidarity discourses
changed before and after the COVID-19 outbreak was declared a global pandemic.
To this end, we annotate 2.3k English and German tweets for (anti-)solidarity
expressions, utilizing multiple human annotators and two annotation approaches
(experts vs.\ crowds). We use these annotations to train a BERT model with
multiple data augmentation strategies. Our augmented BERT model that combines
both expert and crowd annotations outperforms the baseline BERT classifier
trained with expert annotations only by over 25 points, from 58\% macro-F1 to
almost 85\%. We use this high-quality model to automatically label over 270k
tweets between September 2019 and December 2020. We then assess the
automatically labeled data for how statements related to European
(anti-)solidarity discourses developed over time and in relation to one
another, before and during the COVID-19 crisis. Our results show that
solidarity became increasingly salient and contested during the crisis. While
the number of solidarity tweets remained on a higher level and dominated the
discourse in the scrutinized time frame, anti-solidarity tweets initially
spiked, then decreased to (almost) pre-COVID-19 values before rising to a
stable higher level until the end of 2020. | {
"abstract": "We introduce the well-established social scientific concept of social\nsolidarity and its contestation, anti-solidarity, as a new problem setting to\nsupervised machine learning in NLP to assess how European solidarity discourses\nchanged before and after the COVID-19 outbreak was declared a global pandemic.\nTo this end, we annotate 2.3k English and German tweets for (anti-)solidarity\nexpressions, utilizing multiple human annotators and two annotation approaches\n(experts vs.\\ crowds). We use these annotations to train a BERT model with\nmultiple data augmentation strategies. Our augmented BERT model that combines\nboth expert and crowd annotations outperforms the baseline BERT classifier\ntrained with expert annotations only by over 25 points, from 58\\% macro-F1 to\nalmost 85\\%. We use this high-quality model to automatically label over 270k\ntweets between September 2019 and December 2020. We then assess the\nautomatically labeled data for how statements related to European\n(anti-)solidarity discourses developed over time and in relation to one\nanother, before and during the COVID-19 crisis. Our results show that\nsolidarity became increasingly salient and contested during the crisis. While\nthe number of solidarity tweets remained on a higher level and dominated the\ndiscourse in the scrutinized time frame, anti-solidarity tweets initially\nspiked, then decreased to (almost) pre-COVID-19 values before rising to a\nstable higher level until the end of 2020.",
"title": "Changes in European Solidarity Before and During COVID-19: Evidence from a Large Crowd- and Expert-Annotated Twitter Dataset",
"url": "http://arxiv.org/abs/2108.01042v1"
} | null | null | no_new_dataset | admin | null | false | null | 076834be-f521-481a-b1ca-7946cc3f3e62 | null | Validated | 2023-10-04 15:19:51.893291 | {
"text_length": 1627
} | 1no_new_dataset
|
TITLE: A dataset for audio-video based vehicle speed estimation
ABSTRACT: Accurate speed estimation of road vehicles is important for several reasons.
One is speed limit enforcement, which represents a crucial tool in decreasing
traffic accidents and fatalities. Compared with other research areas and
domains, the number of available datasets for vehicle speed estimation is still
very limited. We present a dataset of on-road audio-video recordings of single
vehicles passing by a camera at known speeds, maintained stable by the on-board
cruise control. The dataset contains thirteen vehicles, selected to be as
diverse as possible in terms of manufacturer, production year, engine type,
power and transmission, resulting in a total of 400 annotated audio-video
recordings. The dataset is fully available and intended as a public benchmark
to facilitate research in audio-video vehicle speed estimation. In addition to
the dataset, we propose a cross-validation strategy which can be used in a
machine learning model for vehicle speed estimation. Two approaches to
training-validation split of the dataset are proposed. | {
"abstract": "Accurate speed estimation of road vehicles is important for several reasons.\nOne is speed limit enforcement, which represents a crucial tool in decreasing\ntraffic accidents and fatalities. Compared with other research areas and\ndomains, the number of available datasets for vehicle speed estimation is still\nvery limited. We present a dataset of on-road audio-video recordings of single\nvehicles passing by a camera at known speeds, maintained stable by the on-board\ncruise control. The dataset contains thirteen vehicles, selected to be as\ndiverse as possible in terms of manufacturer, production year, engine type,\npower and transmission, resulting in a total of $ 400 $ annotated audio-video\nrecordings. The dataset is fully available and intended as a public benchmark\nto facilitate research in audio-video vehicle speed estimation. In addition to\nthe dataset, we propose a cross-validation strategy which can be used in a\nmachine learning model for vehicle speed estimation. Two approaches to\ntraining-validation split of the dataset are proposed.",
"title": "A dataset for audio-video based vehicle speed estimation",
"url": "http://arxiv.org/abs/2212.01651v1"
} | null | null | new_dataset | admin | null | false | null | 14349d67-2649-4d9d-b12e-52381602f2e1 | null | Validated | 2023-10-04 15:19:51.882463 | {
"text_length": 1143
} | 0new_dataset
|
TITLE: An Analytical Study of Covid-19 Dataset using Graph-Based Clustering Algorithms
ABSTRACT: Corona VIrus Disease abbreviated as COVID-19 is a novel virus which is
initially identified in Wuhan of China in December of 2019 and now this deadly
disease has spread all over the world. According to World Health Organization
(WHO), a total of 3,124,905 people died from 2019 to 2021, April. In this case,
many methods, AI-based techniques, and machine learning algorithms have been
researched and are being used to save people from this pandemic. The SARS-CoV
and the 2019-nCoV, SARS-CoV-2 virus invade our bodies, causing some differences
in the structure of cell proteins. Protein-protein interaction (PPI) is an
essential process in our cells and plays a very important role in the
development of medicines and gives ideas about the disease. In this study, we
performed clustering on PPI networks generated from 92 genes of the Covid-19
dataset. We have used three graph-based clustering algorithms to give intuition
to the analysis of clusters. | {
"abstract": "Corona VIrus Disease abbreviated as COVID-19 is a novel virus which is\ninitially identified in Wuhan of China in December of 2019 and now this deadly\ndisease has spread all over the world. According to World Health Organization\n(WHO), a total of 3,124,905 people died from 2019 to 2021, April. In this case,\nmany methods, AI base techniques, and machine learning algorithms have been\nresearched and are being used to save people from this pandemic. The SARS-CoV\nand the 2019-nCoV, SARS-CoV-2 virus invade our bodies, causing some differences\nin the structure of cell proteins. Protein-protein interaction (PPI) is an\nessential process in our cells and plays a very important role in the\ndevelopment of medicines and gives ideas about the disease. In this study, we\nperformed clustering on PPI networks generated from 92 genes of the Covi-19\ndataset. We have used three graph-based clustering algorithms to give intuition\nto the analysis of clusters.",
"title": "An Analytical Study of Covid-19 Dataset using Graph-Based Clustering Algorithms",
"url": "http://arxiv.org/abs/2308.04697v1"
} | null | null | no_new_dataset | admin | null | false | null | 882d3500-b413-44b8-b7d8-981154c994a6 | null | Validated | 2023-10-04 15:19:51.864276 | {
"text_length": 1063
} | 1no_new_dataset
|
TITLE: A domain-specific language for describing machine learning datasets
ABSTRACT: Datasets play a central role in the training and evaluation of machine
learning (ML) models. But they are also the root cause of many undesired model
behaviors, such as biased predictions. To overcome this situation, the ML
community is proposing a data-centric cultural shift where data issues are
given the attention they deserve, and more standard practices around the
gathering and processing of datasets start to be discussed and established.
So far, these proposals are mostly high-level guidelines described in natural
language and, as such, they are difficult to formalize and apply to particular
datasets. In this sense, and inspired by these proposals, we define a new
domain-specific language (DSL) to precisely describe machine learning datasets
in terms of their structure, data provenance, and social concerns. We believe
this DSL will facilitate any ML initiative to leverage and benefit from this
data-centric shift in ML (e.g., selecting the most appropriate dataset for a
new project or better replicating other ML results). The DSL is implemented as
a Visual Studio Code plugin, and it has been published under an open source
license. | {
"abstract": "Datasets play a central role in the training and evaluation of machine\nlearning (ML) models. But they are also the root cause of many undesired model\nbehaviors, such as biased predictions. To overcome this situation, the ML\ncommunity is proposing a data-centric cultural shift where data issues are\ngiven the attention they deserve, and more standard practices around the\ngathering and processing of datasets start to be discussed and established.\n So far, these proposals are mostly high-level guidelines described in natural\nlanguage and, as such, they are difficult to formalize and apply to particular\ndatasets. In this sense, and inspired by these proposals, we define a new\ndomain-specific language (DSL) to precisely describe machine learning datasets\nin terms of their structure, data provenance, and social concerns. We believe\nthis DSL will facilitate any ML initiative to leverage and benefit from this\ndata-centric shift in ML (e.g., selecting the most appropriate dataset for a\nnew project or better replicating other ML results). The DSL is implemented as\na Visual Studio Code plugin, and it has been published under an open source\nlicense.",
"title": "A domain-specific language for describing machine learning datasets",
"url": "http://arxiv.org/abs/2207.02848v2"
} | null | null | no_new_dataset | admin | null | false | null | df33c0ab-9957-4c35-b9d7-2da5fdb5006c | null | Validated | 2023-10-04 15:19:51.885438 | {
"text_length": 1257
} | 1no_new_dataset
|
TITLE: Position Paper on Dataset Engineering to Accelerate Science
ABSTRACT: Data is a critical element in any discovery process. In the last decades, we
observed exponential growth in the volume of available data and the technology
to manipulate it. However, data is only practical when one can structure it for
a well-defined task. For instance, we need a corpus of text broken into
sentences to train a natural language machine-learning model. In this work, we
will use the token \textit{dataset} to designate a structured set of data built
to perform a well-defined task. Moreover, the dataset will be used in most
cases as a blueprint of an entity that at any moment can be stored as a table.
Specifically, in science, each area has unique forms to organize, gather and
handle its datasets. We believe that datasets must be a first-class entity in
any knowledge-intensive process, and all workflows should have exceptional
attention to datasets' lifecycle, from their gathering to uses and evolution.
We advocate that science and engineering discovery processes are extreme
instances of the need for such organization on datasets, calling for new
approaches and tooling. Furthermore, these requirements are more evident when
the discovery workflow uses artificial intelligence methods to empower the
subject-matter expert. In this work, we discuss an approach to bringing
datasets as a critical entity in the discovery process in science. We
illustrate some concepts using material discovery as a use case. We chose this
domain because it leverages many significant problems that can be generalized
to other science fields. | {
"abstract": "Data is a critical element in any discovery process. In the last decades, we\nobserved exponential growth in the volume of available data and the technology\nto manipulate it. However, data is only practical when one can structure it for\na well-defined task. For instance, we need a corpus of text broken into\nsentences to train a natural language machine-learning model. In this work, we\nwill use the token \\textit{dataset} to designate a structured set of data built\nto perform a well-defined task. Moreover, the dataset will be used in most\ncases as a blueprint of an entity that at any moment can be stored as a table.\nSpecifically, in science, each area has unique forms to organize, gather and\nhandle its datasets. We believe that datasets must be a first-class entity in\nany knowledge-intensive process, and all workflows should have exceptional\nattention to datasets' lifecycle, from their gathering to uses and evolution.\nWe advocate that science and engineering discovery processes are extreme\ninstances of the need for such organization on datasets, claiming for new\napproaches and tooling. Furthermore, these requirements are more evident when\nthe discovery workflow uses artificial intelligence methods to empower the\nsubject-matter expert. In this work, we discuss an approach to bringing\ndatasets as a critical entity in the discovery process in science. We\nillustrate some concepts using material discovery as a use case. We chose this\ndomain because it leverages many significant problems that can be generalized\nto other science fields.",
"title": "Position Paper on Dataset Engineering to Accelerate Science",
"url": "http://arxiv.org/abs/2303.05545v1"
} | null | null | no_new_dataset | admin | null | false | null | 38400ef2-f896-40b6-94f9-79e3bfc67c1f | null | Validated | 2023-10-04 15:19:51.880511 | {
"text_length": 1646
} | 1no_new_dataset
|
TITLE: Fraud Dataset Benchmark and Applications
ABSTRACT: Standardized datasets and benchmarks have spurred innovations in computer
vision, natural language processing, multi-modal and tabular settings. We note
that, as compared to other well researched fields, fraud detection has unique
challenges: high-class imbalance, diverse feature types, frequently changing
fraud patterns, and adversarial nature of the problem. Due to these, the
modeling approaches evaluated on datasets from other research fields may not
work well for the fraud detection. In this paper, we introduce Fraud Dataset
Benchmark (FDB), a compilation of publicly available datasets catered to fraud
detection. FDB comprises a variety of fraud-related tasks, ranging from
identifying fraudulent card-not-present transactions, detecting bot attacks,
classifying malicious URLs, estimating risk of loan default to content
moderation. The Python based library for FDB provides a consistent API for data
loading with standardized training and testing splits. We demonstrate several
applications of FDB that are of broad interest for fraud detection, including
feature engineering, comparison of supervised learning algorithms, label noise
removal, class-imbalance treatment and semi-supervised learning. We hope that
FDB provides a common playground for researchers and practitioners in the fraud
detection domain to develop robust and customized machine learning techniques
targeting various fraud use cases. | {
"abstract": "Standardized datasets and benchmarks have spurred innovations in computer\nvision, natural language processing, multi-modal and tabular settings. We note\nthat, as compared to other well researched fields, fraud detection has unique\nchallenges: high-class imbalance, diverse feature types, frequently changing\nfraud patterns, and adversarial nature of the problem. Due to these, the\nmodeling approaches evaluated on datasets from other research fields may not\nwork well for the fraud detection. In this paper, we introduce Fraud Dataset\nBenchmark (FDB), a compilation of publicly available datasets catered to fraud\ndetection FDB comprises variety of fraud related tasks, ranging from\nidentifying fraudulent card-not-present transactions, detecting bot attacks,\nclassifying malicious URLs, estimating risk of loan default to content\nmoderation. The Python based library for FDB provides a consistent API for data\nloading with standardized training and testing splits. We demonstrate several\napplications of FDB that are of broad interest for fraud detection, including\nfeature engineering, comparison of supervised learning algorithms, label noise\nremoval, class-imbalance treatment and semi-supervised learning. We hope that\nFDB provides a common playground for researchers and practitioners in the fraud\ndetection domain to develop robust and customized machine learning techniques\ntargeting various fraud use cases.",
"title": "Fraud Dataset Benchmark and Applications",
"url": "http://arxiv.org/abs/2208.14417v3"
} | null | null | new_dataset | admin | null | false | null | 970e215e-d4f3-4cf3-b844-e4d1e2945336 | null | Validated | 2023-10-04 15:19:51.884394 | {
"text_length": 1491
} | 0new_dataset
|
TITLE: There is no data like more data -- current status of machine learning datasets in remote sensing
ABSTRACT: Annotated datasets have become one of the most crucial preconditions for the
development and evaluation of machine learning-based methods designed for the
automated interpretation of remote sensing data. In this paper, we review the
historic development of such datasets, discuss their features based on a few
selected examples, and address open issues for future developments. | {
"abstract": "Annotated datasets have become one of the most crucial preconditions for the\ndevelopment and evaluation of machine learning-based methods designed for the\nautomated interpretation of remote sensing data. In this paper, we review the\nhistoric development of such datasets, discuss their features based on a few\nselected examples, and address open issues for future developments.",
"title": "There is no data like more data -- current status of machine learning datasets in remote sensing",
"url": "http://arxiv.org/abs/2105.11726v2"
} | null | null | no_new_dataset | admin | null | false | null | 65ab55eb-f5e4-4809-beed-e8fe487e4b48 | null | Validated | 2023-10-04 15:19:51.894423 | {
"text_length": 508
} | 1no_new_dataset
|
TITLE: CircuitNet: An Open-Source Dataset for Machine Learning Applications in Electronic Design Automation (EDA)
ABSTRACT: The electronic design automation (EDA) community has been actively exploring
machine learning (ML) for very large-scale integrated computer-aided design
(VLSI CAD). Many studies explored learning-based techniques for cross-stage
prediction tasks in the design flow to achieve faster design convergence.
Although building ML models usually requires a large amount of data, most
studies can only generate small internal datasets for validation because of the
lack of large public datasets. In this essay, we present the first open-source
dataset called CircuitNet for ML tasks in VLSI CAD. | {
"abstract": "The electronic design automation (EDA) community has been actively exploring\nmachine learning (ML) for very large-scale integrated computer-aided design\n(VLSI CAD). Many studies explored learning-based techniques for cross-stage\nprediction tasks in the design flow to achieve faster design convergence.\nAlthough building ML models usually requires a large amount of data, most\nstudies can only generate small internal datasets for validation because of the\nlack of large public datasets. In this essay, we present the first open-source\ndataset called CircuitNet for ML tasks in VLSI CAD.",
"title": "CircuitNet: An Open-Source Dataset for Machine Learning Applications in Electronic Design Automation (EDA)",
"url": "http://arxiv.org/abs/2208.01040v4"
} | null | null | new_dataset | admin | null | false | null | 70ed2888-04a9-414e-81dc-2b7f734f3d5e | null | Validated | 2023-10-04 15:19:51.884979 | {
"text_length": 728
} | 0new_dataset
|
TITLE: A Survey on Industrial Control System Testbeds and Datasets for Security Research
ABSTRACT: The increasing digitization and interconnection of legacy Industrial Control
Systems (ICSs) open new vulnerability surfaces, exposing such systems to
malicious attackers. Furthermore, since ICSs are often employed in critical
infrastructures (e.g., nuclear plants) and manufacturing companies (e.g.,
chemical industries), attacks can lead to devastating physical damages. In
dealing with this security requirement, the research community focuses on
developing new security mechanisms such as Intrusion Detection Systems (IDSs),
facilitated by leveraging modern machine learning techniques. However, these
algorithms require a testing platform and a considerable amount of data to be
trained and tested accurately. To satisfy this prerequisite, Academia,
Industry, and Government are increasingly proposing testbeds (i.e., scaled-down
versions of ICSs or simulations) to test the performances of the IDSs.
Furthermore, to enable researchers to cross-validate security systems (e.g.,
security-by-design concepts or anomaly detectors), several datasets have been
collected from testbeds and shared with the community. In this paper, we
provide a deep and comprehensive overview of ICSs, presenting the architecture
design, the employed devices, and the security protocols implemented. We then
collect, compare, and describe testbeds and datasets in the literature,
highlighting key challenges and design guidelines to keep in mind in the design
phases. Furthermore, we enrich our work by reporting the best performing IDS
algorithms tested on every dataset to create a baseline in state of the art for
this field. Finally, driven by knowledge accumulated during this survey's
development, we report advice and good practices on the development, the
choice, and the utilization of testbeds, datasets, and IDSs. | {
"abstract": "The increasing digitization and interconnection of legacy Industrial Control\nSystems (ICSs) open new vulnerability surfaces, exposing such systems to\nmalicious attackers. Furthermore, since ICSs are often employed in critical\ninfrastructures (e.g., nuclear plants) and manufacturing companies (e.g.,\nchemical industries), attacks can lead to devastating physical damages. In\ndealing with this security requirement, the research community focuses on\ndeveloping new security mechanisms such as Intrusion Detection Systems (IDSs),\nfacilitated by leveraging modern machine learning techniques. However, these\nalgorithms require a testing platform and a considerable amount of data to be\ntrained and tested accurately. To satisfy this prerequisite, Academia,\nIndustry, and Government are increasingly proposing testbed (i.e., scaled-down\nversions of ICSs or simulations) to test the performances of the IDSs.\nFurthermore, to enable researchers to cross-validate security systems (e.g.,\nsecurity-by-design concepts or anomaly detectors), several datasets have been\ncollected from testbeds and shared with the community. In this paper, we\nprovide a deep and comprehensive overview of ICSs, presenting the architecture\ndesign, the employed devices, and the security protocols implemented. We then\ncollect, compare, and describe testbeds and datasets in the literature,\nhighlighting key challenges and design guidelines to keep in mind in the design\nphases. Furthermore, we enrich our work by reporting the best performing IDS\nalgorithms tested on every dataset to create a baseline in state of the art for\nthis field. Finally, driven by knowledge accumulated during this survey's\ndevelopment, we report advice and good practices on the development, the\nchoice, and the utilization of testbeds, datasets, and IDSs.",
"title": "A Survey on Industrial Control System Testbeds and Datasets for Security Research",
"url": "http://arxiv.org/abs/2102.05631v3"
} | null | null | no_new_dataset | admin | null | false | null | 05be641d-0356-42d3-a5a8-e11c62686771 | null | Validated | 2023-10-04 15:19:51.895876 | {
"text_length": 1921
} | 1no_new_dataset
|
TITLE: A Benchmarking Dataset with 2440 Organic Molecules for Volume Distribution at Steady State
ABSTRACT: Background: The volume of distribution at steady state (VDss) is a
fundamental pharmacokinetics (PK) property of drugs, which measures how
effectively a drug molecule is distributed throughout the body. Along with the
clearance (CL), it determines the half-life and, therefore, the drug dosing
interval. However, the molecular data size limits the generalizability of the
reported machine learning models. Objective: This study aims to provide a clean
and comprehensive dataset for human VDss as the benchmarking data source,
fostering and benefiting future predictive studies. Moreover, several
predictive models were also built with machine learning regression algorithms.
Methods: The dataset was curated from 13 publicly accessible data sources and
the DrugBank database entirely from intravenous drug administration and then
underwent extensive data cleaning. The molecular descriptors were calculated
with Mordred, and feature selection was conducted for constructing predictive
models. Five machine learning methods were used to build regression models,
grid search was used to optimize hyperparameters, and ten-fold cross-validation
was used to evaluate the model. Results: An enriched dataset of VDss
(https://github.com/da-wen-er/VDss) was constructed with 2440 molecules. Among
the prediction models, the LightGBM model was the most stable and had the best
internal prediction ability with Q2 = 0.837, R2=0.814 and for the other four
models, Q2 was higher than 0.79. Conclusions: To the best of our knowledge,
this is the largest dataset for VDss, which can be used as the benchmark for
computational studies of VDss. Moreover, the regression models reported within
this study can be of use for pharmacokinetic related studies. | {
"abstract": "Background: The volume of distribution at steady state (VDss) is a\nfundamental pharmacokinetics (PK) property of drugs, which measures how\neffectively a drug molecule is distributed throughout the body. Along with the\nclearance (CL), it determines the half-life and, therefore, the drug dosing\ninterval. However, the molecular data size limits the generalizability of the\nreported machine learning models. Objective: This study aims to provide a clean\nand comprehensive dataset for human VDss as the benchmarking data source,\nfostering and benefiting future predictive studies. Moreover, several\npredictive models were also built with machine learning regression algorithms.\nMethods: The dataset was curated from 13 publicly accessible data sources and\nthe DrugBank database entirely from intravenous drug administration and then\nunderwent extensive data cleaning. The molecular descriptors were calculated\nwith Mordred, and feature selection was conducted for constructing predictive\nmodels. Five machine learning methods were used to build regression models,\ngrid search was used to optimize hyperparameters, and ten-fold cross-validation\nwas used to evaluate the model. Results: An enriched dataset of VDss\n(https://github.com/da-wen-er/VDss) was constructed with 2440 molecules. Among\nthe prediction models, the LightGBM model was the most stable and had the best\ninternal prediction ability with Q2 = 0.837, R2=0.814 and for the other four\nmodels, Q2 was higher than 0.79. Conclusions: To the best of our knowledge,\nthis is the largest dataset for VDss, which can be used as the benchmark for\ncomputational studies of VDss. Moreover, the regression models reported within\nthis study can be of use for pharmacokinetic related studies.",
"title": "A Benchmarking Dataset with 2440 Organic Molecules for Volume Distribution at Steady State",
"url": "http://arxiv.org/abs/2211.05661v1"
} | null | null | new_dataset | admin | null | false | null | 4be8cf84-9ee8-452c-8270-486868b8c99f | null | Validated | 2023-10-04 15:19:51.882913 | {
"text_length": 1863
} | 0new_dataset
|
TITLE: Efficacy of MRI data harmonization in the age of machine learning. A multicenter study across 36 datasets
ABSTRACT: Pooling publicly-available MRI data from multiple sites allows one to assemble
extensive groups of subjects, increase statistical power, and promote data
reuse with machine learning techniques. The harmonization of multicenter data
is necessary to reduce the confounding effect associated with non-biological
sources of variability in the data. However, when applied to the entire dataset
before machine learning, the harmonization leads to data leakage, because
information outside the training set may affect model building, and potentially
falsely overestimate performance. We propose a 1) measurement of the efficacy
of data harmonization; 2) harmonizer transformer, i.e., an implementation of
the ComBat harmonization allowing its encapsulation among the preprocessing
steps of a machine learning pipeline, avoiding data leakage. We tested these
tools using brain T1-weighted MRI data from 1740 healthy subjects acquired at
36 sites. After harmonization, the site effect was removed or reduced, and we
showed the data leakage effect in predicting individual age from MRI data,
highlighting that introducing the harmonizer transformer into a machine
learning pipeline allows for avoiding data leakage. | {
"abstract": "Pooling publicly-available MRI data from multiple sites allows to assemble\nextensive groups of subjects, increase statistical power, and promote data\nreuse with machine learning techniques. The harmonization of multicenter data\nis necessary to reduce the confounding effect associated with non-biological\nsources of variability in the data. However, when applied to the entire dataset\nbefore machine learning, the harmonization leads to data leakage, because\ninformation outside the training set may affect model building, and potentially\nfalsely overestimate performance. We propose a 1) measurement of the efficacy\nof data harmonization; 2) harmonizer transformer, i.e., an implementation of\nthe ComBat harmonization allowing its encapsulation among the preprocessing\nsteps of a machine learning pipeline, avoiding data leakage. We tested these\ntools using brain T1-weighted MRI data from 1740 healthy subjects acquired at\n36 sites. After harmonization, the site effect was removed or reduced, and we\nshowed the data leakage effect in predicting individual age from MRI data,\nhighlighting that introducing the harmonizer transformer into a machine\nlearning pipeline allows for avoiding data leakage.",
"title": "Efficacy of MRI data harmonization in the age of machine learning. A multicenter study across 36 datasets",
"url": "http://arxiv.org/abs/2211.04125v3"
} | null | null | no_new_dataset | admin | null | false | null | 70ed9264-dac9-47f4-87e8-3dfd9fd76e33 | null | Validated | 2023-10-04 15:19:51.882964 | {
"text_length": 1341
} | 1no_new_dataset
|
TITLE: Scalable mRMR feature selection to handle high dimensional datasets: Vertical partitioning based Iterative MapReduce framework
ABSTRACT: While building machine learning models, Feature selection (FS) stands out as
an essential preprocessing step used to handle the uncertainty and vagueness in
the data. Recently, the minimum Redundancy and Maximum Relevance (mRMR)
approach has proven to be effective in obtaining the irredundant feature
subset. Owing to the generation of voluminous datasets, it is essential to
design scalable solutions using distributed/parallel paradigms. MapReduce
solutions are proven to be one of the best approaches to designing
fault-tolerant and scalable solutions. This work analyses the existing
MapReduce approaches for mRMR feature selection and identifies the limitations
thereof. In the current study, we proposed VMR_mRMR, an efficient vertical
partitioning-based approach using a memorization approach, thereby overcoming
the extant approaches' limitations. The experimental analysis shows that VMR_mRMR
significantly outperformed extant approaches and achieved a better
computational gain (C.G). In addition, we also conducted a comparative analysis
with the horizontal partitioning approach HMR_mRMR [1] to assess the strengths
and limitations of the proposed approach. | {
"abstract": "While building machine learning models, Feature selection (FS) stands out as\nan essential preprocessing step used to handle the uncertainty and vagueness in\nthe data. Recently, the minimum Redundancy and Maximum Relevance (mRMR)\napproach has proven to be effective in obtaining the irredundant feature\nsubset. Owing to the generation of voluminous datasets, it is essential to\ndesign scalable solutions using distributed/parallel paradigms. MapReduce\nsolutions are proven to be one of the best approaches to designing\nfault-tolerant and scalable solutions. This work analyses the existing\nMapReduce approaches for mRMR feature selection and identifies the limitations\nthereof. In the current study, we proposed VMR_mRMR, an efficient vertical\npartitioning-based approach using a memorization approach, thereby overcoming\nthe extant approaches limitations. The experiment analysis says that VMR_mRMR\nsignificantly outperformed extant approaches and achieved a better\ncomputational gain (C.G). In addition, we also conducted a comparative analysis\nwith the horizontal partitioning approach HMR_mRMR [1] to assess the strengths\nand limitations of the proposed approach.",
"title": "Scalable mRMR feature selection to handle high dimensional datasets: Vertical partitioning based Iterative MapReduce framework",
"url": "http://arxiv.org/abs/2208.09901v1"
} | null | null | no_new_dataset | admin | null | false | null | 7ce72dae-c96a-4d20-b0ed-9d4e1476739b | null | Validated | 2023-10-04 15:19:51.884673 | {
"text_length": 1327
} | 1no_new_dataset
|
TITLE: Fusion 360 Gallery: A Dataset and Environment for Programmatic CAD Construction from Human Design Sequences
ABSTRACT: Parametric computer-aided design (CAD) is a standard paradigm used to design
manufactured objects, where a 3D shape is represented as a program supported by
the CAD software. Despite the pervasiveness of parametric CAD and a growing
interest from the research community, currently there does not exist a dataset
of realistic CAD models in a concise programmatic form. In this paper we
present the Fusion 360 Gallery, consisting of a simple language with just the
sketch and extrude modeling operations, and a dataset of 8,625 human design
sequences expressed in this language. We also present an interactive
environment called the Fusion 360 Gym, which exposes the sequential
construction of a CAD program as a Markov decision process, making it amenable
to machine learning approaches. As a use case for our dataset and environment,
we define the CAD reconstruction task of recovering a CAD program from a target
geometry. We report results of applying state-of-the-art methods of program
synthesis with neurally guided search on this task. | {
"abstract": "Parametric computer-aided design (CAD) is a standard paradigm used to design\nmanufactured objects, where a 3D shape is represented as a program supported by\nthe CAD software. Despite the pervasiveness of parametric CAD and a growing\ninterest from the research community, currently there does not exist a dataset\nof realistic CAD models in a concise programmatic form. In this paper we\npresent the Fusion 360 Gallery, consisting of a simple language with just the\nsketch and extrude modeling operations, and a dataset of 8,625 human design\nsequences expressed in this language. We also present an interactive\nenvironment called the Fusion 360 Gym, which exposes the sequential\nconstruction of a CAD program as a Markov decision process, making it amendable\nto machine learning approaches. As a use case for our dataset and environment,\nwe define the CAD reconstruction task of recovering a CAD program from a target\ngeometry. We report results of applying state-of-the-art methods of program\nsynthesis with neurally guided search on this task.",
"title": "Fusion 360 Gallery: A Dataset and Environment for Programmatic CAD Construction from Human Design Sequences",
"url": "http://arxiv.org/abs/2010.02392v2"
} | null | null | new_dataset | admin | null | false | null | 4b89adfd-ddeb-4ec8-a124-cc50c928a1f7 | null | Validated | 2023-10-04 15:19:51.897833 | {
"text_length": 1184
} | 0new_dataset
|
TITLE: A Tweet-based Dataset for Company-Level Stock Return Prediction
ABSTRACT: Public opinion influences events, especially related to stock market
movement, in which a subtle hint can influence the local outcome of the market.
In this paper, we present a dataset that allows for company-level analysis of
tweet based impact on one-, two-, three-, and seven-day stock returns. Our
dataset consists of 862, 231 labelled instances from twitter in English, we
also release a cleaned subset of 85, 176 labelled instances to the community.
We also provide baselines using standard machine learning algorithms and a
multi-view learning based approach that makes use of different types of
features. Our dataset, scripts and models are publicly available at:
https://github.com/ImperialNLP/stockreturnpred. | {
"abstract": "Public opinion influences events, especially related to stock market\nmovement, in which a subtle hint can influence the local outcome of the market.\nIn this paper, we present a dataset that allows for company-level analysis of\ntweet based impact on one-, two-, three-, and seven-day stock returns. Our\ndataset consists of 862, 231 labelled instances from twitter in English, we\nalso release a cleaned subset of 85, 176 labelled instances to the community.\nWe also provide baselines using standard machine learning algorithms and a\nmulti-view learning based approach that makes use of different types of\nfeatures. Our dataset, scripts and models are publicly available at:\nhttps://github.com/ImperialNLP/stockreturnpred.",
"title": "A Tweet-based Dataset for Company-Level Stock Return Prediction",
"url": "http://arxiv.org/abs/2006.09723v1"
} | null | null | new_dataset | admin | null | false | null | 9396233b-ff0e-40bb-b3f8-71cbf5faf18b | null | Validated | 2023-10-04 15:19:51.899540 | {
"text_length": 817
} | 0new_dataset
|
TITLE: Challenge Dataset of Cognates and False Friend Pairs from Indian Languages
ABSTRACT: Cognates are present in multiple variants of the same text across different
languages (e.g., "hund" in German and "hound" in English language mean "dog").
They pose a challenge to various Natural Language Processing (NLP) applications
such as Machine Translation, Cross-lingual Sense Disambiguation, Computational
Phylogenetics, and Information Retrieval. A possible solution to address this
challenge is to identify cognates across language pairs. In this paper, we
describe the creation of two cognate datasets for twelve Indian languages,
namely Sanskrit, Hindi, Assamese, Oriya, Kannada, Gujarati, Tamil, Telugu,
Punjabi, Bengali, Marathi, and Malayalam. We digitize the cognate data from an
Indian language cognate dictionary and utilize linked Indian language Wordnets
to generate cognate sets. Additionally, we use the Wordnet data to create a
False Friends' dataset for eleven language pairs. We also evaluate the efficacy
of our dataset using previously available baseline cognate detection
approaches. We also perform a manual evaluation with the help of lexicographers
and release the curated gold-standard dataset with this paper. | {
"abstract": "Cognates are present in multiple variants of the same text across different\nlanguages (e.g., \"hund\" in German and \"hound\" in English language mean \"dog\").\nThey pose a challenge to various Natural Language Processing (NLP) applications\nsuch as Machine Translation, Cross-lingual Sense Disambiguation, Computational\nPhylogenetics, and Information Retrieval. A possible solution to address this\nchallenge is to identify cognates across language pairs. In this paper, we\ndescribe the creation of two cognate datasets for twelve Indian languages,\nnamely Sanskrit, Hindi, Assamese, Oriya, Kannada, Gujarati, Tamil, Telugu,\nPunjabi, Bengali, Marathi, and Malayalam. We digitize the cognate data from an\nIndian language cognate dictionary and utilize linked Indian language Wordnets\nto generate cognate sets. Additionally, we use the Wordnet data to create a\nFalse Friends' dataset for eleven language pairs. We also evaluate the efficacy\nof our dataset using previously available baseline cognate detection\napproaches. We also perform a manual evaluation with the help of lexicographers\nand release the curated gold-standard dataset with this paper.",
"title": "Challenge Dataset of Cognates and False Friend Pairs from Indian Languages",
"url": "http://arxiv.org/abs/2112.09526v1"
} | null | null | new_dataset | admin | null | false | null | 460770a3-903f-45a1-8d76-9da1dd020cf0 | null | Validated | 2023-10-04 15:19:51.889202 | {
"text_length": 1251
} | 0new_dataset
|
TITLE: Ontology-Driven Self-Supervision for Adverse Childhood Experiences Identification Using Social Media Datasets
ABSTRACT: Adverse Childhood Experiences (ACEs) are defined as a collection of highly
stressful, and potentially traumatic, events or circumstances that occur
throughout childhood and/or adolescence. They have been shown to be associated
with increased risks of mental health diseases or other abnormal behaviours in
later lives. However, the identification of ACEs from textual data with Natural
Language Processing (NLP) is challenging because (a) there are no NLP ready ACE
ontologies; (b) there are few resources available for machine learning,
necessitating the data annotation from clinical experts; (c) costly annotations
by domain experts and large number of documents for supporting large machine
learning models. In this paper, we present an ontology-driven self-supervised
approach (derive concept embeddings using an auto-encoder from baseline NLP
results) for producing a publicly available resource that would support
large-scale machine learning (e.g., training transformer based large language
models) on social media corpus. This resource as well as the proposed approach
are aimed to facilitate the community in training transferable NLP models for
effectively surfacing ACEs in low-resource scenarios like NLP on clinical notes
within Electronic Health Records. The resource including a list of ACE ontology
terms, ACE concept embeddings and the NLP annotated corpus is available at
https://github.com/knowlab/ACE-NLP. | {
"abstract": "Adverse Childhood Experiences (ACEs) are defined as a collection of highly\nstressful, and potentially traumatic, events or circumstances that occur\nthroughout childhood and/or adolescence. They have been shown to be associated\nwith increased risks of mental health diseases or other abnormal behaviours in\nlater lives. However, the identification of ACEs from textual data with Natural\nLanguage Processing (NLP) is challenging because (a) there are no NLP ready ACE\nontologies; (b) there are few resources available for machine learning,\nnecessitating the data annotation from clinical experts; (c) costly annotations\nby domain experts and large number of documents for supporting large machine\nlearning models. In this paper, we present an ontology-driven self-supervised\napproach (derive concept embeddings using an auto-encoder from baseline NLP\nresults) for producing a publicly available resource that would support\nlarge-scale machine learning (e.g., training transformer based large language\nmodels) on social media corpus. This resource as well as the proposed approach\nare aimed to facilitate the community in training transferable NLP models for\neffectively surfacing ACEs in low-resource scenarios like NLP on clinical notes\nwithin Electronic Health Records. The resource including a list of ACE ontology\nterms, ACE concept embeddings and the NLP annotated corpus is available at\nhttps://github.com/knowlab/ACE-NLP.",
"title": "Ontology-Driven Self-Supervision for Adverse Childhood Experiences Identification Using Social Media Datasets",
"url": "http://arxiv.org/abs/2208.11701v1"
} | null | null | no_new_dataset | admin | null | false | null | 5846cde1-7bf0-4e54-8ea2-4f61a5495c86 | null | Validated | 2023-10-04 15:19:51.884600 | {
"text_length": 1570
} | 1no_new_dataset
|
TITLE: HLSDataset: Open-Source Dataset for ML-Assisted FPGA Design using High Level Synthesis
ABSTRACT: Machine Learning (ML) has been widely adopted in design exploration using
high level synthesis (HLS) to give better and faster performance, resource,
and power estimation at very early stages for FPGA-based design. To
perform prediction accurately, high-quality and large-volume datasets are
required for training ML models. This paper presents a dataset for ML-assisted
FPGA design using HLS, called HLSDataset. The dataset is generated from widely
used HLS C benchmarks including Polybench, MachSuite, CHStone and Rosetta. The
Verilog samples are generated with a variety of directives including loop
unroll, loop pipeline and array partition to make sure optimized and realistic
designs are covered. The total number of generated Verilog samples is nearly
9,000 per FPGA type. To demonstrate the effectiveness of our dataset, we
undertake case studies to perform power estimation and resource usage
estimation with ML models trained with our dataset. All the code and the dataset
are public at the GitHub repo. We believe that HLSDataset can save valuable time
for researchers by avoiding the tedious process of running tools, scripting and
parsing files to generate the dataset, and enable them to spend more time where
it counts, that is, in training ML models. | {
"abstract": "Machine Learning (ML) has been widely adopted in design exploration using\nhigh level synthesis (HLS) to give a better and faster performance, and\nresource and power estimation at very early stages for FPGA-based design. To\nperform prediction accurately, high-quality and large-volume datasets are\nrequired for training ML models.This paper presents a dataset for ML-assisted\nFPGA design using HLS, called HLSDataset. The dataset is generated from widely\nused HLS C benchmarks including Polybench, Machsuite, CHStone and Rossetta. The\nVerilog samples are generated with a variety of directives including loop\nunroll, loop pipeline and array partition to make sure optimized and realistic\ndesigns are covered. The total number of generated Verilog samples is nearly\n9,000 per FPGA type. To demonstrate the effectiveness of our dataset, we\nundertake case studies to perform power estimation and resource usage\nestimation with ML models trained with our dataset. All the codes and dataset\nare public at the github repo.We believe that HLSDataset can save valuable time\nfor researchers by avoiding the tedious process of running tools, scripting and\nparsing files to generate the dataset, and enable them to spend more time where\nit counts, that is, in training ML models.",
"title": "HLSDataset: Open-Source Dataset for ML-Assisted FPGA Design using High Level Synthesis",
"url": "http://arxiv.org/abs/2302.10977v2"
} | null | null | new_dataset | admin | null | false | null | 696fa31a-156b-4658-8333-d0e6c4c76112 | null | Validated | 2023-10-04 15:19:51.881120 | {
"text_length": 1388
} | 0new_dataset
|
TITLE: Biographical: A Semi-Supervised Relation Extraction Dataset
ABSTRACT: Extracting biographical information from online documents is a popular
research topic among the information extraction (IE) community. Various natural
language processing (NLP) techniques such as text classification, text
summarisation and relation extraction are commonly used to achieve this. Among
these techniques, RE is the most common since it can be directly used to build
biographical knowledge graphs. RE is usually framed as a supervised machine
learning (ML) problem, where ML models are trained on annotated datasets.
However, there are few annotated datasets for RE since the annotation process
can be costly and time-consuming. To address this, we developed Biographical,
the first semi-supervised dataset for RE. The dataset, which is aimed towards
digital humanities (DH) and historical research, is automatically compiled by
aligning sentences from Wikipedia articles with matching structured data from
sources including Pantheon and Wikidata. By exploiting the structure of
Wikipedia articles and robust named entity recognition (NER), we match
information with relatively high precision in order to compile annotated
relation pairs for ten different relations that are important in the DH domain.
Furthermore, we demonstrate the effectiveness of the dataset by training a
state-of-the-art neural model to classify relation pairs, and evaluate it on a
manually annotated gold standard set. Biographical is primarily aimed at
training neural models for RE within the domain of digital humanities and
history, but as we discuss at the end of this paper, it can be useful for other
purposes as well. | {
"abstract": "Extracting biographical information from online documents is a popular\nresearch topic among the information extraction (IE) community. Various natural\nlanguage processing (NLP) techniques such as text classification, text\nsummarisation and relation extraction are commonly used to achieve this. Among\nthese techniques, RE is the most common since it can be directly used to build\nbiographical knowledge graphs. RE is usually framed as a supervised machine\nlearning (ML) problem, where ML models are trained on annotated datasets.\nHowever, there are few annotated datasets for RE since the annotation process\ncan be costly and time-consuming. To address this, we developed Biographical,\nthe first semi-supervised dataset for RE. The dataset, which is aimed towards\ndigital humanities (DH) and historical research, is automatically compiled by\naligning sentences from Wikipedia articles with matching structured data from\nsources including Pantheon and Wikidata. By exploiting the structure of\nWikipedia articles and robust named entity recognition (NER), we match\ninformation with relatively high precision in order to compile annotated\nrelation pairs for ten different relations that are important in the DH domain.\nFurthermore, we demonstrate the effectiveness of the dataset by training a\nstate-of-the-art neural model to classify relation pairs, and evaluate it on a\nmanually annotated gold standard set. Biographical is primarily aimed at\ntraining neural models for RE within the domain of digital humanities and\nhistory, but as we discuss at the end of this paper, it can be useful for other\npurposes as well.",
"title": "Biographical: A Semi-Supervised Relation Extraction Dataset",
"url": "http://arxiv.org/abs/2205.00806v1"
} | null | null | new_dataset | admin | null | false | null | 2c511955-e009-4576-a903-3a2195d8fc6f | null | Validated | 2023-10-04 15:19:51.886728 | {
"text_length": 1708
} | 0new_dataset
|
TITLE: SOLD: Sinhala Offensive Language Dataset
ABSTRACT: The widespread presence of offensive content online, such as hate speech and
cyber-bullying, is a global phenomenon. This has sparked interest in the
artificial intelligence (AI) and natural language processing (NLP) communities,
motivating the development of various systems trained to detect potentially
harmful content automatically. These systems require annotated datasets to
train the machine learning (ML) models. However, with a few notable exceptions,
most datasets on this topic have dealt with English and a few other
high-resource languages. As a result, the research in offensive language
identification has been limited to these languages. This paper addresses this
gap by tackling offensive language identification in Sinhala, a low-resource
Indo-Aryan language spoken by over 17 million people in Sri Lanka. We introduce
the Sinhala Offensive Language Dataset (SOLD) and present multiple experiments
on this dataset. SOLD is a manually annotated dataset containing 10,000 posts
from Twitter annotated as offensive and not offensive at both sentence-level
and token-level, improving the explainability of the ML models. SOLD is the
first large publicly available offensive language dataset compiled for Sinhala.
We also introduce SemiSOLD, a larger dataset containing more than 145,000
Sinhala tweets, annotated following a semi-supervised approach. | {
"abstract": "The widespread of offensive content online, such as hate speech and\ncyber-bullying, is a global phenomenon. This has sparked interest in the\nartificial intelligence (AI) and natural language processing (NLP) communities,\nmotivating the development of various systems trained to detect potentially\nharmful content automatically. These systems require annotated datasets to\ntrain the machine learning (ML) models. However, with a few notable exceptions,\nmost datasets on this topic have dealt with English and a few other\nhigh-resource languages. As a result, the research in offensive language\nidentification has been limited to these languages. This paper addresses this\ngap by tackling offensive language identification in Sinhala, a low-resource\nIndo-Aryan language spoken by over 17 million people in Sri Lanka. We introduce\nthe Sinhala Offensive Language Dataset (SOLD) and present multiple experiments\non this dataset. SOLD is a manually annotated dataset containing 10,000 posts\nfrom Twitter annotated as offensive and not offensive at both sentence-level\nand token-level, improving the explainability of the ML models. SOLD is the\nfirst large publicly available offensive language dataset compiled for Sinhala.\nWe also introduce SemiSOLD, a larger dataset containing more than 145,000\nSinhala tweets, annotated following a semi-supervised approach.",
"title": "SOLD: Sinhala Offensive Language Dataset",
"url": "http://arxiv.org/abs/2212.00851v1"
} | null | null | new_dataset | admin | null | false | null | 3e4bc237-2776-4d87-aebb-c720e96db614 | null | Validated | 2023-10-04 15:19:51.882533 | {
"text_length": 1430
} | 0new_dataset
|
TITLE: Video compression dataset and benchmark of learning-based video-quality metrics
ABSTRACT: Video-quality measurement is a critical task in video processing. Nowadays,
many implementations of new encoding standards - such as AV1, VVC, and LCEVC -
use deep-learning-based decoding algorithms with perceptual metrics that serve
as optimization objectives. But investigations of the performance of modern
video- and image-quality metrics commonly employ videos compressed using older
standards, such as AVC. In this paper, we present a new benchmark for
video-quality metrics that evaluates video compression. It is based on a new
dataset consisting of about 2,500 streams encoded using different standards,
including AVC, HEVC, AV1, VP9, and VVC. Subjective scores were collected using
crowdsourced pairwise comparisons. The list of evaluated metrics includes
recent ones based on machine learning and neural networks. The results
demonstrate that new no-reference metrics exhibit a high correlation with
subjective quality and approach the capability of top full-reference metrics. | {
"abstract": "Video-quality measurement is a critical task in video processing. Nowadays,\nmany implementations of new encoding standards - such as AV1, VVC, and LCEVC -\nuse deep-learning-based decoding algorithms with perceptual metrics that serve\nas optimization objectives. But investigations of the performance of modern\nvideo- and image-quality metrics commonly employ videos compressed using older\nstandards, such as AVC. In this paper, we present a new benchmark for\nvideo-quality metrics that evaluates video compression. It is based on a new\ndataset consisting of about 2,500 streams encoded using different standards,\nincluding AVC, HEVC, AV1, VP9, and VVC. Subjective scores were collected using\ncrowdsourced pairwise comparisons. The list of evaluated metrics includes\nrecent ones based on machine learning and neural networks. The results\ndemonstrate that new no-reference metrics exhibit a high correlation with\nsubjective quality and approach the capability of top full-reference metrics.",
"title": "Video compression dataset and benchmark of learning-based video-quality metrics",
"url": "http://arxiv.org/abs/2211.12109v2"
} | null | null | new_dataset | admin | null | false | null | 16bdbcde-ffba-4939-952e-71b06be560eb | null | Validated | 2023-10-04 15:19:51.882724 | {
"text_length": 1102
} | 0new_dataset
|
TITLE: EBHI-Seg: A Novel Enteroscope Biopsy Histopathological Haematoxylin and Eosin Image Dataset for Image Segmentation Tasks
ABSTRACT: Background and Purpose: Colorectal cancer is a common fatal malignancy, the
fourth most common cancer in men, and the third most common cancer in women
worldwide. Timely detection of cancer in its early stages is essential for
treating the disease. Currently, there is a lack of datasets for
histopathological image segmentation of rectal cancer, which often hampers the
assessment accuracy when computer technology is used to aid in diagnosis.
Methods: The present study provides a new publicly available Enteroscope
Biopsy Histopathological Hematoxylin and Eosin Image Dataset for Image
Segmentation Tasks (EBHI-Seg). To demonstrate the validity and extensiveness of
EBHI-Seg, the experimental results for EBHI-Seg are evaluated using classical
machine learning methods and deep learning methods. Results: The experimental
results showed that deep learning methods had a better image segmentation
performance when utilizing EBHI-Seg. The maximum accuracy of the Dice
evaluation metric for the classical machine learning method is 0.948, while the
Dice evaluation metric for the deep learning method is 0.965. Conclusion: This
publicly available dataset contains 5,170 images of six types of tumor
differentiation stages and the corresponding ground truth images. The dataset
can provide researchers with new segmentation algorithms for medical diagnosis
of colorectal cancer, which can be used in the clinical setting to help doctors
and patients. | {
"abstract": "Background and Purpose: Colorectal cancer is a common fatal malignancy, the\nfourth most common cancer in men, and the third most common cancer in women\nworldwide. Timely detection of cancer in its early stages is essential for\ntreating the disease. Currently, there is a lack of datasets for\nhistopathological image segmentation of rectal cancer, which often hampers the\nassessment accuracy when computer technology is used to aid in diagnosis.\nMethods: This present study provided a new publicly available Enteroscope\nBiopsy Histopathological Hematoxylin and Eosin Image Dataset for Image\nSegmentation Tasks (EBHI-Seg). To demonstrate the validity and extensiveness of\nEBHI-Seg, the experimental results for EBHI-Seg are evaluated using classical\nmachine learning methods and deep learning methods. Results: The experimental\nresults showed that deep learning methods had a better image segmentation\nperformance when utilizing EBHI-Seg. The maximum accuracy of the Dice\nevaluation metric for the classical machine learning method is 0.948, while the\nDice evaluation metric for the deep learning method is 0.965. Conclusion: This\npublicly available dataset contained 5,170 images of six types of tumor\ndifferentiation stages and the corresponding ground truth images. The dataset\ncan provide researchers with new segmentation algorithms for medical diagnosis\nof colorectal cancer, which can be used in the clinical setting to help doctors\nand patients.",
"title": "EBHI-Seg: A Novel Enteroscope Biopsy Histopathological Haematoxylin and Eosin Image Dataset for Image Segmentation Tasks",
"url": "http://arxiv.org/abs/2212.00532v3"
} | null | null | new_dataset | admin | null | false | null | 2c41663f-2eb2-46cd-806d-cea7cb77a243 | null | Validated | 2023-10-04 15:19:51.882557 | {
"text_length": 1606
} | 0new_dataset
|
TITLE: Building a Manga Dataset "Manga109" with Annotations for Multimedia Applications
ABSTRACT: Manga, or comics, which are a type of multimodal artwork, have been left
behind in the recent trend of deep learning applications because of the lack of
a proper dataset. Hence, we built Manga109, a dataset consisting of a variety
of 109 Japanese comic books (94 authors and 21,142 pages) and made it publicly
available by obtaining author permissions for academic use. We carefully
annotated the frames, speech texts, character faces, and character bodies; the
total number of annotations exceeds 500k. This dataset provides numerous manga
images and annotations, which will be beneficial for use in machine learning
algorithms and their evaluation. In addition to academic use, we obtained
further permission for a subset of the dataset for industrial use. In this
article, we describe the details of the dataset and present a few examples of
multimedia processing applications (detection, retrieval, and generation) that
apply existing deep learning methods and are made possible by the dataset. | {
"abstract": "Manga, or comics, which are a type of multimodal artwork, have been left\nbehind in the recent trend of deep learning applications because of the lack of\na proper dataset. Hence, we built Manga109, a dataset consisting of a variety\nof 109 Japanese comic books (94 authors and 21,142 pages) and made it publicly\navailable by obtaining author permissions for academic use. We carefully\nannotated the frames, speech texts, character faces, and character bodies; the\ntotal number of annotations exceeds 500k. This dataset provides numerous manga\nimages and annotations, which will be beneficial for use in machine learning\nalgorithms and their evaluation. In addition to academic use, we obtained\nfurther permission for a subset of the dataset for industrial use. In this\narticle, we describe the details of the dataset and present a few examples of\nmultimedia processing applications (detection, retrieval, and generation) that\napply existing deep learning methods and are made possible by the dataset.",
"title": "Building a Manga Dataset \"Manga109\" with Annotations for Multimedia Applications",
"url": "http://arxiv.org/abs/2005.04425v2"
} | null | null | new_dataset | admin | null | false | null | 8685286a-cb69-49b3-b307-7770d51bb6d9 | null | Validated | 2023-10-04 15:19:51.900046 | {
"text_length": 1113
} | 0new_dataset
|
TITLE: ProMap: Datasets for Product Mapping in E-commerce
ABSTRACT: The goal of product mapping is to decide whether two listings from two
different e-shops describe the same products. Existing datasets of matching and
non-matching pairs of products, however, often suffer from incomplete product
information or contain only very distant non-matching products. Therefore,
while predictive models trained on these datasets achieve good results on them,
in practice, they are unusable as they cannot distinguish very similar but
non-matching pairs of products. This paper introduces two new datasets for
product mapping: ProMapCz consisting of 1,495 Czech product pairs and ProMapEn
consisting of 1,555 English product pairs of matching and non-matching products
manually scraped from two pairs of e-shops. The datasets contain both images
and textual descriptions of the products, including their specifications,
making them one of the most complete datasets for product mapping.
Additionally, the non-matching products were selected in two phases, creating
two types of non-matches -- close non-matches and medium non-matches. Even the
medium non-matches are pairs of products that are much more similar than
non-matches in other datasets -- for example, they still need to have the same
brand and similar name and price. After simple data preprocessing, several
machine learning algorithms were trained on these and two other datasets to
demonstrate the complexity and completeness of ProMap datasets. ProMap datasets
are presented as a gold standard for further research on product mapping,
filling the gaps in existing ones. | {
"abstract": "The goal of product mapping is to decide, whether two listings from two\ndifferent e-shops describe the same products. Existing datasets of matching and\nnon-matching pairs of products, however, often suffer from incomplete product\ninformation or contain only very distant non-matching products. Therefore,\nwhile predictive models trained on these datasets achieve good results on them,\nin practice, they are unusable as they cannot distinguish very similar but\nnon-matching pairs of products. This paper introduces two new datasets for\nproduct mapping: ProMapCz consisting of 1,495 Czech product pairs and ProMapEn\nconsisting of 1,555 English product pairs of matching and non-matching products\nmanually scraped from two pairs of e-shops. The datasets contain both images\nand textual descriptions of the products, including their specifications,\nmaking them one of the most complete datasets for product mapping.\nAdditionally, the non-matching products were selected in two phases, creating\ntwo types of non-matches -- close non-matches and medium non-matches. Even the\nmedium non-matches are pairs of products that are much more similar than\nnon-matches in other datasets -- for example, they still need to have the same\nbrand and similar name and price. After simple data preprocessing, several\nmachine learning algorithms were trained on these and two the other datasets to\ndemonstrate the complexity and completeness of ProMap datasets. ProMap datasets\nare presented as a golden standard for further research of product mapping\nfilling the gaps in existing ones.",
"title": "ProMap: Datasets for Product Mapping in E-commerce",
"url": "http://arxiv.org/abs/2309.06882v1"
} | null | null | new_dataset | admin | null | false | null | 98b23a80-626a-4e8e-a232-5615fb093ff9 | null | Validated | 2023-10-04 15:19:51.863617 | {
"text_length": 1650
} | 0new_dataset
|
TITLE: MultiCoNER: A Large-scale Multilingual dataset for Complex Named Entity Recognition
ABSTRACT: We present MultiCoNER, a large multilingual dataset for Named Entity
Recognition that covers 3 domains (Wiki sentences, questions, and search
queries) across 11 languages, as well as multilingual and code-mixing subsets.
This dataset is designed to represent contemporary challenges in NER, including
low-context scenarios (short and uncased text), syntactically complex entities
like movie titles, and long-tail entity distributions. The 26M token dataset is
compiled from public resources using techniques such as heuristic-based
sentence sampling, template extraction and slotting, and machine translation.
We applied two NER models on our dataset: a baseline XLM-RoBERTa model, and a
state-of-the-art GEMNET model that leverages gazetteers. The baseline achieves
moderate performance (macro-F1=54%), highlighting the difficulty of our data.
GEMNET, which uses gazetteers, improves significantly (average improvement
of macro-F1=+30%). MultiCoNER poses challenges even for large pre-trained
language models, and we believe that it can help further research in building
robust NER systems. MultiCoNER is publicly available at
https://registry.opendata.aws/multiconer/ and we hope that this resource will
help advance research in various aspects of NER. | {
"abstract": "We present MultiCoNER, a large multilingual dataset for Named Entity\nRecognition that covers 3 domains (Wiki sentences, questions, and search\nqueries) across 11 languages, as well as multilingual and code-mixing subsets.\nThis dataset is designed to represent contemporary challenges in NER, including\nlow-context scenarios (short and uncased text), syntactically complex entities\nlike movie titles, and long-tail entity distributions. The 26M token dataset is\ncompiled from public resources using techniques such as heuristic-based\nsentence sampling, template extraction and slotting, and machine translation.\nWe applied two NER models on our dataset: a baseline XLM-RoBERTa model, and a\nstate-of-the-art GEMNET model that leverages gazetteers. The baseline achieves\nmoderate performance (macro-F1=54%), highlighting the difficulty of our data.\nGEMNET, which uses gazetteers, improvement significantly (average improvement\nof macro-F1=+30%). MultiCoNER poses challenges even for large pre-trained\nlanguage models, and we believe that it can help further research in building\nrobust NER systems. MultiCoNER is publicly available at\nhttps://registry.opendata.aws/multiconer/ and we hope that this resource will\nhelp advance research in various aspects of NER.",
"title": "MultiCoNER: A Large-scale Multilingual dataset for Complex Named Entity Recognition",
"url": "http://arxiv.org/abs/2208.14536v1"
} | null | null | new_dataset | admin | null | false | null | 56aa19a9-b936-46d9-9501-40cba4082c0e | null | Validated | 2023-10-04 15:19:51.884346 | {
"text_length": 1375
} | 0new_dataset
|
TITLE: A Systematic Review of Machine Learning Techniques for Cattle Identification: Datasets, Methods and Future Directions
ABSTRACT: Increased biosecurity and food safety requirements may increase demand for
efficient traceability and identification systems of livestock in the supply
chain. The advanced technologies of machine learning and computer vision have
been applied in precision livestock management, including critical disease
detection, vaccination, production management, tracking, and health monitoring.
This paper offers a systematic literature review (SLR) of vision-based cattle
identification. More specifically, this SLR aims to identify and analyse the
research related to cattle identification using Machine Learning (ML) and Deep
Learning (DL). For the two main applications of cattle detection and cattle
identification, all the ML based papers only solve cattle identification
problems. However, both detection and identification problems were studied in
the DL based papers. Based on our survey report, the most used ML models for
cattle identification were support vector machine (SVM), k-nearest neighbour
(KNN), and artificial neural network (ANN). Convolutional neural network (CNN),
residual network (ResNet), Inception, You Only Look Once (YOLO), and Faster
R-CNN were popular DL models in the selected papers. Among these papers, the
most distinguishing features were the muzzle prints and coat patterns of
cattle. Local binary pattern (LBP), speeded up robust features (SURF),
scale-invariant feature transform (SIFT), and Inception or CNN were identified
as the most used feature extraction methods. | {
"abstract": "Increased biosecurity and food safety requirements may increase demand for\nefficient traceability and identification systems of livestock in the supply\nchain. The advanced technologies of machine learning and computer vision have\nbeen applied in precision livestock management, including critical disease\ndetection, vaccination, production management, tracking, and health monitoring.\nThis paper offers a systematic literature review (SLR) of vision-based cattle\nidentification. More specifically, this SLR is to identify and analyse the\nresearch related to cattle identification using Machine Learning (ML) and Deep\nLearning (DL). For the two main applications of cattle detection and cattle\nidentification, all the ML based papers only solve cattle identification\nproblems. However, both detection and identification problems were studied in\nthe DL based papers. Based on our survey report, the most used ML models for\ncattle identification were support vector machine (SVM), k-nearest neighbour\n(KNN), and artificial neural network (ANN). Convolutional neural network (CNN),\nresidual network (ResNet), Inception, You Only Look Once (YOLO), and Faster\nR-CNN were popular DL models in the selected papers. Among these papers, the\nmost distinguishing features were the muzzle prints and coat patterns of\ncattle. Local binary pattern (LBP), speeded up robust features (SURF),\nscale-invariant feature transform (SIFT), and Inception or CNN were identified\nas the most used feature extraction methods.",
"title": "A Systematic Review of Machine Learning Techniques for Cattle Identification: Datasets, Methods and Future Directions",
"url": "http://arxiv.org/abs/2210.09215v1"
} | null | null | no_new_dataset | admin | null | false | null | d475dae8-5066-4c2a-a44e-c6a6ec66c226 | null | Validated | 2023-10-04 15:19:51.883522 | {
"text_length": 1650
} | 1no_new_dataset
|
TITLE: Ensuring Dataset Quality for Machine Learning Certification
ABSTRACT: In this paper, we address the problem of dataset quality in the context of
Machine Learning (ML)-based critical systems. We briefly analyse the
applicability of some existing standards dealing with data and show that the
specificities of the ML context are neither properly captured nor taken into
ac-count. As a first answer to this concerning situation, we propose a dataset
specification and verification process, and apply it on a signal recognition
system from the railway domain. In addi-tion, we also give a list of
recommendations for the collection and management of datasets. This work is one
step towards the dataset engineering process that will be required for ML to be
used on safety critical systems. | {
"abstract": "In this paper, we address the problem of dataset quality in the context of\nMachine Learning (ML)-based critical systems. We briefly analyse the\napplicability of some existing standards dealing with data and show that the\nspecificities of the ML context are neither properly captured nor taken into\nac-count. As a first answer to this concerning situation, we propose a dataset\nspecification and verification process, and apply it on a signal recognition\nsystem from the railway domain. In addi-tion, we also give a list of\nrecommendations for the collection and management of datasets. This work is one\nstep towards the dataset engineering process that will be required for ML to be\nused on safety critical systems.",
"title": "Ensuring Dataset Quality for Machine Learning Certification",
"url": "http://arxiv.org/abs/2011.01799v1"
} | null | null | no_new_dataset | admin | null | false | null | 5613ca29-b4cb-4a6f-8670-9f37aeb9adba | null | Validated | 2023-10-04 15:19:51.897256 | {
"text_length": 809
} | 1no_new_dataset
|
TITLE: ImageSubject: A Large-scale Dataset for Subject Detection
ABSTRACT: Main subjects usually exist in the images or videos, as they are the objects
that the photographer wants to highlight. Human viewers can easily identify
them but algorithms often confuse them with other objects. Detecting the main
subjects is an important technique to help machines understand the content of
images and videos. We present a new dataset with the goal of training models to
understand the layout of the objects and the context of the image, and then to find
the main subjects among them. This is achieved in three aspects. By gathering
images from movie shots created by directors with professional shooting skills,
we collect the dataset with strong diversity, specifically, it contains
107,700 images from 21,540 movie shots. We labeled them with the bounding box
labels for two classes: subject and non-subject foreground object. We present a
detailed analysis of the dataset and compare the task with saliency detection
and object detection. ImageSubject is the first dataset that tries to localize
the subject in an image that the photographer wants to highlight. Moreover, we
find the transformer-based detection model offers the best result among other
popular model architectures. Finally, we discuss the potential applications and
conclude with the importance of the dataset. | {
"abstract": "Main subjects usually exist in the images or videos, as they are the objects\nthat the photographer wants to highlight. Human viewers can easily identify\nthem but algorithms often confuse them with other objects. Detecting the main\nsubjects is an important technique to help machines understand the content of\nimages and videos. We present a new dataset with the goal of training models to\nunderstand the layout of the objects and the context of the image then to find\nthe main subjects among them. This is achieved in three aspects. By gathering\nimages from movie shots created by directors with professional shooting skills,\nwe collect the dataset with strong diversity, specifically, it contains\n107\\,700 images from 21\\,540 movie shots. We labeled them with the bounding box\nlabels for two classes: subject and non-subject foreground object. We present a\ndetailed analysis of the dataset and compare the task with saliency detection\nand object detection. ImageSubject is the first dataset that tries to localize\nthe subject in an image that the photographer wants to highlight. Moreover, we\nfind the transformer-based detection model offers the best result among other\npopular model architectures. Finally, we discuss the potential applications and\nconclude with the importance of the dataset.",
"title": "ImageSubject: A Large-scale Dataset for Subject Detection",
"url": "http://arxiv.org/abs/2201.03101v2"
} | null | null | new_dataset | admin | null | false | null | 6184da8a-8083-465e-8aa1-86f4aeece61f | null | Validated | 2023-10-04 15:19:51.888964 | {
"text_length": 1388
} | 0new_dataset
|
TITLE: Constructing Multilingual Code Search Dataset Using Neural Machine Translation
ABSTRACT: Code search is a task to find programming codes that semantically match the
given natural language queries. Even though some of the existing datasets for
this task are multilingual on the programming language side, their query data
are only in English. In this research, we create a multilingual code search
dataset in four natural and four programming languages using a neural machine
translation model. Using our dataset, we pre-train and fine-tune the
Transformer-based models and then evaluate them on multiple code search test
sets. Our results show that the model pre-trained with all natural and
programming language data has performed best in most cases. By applying
back-translation data filtering to our dataset, we demonstrate that the
translation quality affects the model's performance to a certain extent, but
the data size matters more. | {
"abstract": "Code search is a task to find programming codes that semantically match the\ngiven natural language queries. Even though some of the existing datasets for\nthis task are multilingual on the programming language side, their query data\nare only in English. In this research, we create a multilingual code search\ndataset in four natural and four programming languages using a neural machine\ntranslation model. Using our dataset, we pre-train and fine-tune the\nTransformer-based models and then evaluate them on multiple code search test\nsets. Our results show that the model pre-trained with all natural and\nprogramming language data has performed best in most cases. By applying\nback-translation data filtering to our dataset, we demonstrate that the\ntranslation quality affects the model's performance to a certain extent, but\nthe data size matters more.",
"title": "Constructing Multilingual Code Search Dataset Using Neural Machine Translation",
"url": "http://arxiv.org/abs/2306.15604v1"
} | null | null | new_dataset | admin | null | false | null | cbb34d2f-3e2f-48ae-a723-2a62e1667451 | null | Default | 2023-10-04 15:19:51.869727 | {
"text_length": 964
} | 0new_dataset
|
TITLE: The CSIRO Crown-of-Thorn Starfish Detection Dataset
ABSTRACT: Crown-of-Thorn Starfish (COTS) outbreaks are a major cause of coral loss on
the Great Barrier Reef (GBR) and substantial surveillance and control programs
are underway in an attempt to manage COTS populations to ecologically
sustainable levels. We release a large-scale, annotated underwater image
dataset from a COTS outbreak area on the GBR, to encourage research on Machine
Learning and AI-driven technologies to improve the detection, monitoring, and
management of COTS populations at reef scale. The dataset is released and
hosted in a Kaggle competition that challenges the international Machine
Learning community with the task of COTS detection from these underwater
images. | {
"abstract": "Crown-of-Thorn Starfish (COTS) outbreaks are a major cause of coral loss on\nthe Great Barrier Reef (GBR) and substantial surveillance and control programs\nare underway in an attempt to manage COTS populations to ecologically\nsustainable levels. We release a large-scale, annotated underwater image\ndataset from a COTS outbreak area on the GBR, to encourage research on Machine\nLearning and AI-driven technologies to improve the detection, monitoring, and\nmanagement of COTS populations at reef scale. The dataset is released and\nhosted in a Kaggle competition that challenges the international Machine\nLearning community with the task of COTS detection from these underwater\nimages.",
"title": "The CSIRO Crown-of-Thorn Starfish Detection Dataset",
"url": "http://arxiv.org/abs/2111.14311v1"
} | null | null | new_dataset | admin | null | false | null | 7af65e19-249f-4827-94c3-8b910a8deda1 | null | Validated | 2023-10-04 15:19:51.889572 | {
"text_length": 768
} | 0new_dataset
|
TITLE: NOD: Taking a Closer Look at Detection under Extreme Low-Light Conditions with Night Object Detection Dataset
ABSTRACT: Recent work indicates that, besides being a challenge in producing
perceptually pleasing images, low light proves more difficult for machine
cognition than previously thought. In our work, we take a closer look at object
detection in low light. First, to support the development and evaluation of new
methods in this domain, we present a high-quality large-scale Night Object
Detection (NOD) dataset showing dynamic scenes captured on the streets at
night. Next, we directly link the lighting conditions to perceptual difficulty
and identify what makes low light problematic for machine cognition.
Accordingly, we provide instance-level annotation for a subset of the dataset
for an in-depth evaluation of future methods. We also present an analysis of
the baseline model performance to highlight opportunities for future research
and show that low light is a non-trivial problem that requires special
attention from the researchers. Further, to address the issues caused by low
light, we propose to incorporate an image enhancement module into the object
detection framework and two novel data augmentation techniques. Our image
enhancement module is trained under the guidance of the object detector to
learn image representation optimal for machine cognition rather than for the
human visual system. Finally, experimental results confirm that the proposed
method shows consistent improvement of the performance on low-light datasets. | {
"abstract": "Recent work indicates that, besides being a challenge in producing\nperceptually pleasing images, low light proves more difficult for machine\ncognition than previously thought. In our work, we take a closer look at object\ndetection in low light. First, to support the development and evaluation of new\nmethods in this domain, we present a high-quality large-scale Night Object\nDetection (NOD) dataset showing dynamic scenes captured on the streets at\nnight. Next, we directly link the lighting conditions to perceptual difficulty\nand identify what makes low light problematic for machine cognition.\nAccordingly, we provide instance-level annotation for a subset of the dataset\nfor an in-depth evaluation of future methods. We also present an analysis of\nthe baseline model performance to highlight opportunities for future research\nand show that low light is a non-trivial problem that requires special\nattention from the researchers. Further, to address the issues caused by low\nlight, we propose to incorporate an image enhancement module into the object\ndetection framework and two novel data augmentation techniques. Our image\nenhancement module is trained under the guidance of the object detector to\nlearn image representation optimal for machine cognition rather than for the\nhuman visual system. Finally, experimental results confirm that the proposed\nmethod shows consistent improvement of the performance on low-light datasets.",
"title": "NOD: Taking a Closer Look at Detection under Extreme Low-Light Conditions with Night Object Detection Dataset",
"url": "http://arxiv.org/abs/2110.10364v1"
} | null | null | new_dataset | admin | null | false | null | 451559c5-216a-4132-9336-76500462eacc | null | Validated | 2023-10-04 15:19:51.890271 | {
"text_length": 1580
} | 0new_dataset
|
TITLE: Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research
ABSTRACT: Benchmark datasets play a central role in the organization of machine
learning research. They coordinate researchers around shared research problems
and serve as a measure of progress towards shared goals. Despite the
foundational role of benchmarking practices in this field, relatively little
attention has been paid to the dynamics of benchmark dataset use and reuse,
within or across machine learning subcommunities. In this paper, we dig into
these dynamics. We study how dataset usage patterns differ across machine
learning subcommunities and across time from 2015-2020. We find increasing
concentration on fewer and fewer datasets within task communities, significant
adoption of datasets from other tasks, and concentration across the field on
datasets that have been introduced by researchers situated within a small
number of elite institutions. Our results have implications for scientific
evaluation, AI ethics, and equity/access within the field. | {
"abstract": "Benchmark datasets play a central role in the organization of machine\nlearning research. They coordinate researchers around shared research problems\nand serve as a measure of progress towards shared goals. Despite the\nfoundational role of benchmarking practices in this field, relatively little\nattention has been paid to the dynamics of benchmark dataset use and reuse,\nwithin or across machine learning subcommunities. In this paper, we dig into\nthese dynamics. We study how dataset usage patterns differ across machine\nlearning subcommunities and across time from 2015-2020. We find increasing\nconcentration on fewer and fewer datasets within task communities, significant\nadoption of datasets from other tasks, and concentration across the field on\ndatasets that have been introduced by researchers situated within a small\nnumber of elite institutions. Our results have implications for scientific\nevaluation, AI ethics, and equity/access within the field.",
"title": "Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research",
"url": "http://arxiv.org/abs/2112.01716v1"
} | null | null | no_new_dataset | admin | null | false | null | d36c09a4-4dfe-4ca2-b727-6965f322da1b | null | Validated | 2023-10-04 15:19:51.889419 | {
"text_length": 1075
} | 1no_new_dataset
|
TITLE: BioImageLoader: Easy Handling of Bioimage Datasets for Machine Learning
ABSTRACT: BioImageLoader (BIL) is a Python library that handles bioimage datasets for
machine learning applications, easing simple workflows and enabling complex
ones. BIL attempts to wrap the numerous and varied bioimage datasets in
unified interfaces, to easily concatenate, perform image augmentation, and
batch-load them. By acting at a per experimental dataset level, it enables both
a high level of customization and a comparison across experiments. Here we
present the library and show some applications it enables, including retraining
published deep learning architectures and evaluating their versatility in a
leave-one-dataset-out fashion. | {
"abstract": "BioImageLoader (BIL) is a python library that handles bioimage datasets for\nmachine learning applications, easing simple workflows and enabling complex\nones. BIL attempts to wrap the numerous and varied bioimages datasets in\nunified interfaces, to easily concatenate, perform image augmentation, and\nbatch-load them. By acting at a per experimental dataset level, it enables both\na high level of customization and a comparison across experiments. Here we\npresent the library and show some application it enables, including retraining\npublished deep learning architectures and evaluating their versatility in a\nleave-one-dataset-out fashion.",
"title": "BioImageLoader: Easy Handling of Bioimage Datasets for Machine Learning",
"url": "http://arxiv.org/abs/2303.02158v1"
} | null | null | no_new_dataset | admin | null | false | null | 0272e3a7-b0bd-4951-8221-adedfd239e95 | null | Validated | 2023-10-04 15:19:51.880860 | {
"text_length": 746
} | 1no_new_dataset
|
TITLE: Synthetic Distracted Driving (SynDD2) dataset for analyzing distracted behaviors and various gaze zones of a driver
ABSTRACT: This article presents a synthetic distracted driving (SynDD2 - a continuum of
SynDD1) dataset for machine learning models to detect and analyze drivers'
various distracted behavior and different gaze zones. We collected the data in
a stationary vehicle using three in-vehicle cameras positioned at locations: on
the dashboard, near the rearview mirror, and on the top right-side window
corner. The dataset contains two activity types: distracted activities and gaze
zones for each participant, and each activity type has two sets: without
appearance blocks and with appearance blocks such as wearing a hat or
sunglasses. The order and duration of each activity for each participant are
random. In addition, the dataset contains manual annotations for each activity,
having its start and end time annotated. Researchers could use this dataset to
evaluate the performance of machine learning algorithms to classify various
distracting activities and gaze zones of drivers. | {
"abstract": "This article presents a synthetic distracted driving (SynDD2 - a continuum of\nSynDD1) dataset for machine learning models to detect and analyze drivers'\nvarious distracted behavior and different gaze zones. We collected the data in\na stationary vehicle using three in-vehicle cameras positioned at locations: on\nthe dashboard, near the rearview mirror, and on the top right-side window\ncorner. The dataset contains two activity types: distracted activities and gaze\nzones for each participant, and each activity type has two sets: without\nappearance blocks and with appearance blocks such as wearing a hat or\nsunglasses. The order and duration of each activity for each participant are\nrandom. In addition, the dataset contains manual annotations for each activity,\nhaving its start and end time annotated. Researchers could use this dataset to\nevaluate the performance of machine learning algorithms to classify various\ndistracting activities and gaze zones of drivers.",
"title": "Synthetic Distracted Driving (SynDD2) dataset for analyzing distracted behaviors and various gaze zones of a driver",
"url": "http://arxiv.org/abs/2204.08096v3"
} | null | null | new_dataset | admin | null | false | null | 5c59b8de-ad72-45e6-9363-13371aef5b61 | null | Validated | 2023-10-04 15:19:51.887013 | {
"text_length": 1120
} | 0new_dataset
|
TITLE: Realistic Large-Scale Fine-Depth Dehazing Dataset from 3D Videos
ABSTRACT: Image dehazing is one of the important and popular topics in computer vision
and machine learning. A real-time dehazing method with reliable
performance is highly desired for many applications such as autonomous driving,
security surveillance, etc. While recent learning-based methods require
datasets containing pairs of hazy images and clean ground truth, it is
impossible to capture them in real scenes. Many existing works sidestep this
difficulty by generating hazy images, rendering the haze from depth on common
RGBD datasets using the haze imaging model. However, there is still a gap
between the synthetic datasets and real hazy images as large datasets with
high-quality depth are mostly indoor and depth maps for outdoor are imprecise.
In this paper, we complement the existing datasets with a new, large, and
diverse dehazing dataset containing real outdoor scenes from High-Definition
(HD) 3D movies. We select a large number of high-quality frames of real outdoor
scenes and render haze on them using depth from stereo. Our dataset is clearly
more realistic and more diversified with better visual quality than existing
ones. More importantly, we demonstrate that using this dataset greatly improves
the dehazing performance on real scenes. In addition to the dataset, we also
evaluate a series of state-of-the-art methods on the proposed benchmarking
datasets. | {
"abstract": "Image dehazing is one of the important and popular topics in computer vision\nand machine learning. A reliable real-time dehazing method with reliable\nperformance is highly desired for many applications such as autonomous driving,\nsecurity surveillance, etc. While recent learning-based methods require\ndatasets containing pairs of hazy images and clean ground truth, it is\nimpossible to capture them in real scenes. Many existing works compromise this\ndifficulty to generate hazy images by rendering the haze from depth on common\nRGBD datasets using the haze imaging model. However, there is still a gap\nbetween the synthetic datasets and real hazy images as large datasets with\nhigh-quality depth are mostly indoor and depth maps for outdoor are imprecise.\nIn this paper, we complement the existing datasets with a new, large, and\ndiverse dehazing dataset containing real outdoor scenes from High-Definition\n(HD) 3D movies. We select a large number of high-quality frames of real outdoor\nscenes and render haze on them using depth from stereo. Our dataset is clearly\nmore realistic and more diversified with better visual quality than existing\nones. More importantly, we demonstrate that using this dataset greatly improves\nthe dehazing performance on real scenes. In addition to the dataset, we also\nevaluate a series state of the art methods on the proposed benchmarking\ndatasets.",
"title": "Realistic Large-Scale Fine-Depth Dehazing Dataset from 3D Videos",
"url": "http://arxiv.org/abs/2004.08554v3"
} | null | null | new_dataset | admin | null | false | null | 45213a43-8253-4790-a349-8605f5f423a7 | null | Validated | 2023-10-04 15:19:51.901007 | {
"text_length": 1482
} | 0new_dataset
|
TITLE: RaidaR: A Rich Annotated Image Dataset of Rainy Street Scenes
ABSTRACT: We introduce RaidaR, a rich annotated image dataset of rainy street scenes,
to support autonomous driving research. The new dataset contains the largest
number of rainy images (58,542) to date, 5,000 of which provide semantic
segmentations and 3,658 provide object instance segmentations. The RaidaR
images cover a wide range of realistic rain-induced artifacts, including fog,
droplets, and road reflections, which can effectively augment existing street
scene datasets to improve data-driven machine perception during rainy weather.
To facilitate efficient annotation of a large volume of images, we develop a
semi-automatic scheme combining manual segmentation and an automated processing
akin to cross validation, resulting in 10-20 fold reduction on annotation time.
We demonstrate the utility of our new dataset by showing how data augmentation
with RaidaR can elevate the accuracy of existing segmentation algorithms. We
also present a novel unpaired image-to-image translation algorithm for
adding/removing rain artifacts, which directly benefits from RaidaR. | {
"abstract": "We introduce RaidaR, a rich annotated image dataset of rainy street scenes,\nto support autonomous driving research. The new dataset contains the largest\nnumber of rainy images (58,542) to date, 5,000 of which provide semantic\nsegmentations and 3,658 provide object instance segmentations. The RaidaR\nimages cover a wide range of realistic rain-induced artifacts, including fog,\ndroplets, and road reflections, which can effectively augment existing street\nscene datasets to improve data-driven machine perception during rainy weather.\nTo facilitate efficient annotation of a large volume of images, we develop a\nsemi-automatic scheme combining manual segmentation and an automated processing\nakin to cross validation, resulting in 10-20 fold reduction on annotation time.\nWe demonstrate the utility of our new dataset by showing how data augmentation\nwith RaidaR can elevate the accuracy of existing segmentation algorithms. We\nalso present a novel unpaired image-to-image translation algorithm for\nadding/removing rain artifacts, which directly benefits from RaidaR.",
"title": "RaidaR: A Rich Annotated Image Dataset of Rainy Street Scenes",
"url": "http://arxiv.org/abs/2104.04606v3"
} | null | null | new_dataset | admin | null | false | null | 2811a11c-72ae-43e1-bf62-d086501ece10 | null | Validated | 2023-10-04 15:19:51.895097 | {
"text_length": 1163
} | 0new_dataset
|
TITLE: Shopping Queries Dataset: A Large-Scale ESCI Benchmark for Improving Product Search
ABSTRACT: Improving the quality of search results can significantly enhance users
experience and engagement with search engines. In spite of several recent
advancements in the fields of machine learning and data mining, correctly
classifying items for a particular user search query has been a long-standing
challenge, which still has a large room for improvement. This paper introduces
the "Shopping Queries Dataset", a large dataset of difficult Amazon search
queries and results, publicly released with the aim of fostering research in
improving the quality of search results. The dataset contains around 130
thousand unique queries and 2.6 million manually labeled (query,product)
relevance judgements. The dataset is multilingual with queries in English,
Japanese, and Spanish. The Shopping Queries Dataset is being used in one of the
KDDCup'22 challenges. In this paper, we describe the dataset and present three
evaluation tasks along with baseline results: (i) ranking the results list,
(ii) classifying product results into relevance categories, and (iii)
identifying substitute products for a given query. We anticipate that this data
will become the gold standard for future research in the topic of product
search. | {
"abstract": "Improving the quality of search results can significantly enhance users\nexperience and engagement with search engines. In spite of several recent\nadvancements in the fields of machine learning and data mining, correctly\nclassifying items for a particular user search query has been a long-standing\nchallenge, which still has a large room for improvement. This paper introduces\nthe \"Shopping Queries Dataset\", a large dataset of difficult Amazon search\nqueries and results, publicly released with the aim of fostering research in\nimproving the quality of search results. The dataset contains around 130\nthousand unique queries and 2.6 million manually labeled (query,product)\nrelevance judgements. The dataset is multilingual with queries in English,\nJapanese, and Spanish. The Shopping Queries Dataset is being used in one of the\nKDDCup'22 challenges. In this paper, we describe the dataset and present three\nevaluation tasks along with baseline results: (i) ranking the results list,\n(ii) classifying product results into relevance categories, and (iii)\nidentifying substitute products for a given query. We anticipate that this data\nwill become the gold standard for future research in the topic of product\nsearch.",
"title": "Shopping Queries Dataset: A Large-Scale ESCI Benchmark for Improving Product Search",
"url": "http://arxiv.org/abs/2206.06588v1"
} | null | null | new_dataset | admin | null | false | null | 65b92422-b8d8-4ef9-8790-b055567e87c6 | null | Validated | 2023-10-04 15:19:51.885772 | {
"text_length": 1334
} | 0new_dataset
|
TITLE: A Survey on RGB-D Datasets
ABSTRACT: RGB-D data is essential for solving many problems in computer vision.
Hundreds of public RGB-D datasets containing various scenes, such as indoor,
outdoor, aerial, driving, and medical, have been proposed. These datasets are
useful for different applications and are fundamental for addressing classic
computer vision tasks, such as monocular depth estimation. This paper reviewed
and categorized image datasets that include depth information. We gathered 203
datasets that contain accessible data and grouped them into three categories:
scene/objects, body, and medical. We also provided an overview of the different
types of sensors, depth applications, and we examined trends and future
directions of the usage and creation of datasets containing depth data, and how
they can be applied to investigate the development of generalizable machine
learning models in the monocular depth estimation field. | {
"abstract": "RGB-D data is essential for solving many problems in computer vision.\nHundreds of public RGB-D datasets containing various scenes, such as indoor,\noutdoor, aerial, driving, and medical, have been proposed. These datasets are\nuseful for different applications and are fundamental for addressing classic\ncomputer vision tasks, such as monocular depth estimation. This paper reviewed\nand categorized image datasets that include depth information. We gathered 203\ndatasets that contain accessible data and grouped them into three categories:\nscene/objects, body, and medical. We also provided an overview of the different\ntypes of sensors, depth applications, and we examined trends and future\ndirections of the usage and creation of datasets containing depth data, and how\nthey can be applied to investigate the development of generalizable machine\nlearning models in the monocular depth estimation field.",
"title": "A Survey on RGB-D Datasets",
"url": "http://arxiv.org/abs/2201.05761v2"
} | null | null | no_new_dataset | admin | null | false | null | 9bdc9f69-2e20-45d5-809b-16d14b380ebd | null | Validated | 2023-10-04 15:19:51.888844 | {
"text_length": 963
} | 1no_new_dataset
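Since the survey singles out monocular depth estimation as a canonical use of depth data, a short sketch of the standard evaluation step may help. The metrics below (RMSE and the delta < 1.25 threshold accuracy) follow common practice; nothing here is specific to any one of the 203 surveyed datasets.

```python
# A minimal sketch of depth-estimation evaluation, assuming depth maps as
# NumPy arrays in meters. Synthetic data stands in for a real RGB-D sample.
import numpy as np

def depth_metrics(pred, gt, min_depth=1e-3):
    """RMSE and threshold accuracy between predicted and ground-truth depth."""
    mask = gt > min_depth            # ignore pixels with no valid depth reading
    pred, gt = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)   # fraction of pixels within 25% of truth
    return rmse, delta1

gt = np.random.uniform(0.5, 10.0, size=(480, 640))
pred = gt * np.random.uniform(0.9, 1.1, size=gt.shape)  # a noisy "prediction"
print(depth_metrics(pred, gt))
```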
|
TITLE: Teaching Key Machine Learning Principles Using Anti-learning Datasets
ABSTRACT: Much of the teaching of machine learning focuses on iterative hill-climbing
approaches and the use of local knowledge to gain information leading to local
or global maxima. In this paper we advocate the teaching of alternative methods
of generalising to the best possible solution, including a method called
anti-learning. By using simple teaching methods, students can achieve a deeper
understanding of the importance of validation on data excluded from the
training process and that each problem requires its own methods to solve. We
also exemplify the requirement to train a model using sufficient data by
showing that different granularities of cross-validation can yield very
different results. | {
"abstract": "Much of the teaching of machine learning focuses on iterative hill-climbing\napproaches and the use of local knowledge to gain information leading to local\nor global maxima. In this paper we advocate the teaching of alternative methods\nof generalising to the best possible solution, including a method called\nanti-learning. By using simple teaching methods, students can achieve a deeper\nunderstanding of the importance of validation on data excluded from the\ntraining process and that each problem requires its own methods to solve. We\nalso exemplify the requirement to train a model using sufficient data by\nshowing that different granularities of cross-validation can yield very\ndifferent results.",
"title": "Teaching Key Machine Learning Principles Using Anti-learning Datasets",
"url": "http://arxiv.org/abs/2011.10660v1"
} | null | null | no_new_dataset | admin | null | false | null | 6ea149f7-3695-4710-8671-d85b05bee738 | null | Validated | 2023-10-04 15:19:51.897021 | {
"text_length": 803
} | 1no_new_dataset
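The point that different granularities of cross-validation can yield very different results is easy to demonstrate. Below is a minimal sketch using a stand-in dataset and model, not the paper's anti-learning datasets.

```python
# Comparing a coarse and a fine cross-validation granularity on the same
# model and data. The dataset here is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=60, n_features=5, random_state=0)
model = KNeighborsClassifier(n_neighbors=3)

# Coarse granularity: 2-fold CV trains on half the data at a time.
coarse = cross_val_score(model, X, y, cv=2).mean()
# Fine granularity: leave-one-out trains on all but a single sample.
fine = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()

print(f"2-fold accuracy:        {coarse:.3f}")
print(f"leave-one-out accuracy: {fine:.3f}")
```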
|
TITLE: EMAHA-DB1: A New Upper Limb sEMG Dataset for Classification of Activities of Daily Living
ABSTRACT: In this paper, we present electromyography analysis of human activity -
database 1 (EMAHA-DB1), a novel dataset of multi-channel surface
electromyography (sEMG) signals to evaluate the activities of daily living
(ADL). The dataset is acquired from 25 able-bodied subjects while performing 22
activities categorised according to the functional arm activity behavioral system
(FAABOS) (3 - full hand gestures, 6 - open/close office drawer, 8 - grasping and
holding of small office objects, 2 - flexion and extension of finger movements,
2 - writing, and 1 - rest). The sEMG data is measured by a set of five Noraxon
Ultium wireless sEMG sensors with Ag/AgCl electrodes placed on a human hand.
The dataset is analyzed for hand activity recognition classification
performance. The classification is performed using four state-of-the-art machine
learning classifiers, including Random Forest (RF), Fine K-Nearest Neighbour
(KNN), Ensemble KNN (sKNN) and Support Vector Machine (SVM) with seven
combinations of time-domain and frequency-domain feature sets. The
state-of-the-art classification accuracy on the five FAABOS categories is 83.21%,
achieved by the SVM classifier with a third-order polynomial kernel using the
energy and autoregressive feature set ensemble. The classification accuracy
on the 22-class hand activities is 75.39%, achieved by the same SVM classifier
with the log moments in frequency domain (LMF) feature, modified LMF, time-domain
statistical (TDS) feature, spectral band powers (SBP), channel cross
correlation, and local binary patterns (LBP) feature set ensemble. The analysis illustrates
the technical challenges addressed by the dataset. The developed dataset can be
used as a benchmark for various classification methods as well as for sEMG
signal analysis corresponding to ADL and for the development of prosthetics and
other wearable robotics. | {
"abstract": "In this paper, we present electromyography analysis of human activity -\ndatabase 1 (EMAHA-DB1), a novel dataset of multi-channel surface\nelectromyography (sEMG) signals to evaluate the activities of daily living\n(ADL). The dataset is acquired from 25 able-bodied subjects while performing 22\nactivities categorised according to functional arm activity behavioral system\n(FAABOS) (3 - full hand gestures, 6 - open/close office draw, 8 - grasping and\nholding of small office objects, 2 - flexion and extension of finger movements,\n2 - writing and 1 - rest). The sEMG data is measured by a set of five Noraxon\nUltium wireless sEMG sensors with Ag/Agcl electrodes placed on a human hand.\nThe dataset is analyzed for hand activity recognition classification\nperformance. The classification is performed using four state-ofthe-art machine\nlearning classifiers, including Random Forest (RF), Fine K-Nearest Neighbour\n(KNN), Ensemble KNN (sKNN) and Support Vector Machine (SVM) with seven\ncombinations of time domain and frequency domain feature sets. The\nstate-of-theart classification accuracy on five FAABOS categories is 83:21% by\nusing the SVM classifier with the third order polynomial kernel using energy\nfeature and auto regressive feature set ensemble. The classification accuracy\non 22 class hand activities is 75:39% by the same SVM classifier with the log\nmoments in frequency domain (LMF) feature, modified LMF, time domain\nstatistical (TDS) feature, spectral band powers (SBP), channel cross\ncorrelation and local binary patterns (LBP) set ensemble. The analysis depicts\nthe technical challenges addressed by the dataset. The developed dataset can be\nused as a benchmark for various classification methods as well as for sEMG\nsignal analysis corresponding to ADL and for the development of prosthetics and\nother wearable robotics.",
"title": "EMAHA-DB1: A New Upper Limb sEMG Dataset for Classification of Activities of Daily Living",
"url": "http://arxiv.org/abs/2301.03325v1"
} | null | null | new_dataset | admin | null | false | null | ea316272-a6f6-41cc-950f-e538db12a16a | null | Validated | 2023-10-04 15:19:51.881606 | {
"text_length": 1960
} | 0new_dataset
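A minimal sketch of the reported classification setup, an SVM with a third-order polynomial kernel over windowed sEMG features, is given below. The single energy feature and the synthetic data are illustrative stand-ins for EMAHA-DB1's recordings and feature ensembles.

```python
# A sketch of SVM classification of windowed sEMG features. Only one simple
# time-domain feature (per-channel signal energy) is used here; the paper
# evaluates far richer feature set ensembles.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_channels, window_len = 5, 400         # five wireless sensors, as in the paper
labels = rng.permutation(np.repeat(np.arange(22), 10))  # 22 balanced classes
emg = rng.normal(size=(labels.size, n_channels, window_len))  # fake sEMG windows

def energy_features(windows):
    """Per-channel signal energy, one simple time-domain feature."""
    return (windows ** 2).sum(axis=2)

X = energy_features(emg)
clf = SVC(kernel="poly", degree=3)      # third-order polynomial kernel
print(cross_val_score(clf, X, labels, cv=5).mean())
```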
|