| sha | text | id | tags | created_at | metadata | last_modified |
|---|---|---|---|---|---|---|
81dd00f3ce6d26dd7b103af91ef0013a535caacd | NbAiLab/NST | [
"license:apache-2.0",
"region:us"
] | 2022-04-20T11:06:56+00:00 | {"license": "apache-2.0"} | 2022-08-12T13:09:29+00:00 |
|
72cac22487c265b0b27b424f561f0f3659c5746d |
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate.
The evaluation objective is a text classification task: given a climate-related claim and evidence, predict whether the evidence is related to the claim. | mwong/climatetext-evidence-related-evaluation | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:us"
] | 2022-04-20T11:18:14+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_text"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"]} | 2022-10-25T09:08:46+00:00 |
3140c1105204085e7461bcb8fd2301e9d4be9611 | Peihao/CURE-Pretrain | [
"license:lgpl",
"region:us"
] | 2022-04-20T12:25:29+00:00 | {"license": "lgpl"} | 2022-04-21T15:07:25+00:00 |
|
8dec0f04d38cb2d2a2b83a72ac88df63c4c4e6da | crisdev/comentarios | [
"license:mit",
"region:us"
] | 2022-04-20T19:07:39+00:00 | {"license": "mit"} | 2022-05-06T13:18:49+00:00 |
|
eec8b7881f5b1c5fe586b476fce67ba9f93fdcbe | daniel-dona/tfg-voice-2 | [
"license:cc-by-sa-3.0",
"region:us"
] | 2022-04-20T21:23:17+00:00 | {"license": "cc-by-sa-3.0"} | 2022-04-20T21:26:10+00:00 |
|
61c95318fd71c55b6ba355d76253254615f387ec |
# Dataset Card for WANLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [WANLI homepage](https://wanli.allenai.org/)
- **Repository:** [Github repo](https://github.com/alisawuffles/wanli)
- **Paper:** [arXiv](https://arxiv.org/abs/2201.05955)
- **Point of Contact:** [Alisa Liu](mailto:alisaliu@cs.washington.edu)
### Dataset Summary
WANLI (**W**orker-**A**I Collaboration for **NLI**) is a collection of 108K English sentence pairs for the task of natural language inference (NLI).
Each example is created by first identifying a "pocket" of examples in [MultiNLI (Williams et al., 2018)](https://cims.nyu.edu/~sbowman/multinli/) that share a challenging reasoning pattern, then instructing GPT-3 to write a new example with the same pattern.
The set of generated examples are automatically filtered to contain those most likely to aid model training, and finally labeled and optionally revised by human annotators.
WANLI presents unique empirical strengths compared to existing NLI datasets. Remarkably, training a model on WANLI instead of MultiNLI (which is 4 times larger) improves performance on seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for natural language inference, which determines whether a premise entails (i.e., implies the truth of) a hypothesis, both expressed in natural language. Success on this task is typically measured by achieving a high accuracy. A RoBERTa-large model currently achieves 75.40%.
Models trained on NLI are often adapted to other downstream tasks, and NLI data can be mixed with other sources of supervision.
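As a minimal sketch of such downstream use, the snippet below runs NLI inference with a 🤗 Transformers pipeline; the checkpoint name is a placeholder for any NLI model fine-tuned on WANLI, not an official release:
```python
from transformers import pipeline

# Placeholder checkpoint: substitute any NLI model fine-tuned on WANLI.
nli = pipeline("text-classification", model="your-org/roberta-large-wanli")

result = nli({
    "text": "It is a tribute to the skill of the coach that the team has been able to compete at the highest level.",  # premise
    "text_pair": "The coach is a good coach.",  # hypothesis
})
print(result)  # label/score for the premise-hypothesis pair
```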
### Languages
The dataset consists of English examples generated by GPT-3 and revised by English-speaking crowdworkers located in the United States.
## Dataset Structure
### Data Instances
Here is an example of an NLI instance in `data/wanli/train.jsonl` or `data/wanli/test.jsonl`.
```
{
"id": 225295,
"premise": "It is a tribute to the skill of the coach that the team has been able to compete at the highest level.",
"hypothesis": "The coach is a good coach.",
"gold": "entailment",
"genre": "generated",
"pairID": "171408"
}
```
- `id`: unique identifier for the example
- `premise`: a piece of text
- `hypothesis`: a piece of text that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `gold`: one of `entailment`, `neutral`, and `contradiction`
- `genre`: one of `generated` and `generated_revised`, depending on whether the example was revised by annotators
- `pairID`: id of seed MNLI example, corresponding to those in `data/mnli/train.jsonl`
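For reference, here is a minimal loading sketch with the 🤗 `datasets` library, assuming the hub id `alisawuffles/WANLI` under which this card is published:
```python
from datasets import load_dataset

# Hub id taken from this card; splits follow the train/test layout described below.
wanli = load_dataset("alisawuffles/WANLI")
example = wanli["train"][0]
print(example["premise"], "->", example["hypothesis"], "|", example["gold"])
```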
We also release the raw annotations for each worker, which can be found in `data/wanli/anonymized_annotations.jsonl`.
```
"WorkerId": "EUJ",
"id": 271560,
"nearest_neighbors": [
309783,
202988,
145310,
98030,
148759
],
"premise": "I don't know what I'd do without my cat. He is my only friend.",
"hypothesis": "I would be alone.",
"label": "neutral",
"revised_premise": "I don't know what I'd do without my cat. He is my only friend.",
"revised_hypothesis": "I would be alone without my cat.",
"gold": "entailment",
"revised": true
```
- `WorkerId`: a unique identification for each crowdworker (NOT the real worker ID from AMT)
- `id`: id of generated example
- `nearest_neighbors`: ordered ids of the group of MNLI nearest neighbors that were used as in-context examples, where the first one is the seed ambiguous MNLI example. MNLI ids correspond to those in `mnli/train.jsonl`.
- `premise`: GPT-3 generated premise
- `hypothesis`: GPT-3 generated hypothesis
- `label`: the shared label of the in-context examples, which is the "intended" label for this generation
- `revised_premise`: premise after human review
- `revised_hypothesis`: hypothesis after human review
- `gold`: annotator-assigned gold label for the (potentially revised) example
- `revised`: whether the example was revised
### Data Splits
The dataset is randomly split into a *train* and *test* set.
| | train | test |
|-------------------------|------:|-----:|
| Examples | 102885| 5000|
## Dataset Creation
### Curation Rationale
A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. On the other hand, there has been remarkable progress in open-ended text generation based on massive language models. We create WANLI to demonstrate the effectiveness of an approach that leverages the best of both worlds: a language model's ability to efficiently generate diverse examples, and a human's ability to revise the examples for quality and assign a gold label.
### Source Data
#### Initial Data Collection and Normalization
Our pipeline starts with an existing dataset, MultiNLI (Williams et al., 2018). We use dataset cartography from [Swayamdipta et al. (2020)](https://aclanthology.org/2020.emnlp-main.746/) to automatically identify pockets of examples that demonstrate challenging reasoning patterns relative to a trained model. Using each group as a set of in-context examples, we leverage a pretrained language model to *generate new examples* likely to have the same pattern. We then automatically filter generations to keep those that are most likely to aid model learning. Finally, we validate the generated examples by subjecting them to human review, where crowdworkers assign a gold label and (optionally) revise for quality.
#### Who are the source language producers?
The GPT-3 Curie model generated examples which were then revised and labeled by crowdworkers on Amazon Mechanical Turk.
Workers were paid $0.12 for each example that they annotated. At the end of data collection, we aggregated the earnings and time spent of each crowdworker, and found that the median hourly rate was $22.72, with 85% of workers being paid over the $15/hour target.
### Annotations
#### Annotation process
Given an unlabeled example, annotators are asked to optionally revise it for quality (while preserving the intended meaning as much as possible through minimal revisions), and then assign a label. Alternatively, if an example would require a great deal of revision to fix *or* if it could be perceived as offensive, they were asked to discard it.
Details about instructions, guidelines, and instructional examples can be found in Appendix D of the paper.
Crowdworkers annotate a total of 118,724 examples, with two distinct workers reviewing each example.
For examples that both annotators labeled without revision, annotators achieved a Cohen's Kappa score of 0.60, indicating substantial agreement.
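As an illustration only (not the authors' evaluation code), agreement of this kind can be computed with scikit-learn's `cohen_kappa_score`:
```python
from sklearn.metrics import cohen_kappa_score

# Toy label sequences from two annotators over the same five examples.
annotator_a = ["entailment", "neutral", "contradiction", "entailment", "neutral"]
annotator_b = ["entailment", "neutral", "neutral", "entailment", "neutral"]

print(cohen_kappa_score(annotator_a, annotator_b))  # 1.0 would be perfect agreement
```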
#### Who are the annotators?
Annotators were required to have a HIT approval rate of 98%, a total of 10,000 approved HITs, and be located in the United States.
300 Turkers took our qualification test, of which 69 passed. Turkers who were later found to produce extremely careless annotations were removed from the qualification list (and oftentimes, their annotations were discarded, though they were still paid for their work). The number of workers who contributed to the final dataset is 62.
### Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed to explore the potential of worker-AI collaboration for dataset curation, train more robust NLI models, and provide more challenging evaluation of existing systems.
### Discussion of Biases
Text generated from large pretrained language models is susceptible to perpetuating social harms and containing toxic language.
To partially remedy this, we ask annotators to discard any examples that may be perceived as offensive.
Nonetheless, it is possible that harmful examples (especially if they contain subtle biases) may have been missed by annotators and included in the final dataset.
## Additional Information
### Dataset Curators
WANLI was developed by Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi from the [University of Washington](https://www.cs.washington.edu/) and [AI2](https://allenai.org/).
### Citation Information
```
@misc{liu-etal-2022-wanli,
title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
author = "Liu, Alisa and
Swayamdipta, Swabha and
Smith, Noah A. and
Choi, Yejin",
month = jan,
year = "2022",
url = "https://arxiv.org/pdf/2201.05955",
}
``` | alisawuffles/WANLI | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:crowdsourced",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2201.05955",
"region:us"
] | 2022-04-20T23:57:25+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "pretty_name": "WANLI"} | 2022-11-21T17:31:56+00:00 |
7a13ba87386bd8c9083ff858944a5f516e43f939 |
# Dataset Card for Corpus of Diverse Styles
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
## Disclaimer
I am not the original author of the paper that presents the Corpus of Diverse Styles. I uploaded the dataset to HuggingFace as a convenience.
## Dataset Description
- **Homepage:** http://style.cs.umass.edu/
- **Repository:** https://github.com/martiansideofthemoon/style-transfer-paraphrase
- **Paper:** https://arxiv.org/abs/2010.05700
### Dataset Summary
A new benchmark dataset that contains 15M sentences from 11 diverse styles.
To create CDS, we obtain data from existing academic research datasets and public APIs or online collections like Project Gutenberg. We choose styles that are easy for human readers to identify at a sentence level (e.g., Tweets or Biblical text). While prior benchmarks involve a transfer between two styles, CDS has 110 potential transfer directions.
### Citation Information
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
booktitle={Empirical Methods in Natural Language Processing},
year={2020},
title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
``` | billray110/corpus-of-diverse-styles | [
"task_categories:text-classification",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"arxiv:2010.05700",
"region:us"
] | 2022-04-21T00:13:59+00:00 | {"annotations_creators": [], "language_creators": ["found"], "language": [], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "Corpus of Diverse Styles"} | 2022-10-21T23:52:53+00:00 |
eb798af6f91a5305eb0f18aeb15378cc3c91b421 | The dataset is a mix of topics from 3 forums: "Hotline", "Kids Psychology and Development", "Everything Else".
It contains the topic name (Topic), the start post (message), and the post's unique id (Message_Id). | Kateryna/eva_ru_forum_headlines | [
"region:us"
] | 2022-04-21T01:05:25+00:00 | {} | 2022-04-21T01:17:55+00:00 |
ef485238c1494962da9f8896bfacbcf3a0747c73 |
## Dataset overview
This is a dataset that contains restaurant reviews gathered in 2019 using a web-scraping tool in Python. Reviews of restaurant visits and restaurant features were collected for Dutch restaurants.
The dataset is formatted using the 🤗[DatasetDict](https://huggingface.co/docs/datasets/index) format and contains the following indices:
- train, 116693 records
- test, 14587 records
- validation, 14587 records
The dataset holds information at both the restaurant level and the review level and contains the following features:
- [restaurant_ID] > unique restaurant ID
- [restaurant_review_ID] > unique review ID
- [michelin_label] > indicator whether this restaurant was awarded one (or more) Michelin stars prior to 2020
- [score_total] > restaurant level total score
- [score_food] > restaurant level food score
- [score_service] > restaurant level service score
- [score_decor] > restaurant level decor score
- [fame_reviewer] > label for how often a reviewer has posted a restaurant review
- [reviewscore_food] > review level food score
- [reviewscore_service] > review level service score
- [reviewscore_ambiance] > review level ambiance score
- [reviewscore_waiting] > review level waiting score
- [reviewscore_value] > review level value for money score
- [reviewscore_noise] > review level noise score
- [review_text] > the full review that was written by the reviewer for this restaurant
- [review_length] > total length of the review (tokens)
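A minimal loading sketch with the 🤗 `datasets` library, assuming the hub id `cmotions/NL_restaurant_reviews` under which this card is published:
```python
from datasets import load_dataset

reviews = load_dataset("cmotions/NL_restaurant_reviews")
print(reviews)  # DatasetDict with train/test/validation splits
print(reviews["train"][0]["review_text"])  # full text of the first review
```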
## Purpose
The restaurant reviews submitted by visitor can be used to model the restaurant scores (food, ambiance etc) or used to model Michelin star holders. In [this blog series](https://medium.com/broadhorizon-cmotions/natural-language-processing-for-predictive-purposes-with-r-cb65f009c12b) we used the review texts to predict next Michelin star restaurants, using R. | cmotions/NL_restaurant_reviews | [
"language:nl",
"text-classification",
"sentiment-analysis",
"region:us"
] | 2022-04-21T08:48:54+00:00 | {"language": ["nl"], "tags": ["text-classification", "sentiment-analysis"], "datasets": ["train", "test", "validation"]} | 2022-04-21T10:20:02+00:00 |
ec205ab74f5244e1cf50c06c200832cd50493546 | # Dataset Card for [FrozenLake-v1] with slippery = False
| AntoineLB/FrozenLakeNotFrozen | [
"region:us"
] | 2022-04-21T08:53:07+00:00 | {} | 2022-04-26T06:40:20+00:00 |
d96c3ca050b694c3150bb53e6c6431f2144ce15a |
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate.
The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the claim is related to the evidence. | mwong/climatetext-climate_evidence-claim-related-evaluation | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:us"
] | 2022-04-21T08:55:30+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_text"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"]} | 2022-10-25T09:08:48+00:00 |
54b4fc98b56081e4ed5bfe6f76f68c8f52d4fc98 |
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate.
The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the evidence is related to the claim. | mwong/climatetext-claim-climate_evidence-related-evaluation | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:us"
] | 2022-04-21T09:07:08+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_text"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"]} | 2022-10-25T09:08:50+00:00 |
4f0fab91e806940ab0e95f573193eb79f5052c70 |
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate.
The evaluation objective is a text classification task: given climate-related evidence and a claim, predict whether the pair is related. | mwong/climatetext-evidence-claim-pair-related-evaluation | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:us"
] | 2022-04-21T09:16:15+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_text"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"]} | 2022-10-25T09:08:53+00:00 |
0961ace6703a76cb598eb4fcdb7f92227aa3c4b3 |
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed, and ready to evaluate.
The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the pair is related. | mwong/climatetext-claim-evidence-pair-related-evaluation | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:us"
] | 2022-04-21T09:26:24+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0", "gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|climate_text"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"]} | 2022-10-25T09:08:55+00:00 |
74ddfcfd50ea96a8ebc1456bf5d8e63eb840a084 |
# Fashion-Mnist-C (Corrupted Fashion-Mnist)
A corrupted Fashion-MNIST benchmark for testing the out-of-distribution robustness of computer vision models trained on Fashion-MNIST.
[Fashion-Mnist](https://github.com/zalandoresearch/fashion-mnist) is a drop-in replacement for MNIST and Fashion-Mnist-C is a corresponding drop-in replacement for [MNIST-C](https://arxiv.org/abs/1906.02337).
## Corruptions
The following corruptions are applied to the images, equivalently to MNIST-C:
- **Noise** (shot noise and impulse noise)
- **Blur** (glass and motion blur)
- **Transformations** (shear, scale, rotate, brightness, contrast, saturate, inverse)
In addition, we apply various **image flippings and turnings**: for fashion images, flipping the image does not change its label and still keeps it a valid image. However, we noticed that in the nominal fmnist dataset, most images are identically oriented (e.g. most shoes point to the left side). Thus, flipped images provide valid OOD inputs.
Most corruptions are applied at a randomly selected level of *severity*, such that some corrupted images are really hard to classify, whereas for others the corruption, while present, is subtle.
## Examples
| Turned | Blurred | Rotated | Noise | Noise | Turned |
| ------------- | ------------- | --------| --------- | -------- | --------- |
| <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_0.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_1.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_6.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_3.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_4.png" width="100" height="100"> | <img src="https://github.com/testingautomated-usi/fashion-mnist-c/raw/main/generated/png-examples/single_5.png" width="100" height="100"> |
## Citation
If you use this dataset, please cite the following paper:
```
@inproceedings{Weiss2022SimpleTechniques,
title={Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning},
author={Weiss, Michael and Tonella, Paolo},
booktitle={Proceedings of the 31th ACM SIGSOFT International Symposium on Software Testing and Analysis},
year={2022}
}
```
Also, you may want to cite FMNIST and MNIST-C.
## Credits
- Fashion-Mnist-C is inspired by Google's MNIST-C, and our repository is essentially a clone of theirs. See their [paper](https://arxiv.org/abs/1906.02337) and [repo](https://github.com/google-research/mnist-c).
- Find the nominal (i.e., non-corrupted) Fashion-MNIST dataset [here](https://github.com/zalandoresearch/fashion-mnist).
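A minimal loading sketch with the 🤗 `datasets` library (assuming the hub id `mweiss/fashion_mnist_corrupted` and Fashion-MNIST-style `image`/`label` fields):
```python
from datasets import load_dataset

fmnist_c = load_dataset("mweiss/fashion_mnist_corrupted")
sample = fmnist_c["train"][0]
print(sample["label"])  # class index, as in Fashion-MNIST
sample["image"].save("corrupted_example.png")  # PIL image of the corrupted item
```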
| mweiss/fashion_mnist_corrupted | [
"task_categories:image-classification",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|fashion_mnist",
"language:en",
"license:mit",
"arxiv:1906.02337",
"region:us"
] | 2022-04-21T10:34:02+00:00 | {"annotations_creators": ["expert-generated", "machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|fashion_mnist"], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "fashion-mnist-corrupted"} | 2023-03-19T11:45:31+00:00 |
65bc9e7e7353fff750326c9523e384701934e530 |
# Dataset Card for Visual Genome
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://homes.cs.washington.edu/~ranjay/visualgenome/
- **Repository:**
- **Paper:** https://doi.org/10.1007/s11263-016-0981-7
- **Leaderboard:**
- **Point of Contact:** ranjaykrishna [at] gmail [dot] com
### Dataset Summary
Visual Genome is a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language.
From the paper:
> Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked "What vehicle is the person riding?", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that "the person is riding a horse-drawn carriage."
Visual Genome has:
- 108,077 images
- 5.4 Million Region Descriptions
- 1.7 Million Visual Question Answers
- 3.8 Million Object Instances
- 2.8 Million Attributes
- 2.3 Million Relationships
From the paper:
> Our dataset contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and question-answer pairs to WordNet synsets.
### Dataset Preprocessing
### Supported Tasks and Leaderboards
### Languages
All annotations use English as the primary language.
## Dataset Structure
### Data Instances
When loading a specific configuration, users have to append a version-dependent suffix:
```python
from datasets import load_dataset
load_dataset("visual_genome", "region_description_v1.2.0")
```
#### region_descriptions
An example looks as follows.
```
{
"image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
"image_id": 1,
"url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
"width": 800,
"height": 600,
"coco_id": null,
"flickr_id": null,
"regions": [
{
"region_id": 1382,
"image_id": 1,
"phrase": "the clock is green in colour",
"x": 421,
"y": 57,
"width": 82,
"height": 139
},
...
]
}
```
#### objects
An example looks as follows.
```
{
"image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
"image_id": 1,
"url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
"width": 800,
"height": 600,
"coco_id": null,
"flickr_id": null,
"objects": [
{
"object_id": 1058498,
"x": 421,
"y": 91,
"w": 79,
"h": 339,
"names": [
"clock"
],
"synsets": [
"clock.n.01"
]
},
...
]
}
```
#### attributes
An example looks as follows.
```
{
"image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
"image_id": 1,
"url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
"width": 800,
"height": 600,
"coco_id": null,
"flickr_id": null,
"attributes": [
{
"object_id": 1058498,
"x": 421,
"y": 91,
"w": 79,
"h": 339,
"names": [
"clock"
],
"synsets": [
"clock.n.01"
],
"attributes": [
"green",
"tall"
]
},
...
}
]
```
#### relationships
An example looks as follows.
```
{
"image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
"image_id": 1,
"url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
"width": 800,
"height": 600,
"coco_id": null,
"flickr_id": null,
"relationships": [
{
"relationship_id": 15927,
"predicate": "ON",
"synsets": "['along.r.01']",
"subject": {
"object_id": 5045,
"x": 119,
"y": 338,
"w": 274,
"h": 192,
"names": [
"shade"
],
"synsets": [
"shade.n.01"
]
},
"object": {
"object_id": 5046,
"x": 77,
"y": 328,
"w": 714,
"h": 262,
"names": [
"street"
],
"synsets": [
"street.n.01"
]
}
}
...
}
]
```
#### question_answers
An example looks as follows.
```
{
"image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7F2F60698610>,
"image_id": 1,
"url": "https://cs.stanford.edu/people/rak248/VG_100K_2/1.jpg",
"width": 800,
"height": 600,
"coco_id": null,
"flickr_id": null,
"qas": [
{
"qa_id": 986768,
"image_id": 1,
"question": "What color is the clock?",
"answer": "Green.",
"a_objects": [],
"q_objects": []
},
...
}
]
```
### Data Fields
When loading a specific configuration, users have to append a version-dependent suffix:
```python
from datasets import load_dataset
load_dataset("visual_genome", "region_description_v1.2.0")
```
#### region_descriptions
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `image_id`: Unique numeric ID of the image.
- `url`: URL of source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: Id mapping to MSCOCO indexing.
- `flickr_id`: Id mapping to Flickr indexing.
- `regions`: Holds a list of `Region` dataclasses:
- `region_id`: Unique numeric ID of the region.
- `image_id`: Unique numeric ID of the image.
- `x`: x coordinate of bounding box's top left corner.
- `y`: y coordinate of bounding box's top left corner.
- `width`: Bounding box width.
- `height`: Bounding box height.
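As a usage sketch (illustrative only, using the configuration name and field names documented on this card), regions can be drawn onto the decoded image with Pillow:
```python
from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("visual_genome", "region_description_v1.2.0", split="train")
sample = ds[0]
image = sample["image"].copy()
draw = ImageDraw.Draw(image)
for region in sample["regions"][:5]:  # draw the first five region boxes
    x, y = region["x"], region["y"]
    draw.rectangle([x, y, x + region["width"], y + region["height"]], outline="red")
image.save("regions.png")
```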
#### objects
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `image_id`: Unique numeric ID of the image.
- `url`: URL of source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: Id mapping to MSCOCO indexing.
- `flickr_id`: Id mapping to Flickr indexing.
- `objects`: Holds a list of `Object` dataclasses:
- `object_id`: Unique numeric ID of the object.
- `x`: x coordinate of bounding box's top left corner.
- `y`: y coordinate of bounding box's top left corner.
- `w`: Bounding box width.
- `h`: Bounding box height.
- `names`: List of names associated with the object. This field can hold multiple values in the sense that multiple names are considered acceptable. For example: ['monitor', 'computer'] at https://cs.stanford.edu/people/rak248/VG_100K/3.jpg
- `synsets`: List of `WordNet synsets`.
#### attributes
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `image_id`: Unique numeric ID of the image.
- `url`: URL of source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: Id mapping to MSCOCO indexing.
- `flickr_id`: Id mapping to Flickr indexing.
- `attributes`: Holds a list of `Object` dataclasses:
- `object_id`: Unique numeric ID of the region.
- `x`: x coordinate of bounding box's top left corner.
- `y`: y coordinate of bounding box's top left corner.
- `w`: Bounding box width.
- `h`: Bounding box height.
- `names`: List of names associated with the object. This field can hold multiple values in the sense that multiple names are considered acceptable. For example: ['monitor', 'computer'] at https://cs.stanford.edu/people/rak248/VG_100K/3.jpg
- `synsets`: List of `WordNet synsets`.
- `attributes`: List of attributes associated with the object.
#### relationships
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `image_id`: Unique numeric ID of the image.
- `url`: URL of source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: Id mapping to MSCOCO indexing.
- `flickr_id`: Id mapping to Flickr indexing.
- `relationships`: Holds a list of `Relationship` dataclasses:
- `relationship_id`: Unique numeric ID of the object.
- `predicate`: Predicate defining relationship between a subject and an object.
- `synsets`: List of `WordNet synsets`.
- `subject`: Object dataclass. See subsection on `objects`.
- `object`: Object dataclass. See subsection on `objects`.
#### question_answers
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `image_id`: Unique numeric ID of the image.
- `url`: URL of source image.
- `width`: Image width.
- `height`: Image height.
- `coco_id`: Id mapping to MSCOCO indexing.
- `flickr_id`: Id mapping to Flickr indexing.
- `qas`: Holds a list of `Question-Answering` dataclasses:
- `qa_id`: Unique numeric ID of the question-answer pair.
- `image_id`: Unique numeric ID of the image.
- `question`: Question.
- `answer`: Answer.
- `q_objects`: List of object dataclass associated with `question` field. See subsection on `objects`.
- `a_objects`: List of object dataclass associated with `answer` field. See subsection on `objects`.
### Data Splits
All the data is contained in the training set.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
From the paper:
> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800,000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs. Each HIT was designed such that workers manage to earn anywhere between $6-$8 per hour if they work continuously, in line with ethical research standards on Mechanical Turk (Salehi et al., 2015). Visual Genome HITs achieved a 94.1% retention rate, meaning that 94.1% of workers who completed one of our tasks went ahead to do more. [...] 93.02% of workers contributed from the United States. The majority of our workers were between the ages of 25 and 34 years old. Our youngest contributor was 18 years and the oldest was 68 years old. We also had a near-balanced split of 54.15% male and 45.85% female workers.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Visual Genome by Ranjay Krishna is licensed under a Creative Commons Attribution 4.0 International License.
### Citation Information
```bibtex
@article{Krishna2016VisualGC,
title={Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations},
author={Ranjay Krishna and Yuke Zhu and Oliver Groth and Justin Johnson and Kenji Hata and Joshua Kravitz and Stephanie Chen and Yannis Kalantidis and Li-Jia Li and David A. Shamma and Michael S. Bernstein and Li Fei-Fei},
journal={International Journal of Computer Vision},
year={2017},
volume={123},
pages={32-73},
url={https://doi.org/10.1007/s11263-016-0981-7},
doi={10.1007/s11263-016-0981-7}
}
```
### Contributions
Due to limitations of the dummy_data creation, we provide a `fix_generated_dummy_data.py` script that fixes the dataset in-place.
Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset. | visual_genome | [
"task_categories:image-to-text",
"task_categories:object-detection",
"task_categories:visual-question-answering",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-21T12:09:21+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["image-to-text", "object-detection", "visual-question-answering"], "task_ids": ["image-captioning"], "paperswithcode_id": "visual-genome", "pretty_name": "VisualGenome", "config_names": ["objects", "question_answers", "region_descriptions"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "int32"}, {"name": "url", "dtype": "string"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "coco_id", "dtype": "int64"}, {"name": "flickr_id", "dtype": "int64"}, {"name": "regions", "list": [{"name": "region_id", "dtype": "int32"}, {"name": "image_id", "dtype": "int32"}, {"name": "phrase", "dtype": "string"}, {"name": "x", "dtype": "int32"}, {"name": "y", "dtype": "int32"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}]}], "config_name": "region_descriptions_v1.0.0", "splits": [{"name": "train", "num_bytes": 260873884, "num_examples": 108077}], "download_size": 15304605295, "dataset_size": 260873884}} | 2023-06-29T14:23:59+00:00 |
b31afad97a9fada96522cc2f5b080338d4a3f7cd |
# Named Entity Recognition for COVID-19 Bio Entities
The dataset was taken from https://github.com/davidcampos/covid19-corpus
## Dataset
The dataset was then split into several datasets, each one representing one entity. Namely, Disorder, Species, Chemical or Drug, Gene and Protein, Enzyme, Anatomy, Biological Process, Molecular Function, Cellular Component, Pathway and microRNA. Moreover, another dataset was also created containing all of the aforementioned entities that are non-overlapping in nature.
## Dataset Formats
The datasets are available in two formats: IOB and SpaCy's JSONL format.
- IOB: https://github.com/tsantosh7/COVID-19-Named-Entity-Recognition/tree/master/Datasets/BIO
- SpaCy JSONL: https://github.com/tsantosh7/COVID-19-Named-Entity-Recognition/tree/master/Datasets/SpaCy
| tsantosh7/COVID-19_Annotations | [
"license:cc",
"region:us"
] | 2022-04-21T12:57:27+00:00 | {"license": "cc"} | 2022-04-21T13:03:06+00:00 |
b3bbb554daa84ecc2b8c5bfd1b861a55fbabf639 | # PIE Dataset Card for "conll2003"
This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for the
[CoNLL 2003 Huggingface dataset loading script](https://huggingface.co/datasets/conll2003).
## Data Schema
The document type for this dataset is `CoNLL2003Document` which defines the following data fields:
- `text` (str)
- `id` (str, optional)
- `metadata` (dictionary, optional)
and the following annotation layers:
- `entities` (annotation type: `LabeledSpan`, target: `text`)
See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/annotations.py) for the annotation type definitions.
## Document Converters
The dataset provides document converters for the following target document types:
- `pytorch_ie.documents.TextDocumentWithLabeledSpans`
See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type
definitions.
| pie/conll2003 | [
"region:us"
] | 2022-04-21T13:15:40+00:00 | {} | 2024-01-03T13:20:14+00:00 |
f4c8f95b2143cc3d276df440d57f66e9e4ab1346 |
# Dataset Card for RVL-CDIP
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [The RVL-CDIP Dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/)
- **Repository:**
- **Paper:** [Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval](https://arxiv.org/abs/1502.07058)
- **Leaderboard:** [RVL-CDIP leaderboard](https://paperswithcode.com/dataset/rvl-cdip)
- **Point of Contact:** [Adam W. Harley](mailto:aharley@cmu.edu)
### Dataset Summary
The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given document into one of 16 classes representing document types (letter, form, etc.). The leaderboard for this task is available [here](https://paperswithcode.com/sota/document-image-classification-on-rvl-cdip).
### Languages
All the classes and documents use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': <PIL.TiffImagePlugin.TiffImageFile image mode=L size=754x1000 at 0x7F9A5E92CA90>,
'label': 15
}
```
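The integer label can be mapped back to its class name via the split's `ClassLabel` feature; a minimal sketch (note that the full download is large):
```python
from datasets import load_dataset

rvl = load_dataset("aharley/rvl_cdip", split="train")
sample = rvl[0]
print(rvl.features["label"].int2str(sample["label"]))  # e.g. "memo" for label 15
```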
### Data Fields
- `image`: A `PIL.Image.Image` object containing a document.
- `label`: an `int` classification label.
<details>
<summary>Class Label Mappings</summary>
```json
{
"0": "letter",
"1": "form",
"2": "email",
"3": "handwritten",
"4": "advertisement",
"5": "scientific report",
"6": "scientific publication",
"7": "specification",
"8": "file folder",
"9": "news article",
"10": "budget",
"11": "invoice",
"12": "presentation",
"13": "questionnaire",
"14": "resume",
"15": "memo"
}
```
</details>
### Data Splits
| |train|test|validation|
|----------|----:|----:|---------:|
|# of examples|320000|40000|40000|
The dataset was split in proportions similar to those of ImageNet.
- 320000 images were used for training,
- 40000 images for validation, and
- 40000 images for testing.
## Dataset Creation
### Curation Rationale
From the paper:
> This work makes available a new labelled subset of the IIT-CDIP collection, containing 400,000 document images across 16 categories, useful for training new CNNs for document analysis.
### Source Data
#### Initial Data Collection and Normalization
The same as in the IIT-CDIP collection.
#### Who are the source language producers?
The same as in the IIT-CDIP collection.
### Annotations
#### Annotation process
The same as in the IIT-CDIP collection.
#### Who are the annotators?
The same as in the IIT-CDIP collection.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was curated by the authors - Adam W. Harley, Alex Ufkes, and Konstantinos G. Derpanis.
### Licensing Information
RVL-CDIP is a subset of IIT-CDIP, which came from the [Legacy Tobacco Document Library](https://www.industrydocuments.ucsf.edu/tobacco/), for which license information can be found [here](https://www.industrydocuments.ucsf.edu/help/copyright/).
### Citation Information
```bibtex
@inproceedings{harley2015icdar,
title = {Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval},
author = {Adam W Harley and Alex Ufkes and Konstantinos G Derpanis},
booktitle = {International Conference on Document Analysis and Recognition ({ICDAR})},
year = {2015}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset. | aharley/rvl_cdip | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|iit_cdip",
"language:en",
"license:other",
"arxiv:1502.07058",
"region:us"
] | 2022-04-21T13:21:01+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|iit_cdip"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "rvl-cdip", "pretty_name": "RVL-CDIP", "viewer": false, "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "letter", "1": "form", "2": "email", "3": "handwritten", "4": "advertisement", "5": "scientific report", "6": "scientific publication", "7": "specification", "8": "file folder", "9": "news article", "10": "budget", "11": "invoice", "12": "presentation", "13": "questionnaire", "14": "resume", "15": "memo"}}}}], "splits": [{"name": "train", "num_bytes": 38816373360, "num_examples": 320000}, {"name": "test", "num_bytes": 4863300853, "num_examples": 40000}, {"name": "validation", "num_bytes": 4868685208, "num_examples": 40000}], "download_size": 38779484559, "dataset_size": 48548359421}} | 2023-05-02T08:06:16+00:00 |
36076b03a64c3dc168fa7222da61de07b6eac67e |
# Dataset Card for Goud summarization dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[Needs More Information]
- **Repository:**[Needs More Information]
- **Paper:**[Goud.ma: a News Article Dataset for Summarization in Moroccan Darija](https://openreview.net/forum?id=BMVq5MELb9)
- **Leaderboard:**[Needs More Information]
- **Point of Contact:**[Needs More Information]
### Dataset Summary
Goud-sum contains 158k articles and their headlines extracted from the [Goud.ma](https://www.goud.ma/) news website. The articles are written in the Arabic script. All headlines are in Moroccan Darija, while articles may be in Moroccan Darija, in Modern Standard Arabic, or a mix of both (code-switched Moroccan Darija).
### Supported Tasks and Leaderboards
Text Summarization
### Languages
* Moroccan Arabic (Darija)
* Modern Standard Arabic
## Dataset Structure
### Data Instances
The dataset consists of article-headline pairs in string format.
### Data Fields
* article: a string containing the body of the news article
* headline: a string containing the article's headline
* categories: a list of string of article categories
### Data Splits
Goud-sum dataset has 3 splits: _train_, _validation_, and _test_. Below are the number of instances in each split.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 139,288 |
| Validation | 9,497 |
| Test | 9,497 |
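A minimal loading sketch with the 🤗 `datasets` library, assuming the hub id `Goud/Goud-sum` under which this card is published:
```python
from datasets import load_dataset

goud = load_dataset("Goud/Goud-sum")
example = goud["train"][0]
print(example["article"][:200])  # body of the news article
print(example["headline"])       # Moroccan Darija headline
```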
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The text was written by journalists at [Goud](https://www.goud.ma/).
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{issam2022goudma,
title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija},
author={Abderrahmane Issam and Khalil Mrini},
booktitle={3rd Workshop on African Natural Language Processing},
year={2022},
url={https://openreview.net/forum?id=BMVq5MELb9}
}
```
### Contributions
Thanks to [@issam9](https://github.com/issam9) and [@KhalilMrini](https://github.com/KhalilMrini) for adding this dataset.
| Goud/Goud-sum | [
"task_categories:summarization",
"task_ids:news-articles-headline-generation",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"size_categories:100K<n<1M",
"source_datasets:original",
"region:us"
] | 2022-04-21T14:25:00+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": [], "license": [], "multilinguality": [], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-headline-generation"], "pretty_name": "Goud-sum"} | 2022-07-04T15:02:36+00:00 |
24eab2c29829f2672c4a9516f0d7aa750b88ba61 | Dataset for API: https://github.com/eleldar/Translation
Test English-Russian dataset:
```
DatasetDict({
normal: Dataset({
features: ['en', 'ru'],
num_rows: 2009
})
short: Dataset({
features: ['en', 'ru'],
num_rows: 2664
})
train: Dataset({
features: ['en', 'ru'],
num_rows: 1660
})
validation: Dataset({
features: ['en', 'ru'],
num_rows: 208
})
test: Dataset({
features: ['en', 'ru'],
num_rows: 4170
})
})
```
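A loading sketch that reproduces the structure above (assuming the hub id `eleldar/sub_train-normal_tests-datasets` under which this card is published):
```python
from datasets import load_dataset

data = load_dataset("eleldar/sub_train-normal_tests-datasets")
for split, ds in data.items():
    print(split, ds.num_rows)  # normal, short, train, validation, test
```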
The dataset was built from the following tables:
* https://github.com/eleldar/Translator/blob/master/test_dataset/flores101_dataset/101_languages.xlsx?raw=true
* https://github.com/eleldar/Translator/blob/master/test_dataset/normal.xlsx?raw=true
* https://github.com/eleldar/Translator/blob/master/test_dataset/corrected_vocab.xlsx?raw=true | eleldar/sub_train-normal_tests-datasets | [
"region:us"
] | 2022-04-21T14:25:32+00:00 | {} | 2022-06-16T10:19:47+00:00 |
1f2761557622d85a47d719882e5e8654f2c4dec1 | # GEM Submission
Submission name: SeqPlan-SportSett
| GEM-submissions/ratishsp__seqplan-sportsett__1650556902 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-04-21T15:01:43+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "SeqPlan-SportSett", "tags": ["evaluation", "benchmark"]} | 2022-04-21T15:01:45+00:00 |
1f2a598128b862851ba63f35a9d7c277c005e2d7 | ## Overview
Original dataset available [here](https://gluebenchmark.com/diagnostics).
## Dataset curation
Filled in the empty rows of the columns "lexical semantics", "predicate-argument structure", "logic", and "knowledge" with the empty string `""`.
Labels are encoded as follows
```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```
## Code to create dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset
df = pd.read_csv("<path to file>/diagnostic-full.tsv", sep="\t")
# column names to lower
df.columns = df.columns.str.lower()
# fill na
assert df["label"].isna().sum() == 0
df = df.fillna("")
# encode labels
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"lexical semantics": Value(dtype="string", id=None),
"predicate-argument structure": Value(dtype="string", id=None),
"logic": Value(dtype="string", id=None),
"knowledge": Value(dtype="string", id=None),
"domain": Value(dtype="string", id=None),
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
})
dataset = Dataset.from_pandas(df, features=features)
dataset.push_to_hub("glue_diagnostics", token="<token>", split="test")
```
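Once pushed, the dataset can be loaded back as follows:
```python
from datasets import load_dataset

diagnostics = load_dataset("pietrolesci/glue_diagnostics", split="test")
print(diagnostics.features["label"].names)  # ['entailment', 'neutral', 'contradiction']
```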
| pietrolesci/glue_diagnostics | [
"region:us"
] | 2022-04-21T15:46:38+00:00 | {} | 2022-04-21T15:51:56+00:00 |
bb68655c6b6f1431cdf2b90239cbf2fb5e52f3cd |
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-other)
- **Point of Contact:** [Daniel Povey](mailto:dpovey@gmail.com)
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean and ranks models based on their WER.
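As an illustration of the metric, here is a minimal WER sketch (not the official scoring script; it assumes the third-party `jiwer` package and upper-cased transcripts as in this corpus):

```python
import jiwer

# Compare a reference transcript with a hypothetical model output.
reference = "A MAN SAID TO THE UNIVERSE SIR I EXIST"
hypothesis = "A MAN SAID TO THE UNIVERSE SIR I EXISTS"

# WER = (substitutions + deletions + insertions) / reference word count.
error_rate = jiwer.wer(reference, hypothesis)
print(f"WER: {error_rate:.3f}")  # 1 substitution over 9 words -> 0.111
```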
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
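For example (a minimal sketch; the repository id and config name are assumptions based on this card, and `Audio` casting is the standard `datasets` mechanism for on-the-fly resampling):

```python
from datasets import load_dataset, Audio

# Assumed repo id/config; adjust to the copy of LibriSpeech you are using.
dataset = load_dataset("librispeech_asr", "clean", split="validation")

# Query the sample index first, then the "audio" column: only one file is decoded.
sample = dataset[0]["audio"]
print(sample["sampling_rate"], len(sample["array"]))

# Resample on the fly by casting the audio column to another sampling rate.
dataset = dataset.cast_column("audio", Audio(sampling_rate=8_000))
```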
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
| patrickvonplaten/librispeech_asr_self_contained | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-21T16:06:19+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "task_ids": ["audio-speaker-identification"], "paperswithcode_id": "librispeech-1", "pretty_name": "LibriSpeech"} | 2022-10-24T16:48:37+00:00 |
996e72dea151ca0856d1d16efd71f560b18da817 |
# Dataset Card for XLEL-WD-Dictionary
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** <https://github.com/adithya7/xlel-wd>
- **Repository:** <https://github.com/adithya7/xlel-wd>
- **Paper:** <https://arxiv.org/abs/2204.06535>
- **Leaderboard:** N/A
- **Point of Contact:** Adithya Pratapa
### Dataset Summary
XLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles.
### Supported Tasks and Leaderboards
This dictionary can be used as a part of the event linking task.
### Languages
This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.
| Language | Code | Language | Code | Language | Code | Language | Code |
| -------- | ---- | -------- | ---- | -------- | ---- | -------- | ---- |
| Afrikaans | af | Arabic | ar | Belarusian | be | Bulgarian | bg |
| Bengali | bn | Catalan | ca | Czech | cs | Danish | da |
| German | de | Greek | el | English | en | Spanish | es |
| Persian | fa | Finnish | fi | French | fr | Hebrew | he |
| Hindi | hi | Hungarian | hu | Indonesian | id | Italian | it |
| Japanese | ja | Korean | ko | Malayalam | ml | Marathi | mr |
| Malay | ms | Dutch | nl | Norwegian | no | Polish | pl |
| Portuguese | pt | Romanian | ro | Russian | ru | Sinhala | si |
| Slovak | sk | Slovene | sl | Serbian | sr | Swedish | sv |
| Swahili | sw | Tamil | ta | Telugu | te | Thai | th |
| Turkish | tr | Ukrainian | uk | Vietnamese | vi | Chinese | zh |
## Dataset Structure
### Data Instances
Each instance in the `label_dict.jsonl` file follows the below template,
```json
{
"label_id": "830917",
"label_title": "2010 European Aquatics Championships",
"label_desc": "The 2010 European Aquatics Championships were held from 4–15 August 2010 in Budapest and Balatonfüred, Hungary. It was the fourth time that the city of Budapest hosts this event after 1926, 1958 and 2006. Events in swimming, diving, synchronised swimming (synchro) and open water swimming were scheduled.",
"label_lang": "en"
}
```
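A minimal sketch for reading the dictionary (assuming a local copy of `label_dict.jsonl`; field names follow the template above):
```python
import json
from collections import defaultdict

# Read the Wikidata event dictionary (JSON Lines: one event description per line).
by_event = defaultdict(list)
with open("label_dict.jsonl", encoding="utf-8") as f:
    for line in f:
        item = json.loads(line)
        # Group multilingual descriptions by Wikidata event ID.
        by_event[item["label_id"]].append((item["label_lang"], item["label_desc"]))

print(len(by_event), "events loaded")
```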
### Data Fields
| Field | Meaning |
| ----- | ------- |
| `label_id` | Wikidata ID |
| `label_title` | Title for the event, as collected from the corresponding Wikipedia article |
| `label_desc` | Description for the event, as collected from the corresponding Wikipedia article |
| `label_lang` | language used for the title and description |
### Data Splits
This dictionary has a single split, `dictionary`. It contains 10947 event items from Wikidata and a total of 114834 text descriptions collected from multilingual Wikipedia articles.
## Dataset Creation
### Curation Rationale
This dataset helps address the task of event linking. KB linking is extensively studied for entities, but it is unclear whether the same methodologies can be extended to linking mentions to events from a KB. Event items are collected from Wikidata.
### Source Data
#### Initial Data Collection and Normalization
A Wikidata item is considered a potential event if it has spatial and temporal properties. The final event set is collected after post-processing for quality control.
#### Who are the source language producers?
The titles and descriptions for the events are written by Wikipedia contributors.
### Annotations
#### Annotation process
This dataset was automatically compiled from Wikidata. It was post-processed to improve data quality.
#### Who are the annotators?
Wikidata and Wikipedia contributors.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
This dictionary primarily contains eventive nouns from Wikidata. It does not include other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676), and war (Q198).
## Additional Information
### Dataset Curators
The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at [Github:xlel-wd](https://github.com/adithya7/xlel-wd).
### Licensing Information
XLEL-WD dataset is released under [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```bib
@article{pratapa-etal-2022-multilingual,
title = {Multilingual Event Linking to Wikidata},
author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2204.06535},
}
```
### Contributions
Thanks to [@adithya7](https://github.com/adithya7) for adding this dataset.
| adithya7/xlel_wd_dictionary | [
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:af",
"language:ar",
"language:be",
"language:bg",
"language:bn",
"language:ca",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fa",
"language:fi",
"language:fr",
"language:he",
"language:hi",
"language:hu",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:ml",
"language:mr",
"language:ms",
"language:nl",
"language:no",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:si",
"language:sk",
"language:sl",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tr",
"language:uk",
"language:vi",
"language:zh",
"license:cc-by-4.0",
"arxiv:2204.06535",
"region:us"
] | 2022-04-22T01:36:27+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["af", "ar", "be", "bg", "bn", "ca", "cs", "da", "de", "el", "en", "es", "fa", "fi", "fr", "he", "hi", "hu", "id", "it", "ja", "ko", "ml", "mr", "ms", "nl", "no", "pl", "pt", "ro", "ru", "si", "sk", "sl", "sr", "sv", "sw", "ta", "te", "th", "tr", "uk", "vi", "zh"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": [], "task_ids": [], "pretty_name": "XLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles."} | 2022-07-01T16:30:21+00:00 |
a6d542d37b24cc1f2536af5e4afb850b9641e3ff |
# Dataset Card for XLEL-WD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** <https://github.com/adithya7/xlel-wd>
- **Repository:** <https://github.com/adithya7/xlel-wd>
- **Paper:** <https://arxiv.org/abs/2204.06535>
- **Leaderboard:** N/A
- **Point of Contact:** Adithya Pratapa
### Dataset Summary
XLEL-WD is a multilingual event linking dataset. This dataset repo contains mention references in multilingual Wikipedia/Wikinews articles to event items from Wikidata.
The descriptions for Wikidata event items were collected from the corresponding Wikipedia articles. Download the event dictionary from [adithya7/xlel_wd_dictionary](https://huggingface.co/datasets/adithya7/xlel_wd_dictionary).
### Supported Tasks and Leaderboards
This dataset can be used for the task of event linking. There are two variants of the task, multilingual and crosslingual.
- Multilingual linking: mention and the event descriptions are in the same language.
- Crosslingual linking: the event descriptions are only available in English.
### Languages
This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.
| Language | Code | Language | Code | Language | Code | Language | Code |
| -------- | ---- | -------- | ---- | -------- | ---- | -------- | ---- |
| Afrikaans | af | Arabic | ar | Belarusian | be | Bulgarian | bg |
| Bengali | bn | Catalan | ca | Czech | cs | Danish | da |
| German | de | Greek | el | English | en | Spanish | es |
| Persian | fa | Finnish | fi | French | fr | Hebrew | he |
| Hindi | hi | Hungarian | hu | Indonesian | id | Italian | it |
| Japanese | ja | Korean | ko | Malayalam | ml | Marathi | mr |
| Malay | ms | Dutch | nl | Norwegian | no | Polish | pl |
| Portuguese | pt | Romanian | ro | Russian | ru | Sinhala | si |
| Slovak | sk | Slovene | sl | Serbian | sr | Swedish | sv |
| Swahili | sw | Tamil | ta | Telugu | te | Thai | th |
| Turkish | tr | Ukrainian | uk | Vietnamese | vi | Chinese | zh |
## Dataset Structure
### Data Instances
Each instance in the `train.jsonl`, `dev.jsonl` and `test.jsonl` files follow the below template.
```json
{
"context_left": "Minibaev's first major international medal came in the men's synchronized 10 metre platform event at the ",
"mention": "2010 European Championships",
"context_right": ".",
"context_lang": "en",
"label_id": "830917",
}
```
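The paragraph around a mention can be reconstructed by simple concatenation, for example (a minimal sketch; `example` is one instance in the format above):
```python
# One instance dict, as in the template above.
example = {
    "context_left": "Minibaev's first major international medal came in the men's "
                    "synchronized 10 metre platform event at the ",
    "mention": "2010 European Championships",
    "context_right": ".",
    "context_lang": "en",
    "label_id": "830917",
}

# Rebuild the full paragraph and the target Wikidata item.
paragraph = example["context_left"] + example["mention"] + example["context_right"]
target = "Q" + example["label_id"]  # label_id 830917 refers to Q830917
print(paragraph, "->", target)
```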
### Data Fields
| Field | Meaning |
| ----- | ------- |
| `mention` | text span of the mention |
| `context_left` | left paragraph context from the document |
| `context_right` | right paragraph context from the document |
| `context_lang` | language of the context (and mention) |
| `context_title` | document title of the mention (only Wikinews subset) |
| `context_date` | document publication date of the mention (only Wikinews subset) |
| `label_id` | Wikidata label ID for the event. E.g. 830917 refers to Q830917 from Wikidata. |
### Data Splits
The Wikipedia-based corpus has three splits. This is a zero-shot evaluation setup.
| | Train | Dev | Test | Total |
| ---- | :-----: | :---: | :----: | :-----: |
| Events | 8653 | 1090 | 1204 | 10947 |
| Event Sequences | 6758 | 844 | 846 | 8448 |
| Mentions | 1.44M | 165K | 190K | 1.8M |
| Languages | 44 | 44 | 44 | 44 |
The Wikinews-based evaluation set has two variants, one for cross-domain evaluation and another for zero-shot evaluation.
| | (Cross-domain) Test | (Zero-shot) Test |
| --- | :------------------: | :-----: |
| Events | 802 | 149 |
| Mentions | 2562 | 437 |
| Languages | 27 | 21 |
## Dataset Creation
### Curation Rationale
This dataset helps address the task of event linking. KB linking is extensively studied for entities, but it is unclear whether the same methodologies can be extended to linking mentions to events from a KB. We use Wikidata as our KB, as it allows for linking mentions from multilingual Wikipedia and Wikinews articles.
### Source Data
#### Initial Data Collection and Normalization
First, we utilize spatial & temporal properties from Wikidata to identify event items. Second, we identify corresponding multilingual Wikipedia pages for each Wikidata event item. Third, we pool hyperlinks from multilingual Wikipedia & Wikinews articles to these event items.
#### Who are the source language producers?
The documents in XLEL-WD are written by Wikipedia and Wikinews contributors in respective languages.
### Annotations
#### Annotation process
This dataset was originally collected automatically from Wikipedia, Wikinews and Wikidata. It was post-processed to improve data quality.
#### Who are the annotators?
The annotations in XLEL-WD (hyperlinks from Wikipedia/Wikinews to Wikidata) are added by the original Wiki contributors.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
XLEL-WD v1.0.0 mostly caters to eventive nouns from Wikidata. It does not include any links to other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676) and war (Q198).
## Additional Information
### Dataset Curators
The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at [Github:xlel-wd](https://github.com/adithya7/xlel-wd).
### Licensing Information
XLEL-WD dataset is released under [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```bib
@article{pratapa-etal-2022-multilingual,
title = {Multilingual Event Linking to Wikidata},
author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2204.06535},
}
```
### Contributions
Thanks to [@adithya7](https://github.com/adithya7) for adding this dataset.
| adithya7/xlel_wd | [
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:af",
"language:ar",
"language:be",
"language:bg",
"language:bn",
"language:ca",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fa",
"language:fi",
"language:fr",
"language:he",
"language:hi",
"language:hu",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:ml",
"language:mr",
"language:ms",
"language:nl",
"language:no",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:si",
"language:sk",
"language:sl",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tr",
"language:uk",
"language:vi",
"language:zh",
"license:cc-by-4.0",
"arxiv:2204.06535",
"region:us"
] | 2022-04-22T01:50:11+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["af", "ar", "be", "bg", "bn", "ca", "cs", "da", "de", "el", "en", "es", "fa", "fi", "fr", "he", "hi", "hu", "id", "it", "ja", "ko", "ml", "mr", "ms", "nl", "no", "pl", "pt", "ro", "ru", "si", "sk", "sl", "sr", "sv", "sw", "ta", "te", "th", "tr", "uk", "vi", "zh"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": [], "task_ids": [], "pretty_name": "XLEL-WD is a multilingual event linking dataset. This dataset contains mention references in multilingual Wikipedia/Wikinews articles to event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding Wikipedia articles."} | 2022-07-13T06:46:57+00:00 |
8938cae73cbe122b5af2f8d483f48e3112f533e6 | Lumos/imdb_test | [
"license:apache-2.0",
"region:us"
] | 2022-04-22T02:10:09+00:00 | {"license": "apache-2.0"} | 2022-04-22T02:11:35+00:00 |
|
a156ba94142aa70a7ed31153a815f3990d87ff03 | # Dataset Card for [FrozenLake-v1] with slippery = True
| AntoineLB/FrozenLakeFrozen | [
"region:us"
] | 2022-04-22T06:06:34+00:00 | {} | 2022-04-22T06:57:15+00:00 |
b9dee7e7cf675ed6f2b97378b8de74920162b617 | ## Overview
Original dataset [here](https://github.com/felipessalvatore/NLI_datasets).
Below the original description reported for convenience.
```latex
@MISC{Fracas96,
author = {{The Fracas Consortium} and Robin Cooper and Dick Crouch and Jan Van Eijck and Chris Fox and Josef Van Genabith and Jan Jaspars and Hans Kamp and David Milward and Manfred Pinkal and Massimo Poesio and Steve Pulman and Ted Briscoe and Holger Maier and Karsten Konrad},
title = {Using the Framework},
year = {1996}
}
```
Adapted from [https://nlp.stanford.edu/~wcmac/downloads/fracas.xml](https://nlp.stanford.edu/~wcmac/downloads/fracas.xml). We took `P1, ..., Pn` as premise and H as hypothesis. Labels have been mapped as follows `{'yes': "entailment", 'no': 'contradiction', 'undef': "neutral", 'unknown': "neutral"}`. And we randomly split 80/20 for train/dev.
## Dataset curation
One hypothesis in the dev set and three hypotheses in the train set are empty and have been
filled in with the empty string `""`. Labels are encoded with custom NLI mapping, that is
```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```
## Code to create the dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset
from pathlib import Path
# load datasets
path = Path("<path to folder>/nli_datasets")
datasets = {}
for dataset_path in path.iterdir():
datasets[dataset_path.name] = {}
for name in dataset_path.iterdir():
df = pd.read_csv(name)
datasets[dataset_path.name][name.name.split(".")[0]] = df
ds = {}
for name, df_ in datasets["fracas"].items():
df = df_.copy()
assert df["label"].isna().sum() == 0
# fill-in empty hypothesis
df = df.fillna("")
# encode labels
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
})
ds[name] = Dataset.from_pandas(df, features=features)
dataset = DatasetDict(ds)
dataset.push_to_hub("fracas", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
ds[i].to_pandas(),
ds[j].to_pandas(),
on=["label", "premise", "hypothesis"],
how="inner",
).shape[0],
)
#> train - dev: 0
``` | pietrolesci/fracas | [
"region:us"
] | 2022-04-22T07:35:48+00:00 | {} | 2022-04-25T07:40:07+00:00 |
af87ac826a01c8ce7aaed0015c8710cee48007bc | ---
licenses:
- cc-by-2-0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: tatoeba
pretty_name: Tatoeba
---
# Dataset Card for Tatoeba
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/Tatoeba.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
Tatoeba is a collection of sentences and translations.
To load a language pair that isn't part of the predefined configs, all you need to do is specify the two language codes as a pair.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/Tatoeba.php
E.g.
`dataset = load_dataset("tatoeba", lang1="en", lang2="he")`
The default date is v2021-07-22, but you can also change the date with
`dataset = load_dataset("tatoeba", lang1="en", lang2="he", date="v2020-11-09")`
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[@loretoparisi](https://github.com/loretoparisi)
| loretoparisi/tatoeba-sentences | [
"region:us"
] | 2022-04-22T07:48:18+00:00 | {"license": "cc-by-2-0"} | 2022-04-27T16:26:31+00:00 |
36dbc520e45ddad0b14c6526ebbae8ed01bc5d7c | ## Overview
Original dataset is available on the HuggingFace Hub [here](https://huggingface.co/datasets/scitail).
## Dataset curation
This is the same as the `snli_format` split of the SciTail dataset available on the HuggingFace Hub (i.e., same data, same splits, etc).
The only differences are the following:
- selecting only the columns `["sentence1", "sentence2", "gold_label", "label"]`
- renaming columns with the following mapping `{"sentence1": "premise", "sentence2": "hypothesis"}`
- creating a new column "label" from "gold_label" with the following mapping `{"entailment": "entailment", "neutral": "not_entailment"}`
- encoding labels with the following mapping `{"not_entailment": 0, "entailment": 1}`
Note that there are 10 overlapping instances (as found by merging on columns "label", "premise", and "hypothesis") between
`train` and `test` splits.
## Code to create the dataset
```python
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset
# load datasets from the Hub
dd = load_dataset("scitail", "snli_format")
ds = {}
for name, df_ in dd.items():
df = df_.to_pandas()
# select important columns
df = df[["sentence1", "sentence2", "gold_label"]]
# rename columns
df = df.rename(columns={"sentence1": "premise", "sentence2": "hypothesis"})
# encode labels
df["label"] = df["gold_label"].map({"entailment": "entailment", "neutral": "not_entailment"})
df["label"] = df["label"].map({"not_entailment": 0, "entailment": 1})
# cast to dataset
features = Features({
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=2, names=["not_entailment", "entailment"]),
})
ds[name] = Dataset.from_pandas(df, features=features)
dataset = DatasetDict(ds)
dataset.push_to_hub("scitail", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(dataset.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
dataset[i].to_pandas(),
dataset[j].to_pandas(),
on=["label", "premise", "hypothesis"],
how="inner",
).shape[0],
)
#> train - test: 10
#> train - validation: 0
#> test - validation: 0
``` | pietrolesci/scitail | [
"region:us"
] | 2022-04-22T08:06:21+00:00 | {} | 2022-04-25T09:40:47+00:00 |
2dceb8142327bf9eac3ff8927e2f39533a4afc8e |
# TermITH-Eval Benchmark Dataset for Keyphrase Generation
## About
TermITH-Eval is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 400 abstracts of scientific papers in French collected from the FRANCIS and PASCAL databases of the French [Institute for Scientific and Technical Information (Inist)](https://www.inist.fr/).
Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the dataset can be found in the original paper [(Bougouin et al., 2016)][bougouin-2016].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021]. Present reference keyphrases are also ordered by their order of apparition in the concatenation of title and abstract.
Text pre-processing (tokenization) is carried out using `spacy` (`fr_core_news_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Snowball stemmer implementation for french provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
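As an illustration of the matching step, here is a minimal sketch (not the exact `prmu.py` logic; it assumes `nltk`'s French Snowball stemmer, as stated above):
```python
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("french")

def stem_tokens(tokens):
    """Stem French tokens before matching keyphrases against the source text."""
    return [stemmer.stem(t) for t in tokens]

# A keyphrase counts as "present" if its stemmed form occurs contiguously
# in the stemmed source text.
text = " ".join(stem_tokens("les changements climatiques observés".split()))
keyphrase = " ".join(stem_tokens("changement climatique".split()))
is_present = keyphrase in text
```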
## Content and statistics
The dataset contains the following test split:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- |------------:|-----------:|-------------:|----------:|------------:|--------:|---------:|
| Test | 399 | 156.9 | 11.81 | 40.60 | 7.32 | 19.28 | 32.80 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **category**: category of the document, i.e. chimie (chemistry), archeologie (archeology), linguistique (linguistics) and scienceInfo (information sciences).
## References
- (Bougouin et al., 2016) Adrien Bougouin, Sabine Barreaux, Laurent Romary, Florian Boudin, and Béatrice Daille. 2016.
[TermITH-Eval: a French Standard-Based Resource for Keyphrase Extraction Evaluation][bougouin-2016].
In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1924–1927, Portorož, Slovenia. European Language Resources Association (ELRA).
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[bougouin-2016]: https://aclanthology.org/L16-1304/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ | taln-ls2n/termith-eval | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:multilingual",
"size_categories:n<1K",
"language:fr",
"license:cc-by-4.0",
"region:us"
] | 2022-04-22T08:09:23+00:00 | {"annotations_creators": ["unknown"], "language_creators": ["unknown"], "language": ["fr"], "license": "cc-by-4.0", "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "task_categories": ["text-mining", "text-generation"], "task_ids": ["keyphrase-generation", "keyphrase-extraction"], "pretty_name": "TermITH-Eval"} | 2022-09-23T06:49:04+00:00 |
8a11d2b48a0276e70d77b4eb21e3078415a10822 | ROOTS Subset: roots_ar_uncorpus
# uncorpus
- Dataset uid: `uncorpus`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.8023 % of total
- 10.7390 % of ar
- 5.7970 % of fr
- 9.7477 % of es
- 2.0417 % of en
- 1.2540 % of zh
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
| bigscience-data/roots_ar_uncorpus | [
"language:ar",
"license:cc-by-4.0",
"region:us"
] | 2022-04-22T09:23:52+00:00 | {"language": "ar", "license": "cc-by-4.0", "extra_gated_prompt": "By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at:\nhttps://hf.co/spaces/bigscience/ethical-charter", "extra_gated_fields": {"I have read and agree to abide by the BigScience Ethical Charter": "checkbox"}} | 2022-12-12T10:59:32+00:00 |
32c8b25b9390fcbf17012195d7480d1b91e7f751 | ROOTS Subset: roots_en_uncorpus
# uncorpus
- Dataset uid: `uncorpus`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.8023 % of total
- 10.7390 % of ar
- 5.7970 % of fr
- 9.7477 % of es
- 2.0417 % of en
- 1.2540 % of zh
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
| bigscience-data/roots_en_uncorpus | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-22T09:26:12+00:00 | {"language": "en", "license": "cc-by-4.0", "extra_gated_prompt": "By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at:\nhttps://hf.co/spaces/bigscience/ethical-charter", "extra_gated_fields": {"I have read and agree to abide by the BigScience Ethical Charter": "checkbox"}} | 2022-12-12T10:59:37+00:00 |
3c479087d05129205cecc815ce199ce803c66149 | ROOTS Subset: roots_es_uncorpus
# uncorpus
- Dataset uid: `uncorpus`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.8023 % of total
- 10.7390 % of ar
- 5.7970 % of fr
- 9.7477 % of es
- 2.0417 % of en
- 1.2540 % of zh
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
| bigscience-data/roots_es_uncorpus | [
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-04-22T09:28:27+00:00 | {"language": "es", "license": "cc-by-4.0", "extra_gated_prompt": "By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at:\nhttps://hf.co/spaces/bigscience/ethical-charter", "extra_gated_fields": {"I have read and agree to abide by the BigScience Ethical Charter": "checkbox"}} | 2022-12-12T10:59:42+00:00 |
020c5babdd484dc981b357a153db664beb1fdbba | ROOTS Subset: roots_fr_uncorpus
# uncorpus
- Dataset uid: `uncorpus`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.8023 % of total
- 10.7390 % of ar
- 5.7970 % of fr
- 9.7477 % of es
- 2.0417 % of en
- 1.2540 % of zh
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
| bigscience-data/roots_fr_uncorpus | [
"language:fr",
"license:cc-by-4.0",
"region:us"
] | 2022-04-22T09:30:47+00:00 | {"language": "fr", "license": "cc-by-4.0", "extra_gated_prompt": "By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at:\nhttps://hf.co/spaces/bigscience/ethical-charter", "extra_gated_fields": {"I have read and agree to abide by the BigScience Ethical Charter": "checkbox"}} | 2022-12-12T10:29:02+00:00 |
f7782c950faee7385b25eef0bb0499009e6df956 | ROOTS Subset: roots_zh_uncorpus
# uncorpus
- Dataset uid: `uncorpus`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.8023 % of total
- 10.7390 % of ar
- 5.7970 % of fr
- 9.7477 % of es
- 2.0417 % of en
- 1.2540 % of zh
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
| bigscience-data/roots_zh_uncorpus | [
"language:zh",
"license:cc-by-4.0",
"region:us"
] | 2022-04-22T09:33:31+00:00 | {"language": "zh", "license": "cc-by-4.0", "extra_gated_prompt": "By accessing this dataset, you agree to abide by the BigScience Ethical Charter. The charter can be found at:\nhttps://hf.co/spaces/bigscience/ethical-charter", "extra_gated_fields": {"I have read and agree to abide by the BigScience Ethical Charter": "checkbox"}} | 2022-12-12T10:59:49+00:00 |
3671c49f3c072e6ec8047f15926db10e02de487c |
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# Dataset Card for HiNER-original
[](https://twitter.com/cfiltnlp)
[](https://twitter.com/PeopleCentredAI)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/cfiltnlp/HiNER
- **Repository:** https://github.com/cfiltnlp/HiNER
- **Paper:** https://arxiv.org/abs/2204.13743
- **Leaderboard:** https://paperswithcode.com/sota/named-entity-recognition-on-hiner-collapsed
- **Point of Contact:** Rudra Murthy V
### Dataset Summary
This dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy.
**Note:** The dataset contains sentences from ILCI and other sources. The ILCI dataset requires a license from the Indian Language Consortium, so we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset.
### Supported Tasks and Leaderboards
Named Entity Recognition
### Languages
Hindi
## Dataset Structure
### Data Instances
{'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]}
### Data Fields
- `id`: The ID value of the data point.
- `tokens`: Raw tokens in the dataset.
- `ner_tags`: the NER tags for this dataset.
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| original | 76025 | 10861 | 21722|
| collapsed | 76025 | 10861 | 21722|
## About
This repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Language Resources and Evaluation conference (LREC) in 2022. A pre-print via arXiv is available [here](https://arxiv.org/abs/2204.13743).
### Recent Updates
* Version 0.0.5: HiNER initial release
## Usage
You should have the 'datasets' package installed to be able to use the :rocket: HuggingFace datasets repository. Please use the following command to install it via pip:
```code
pip install datasets
```
To use the original dataset with all the tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-original')
```
To use the collapsed dataset with only PER, LOC, and ORG tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-collapsed')
```
However, the CoNLL format dataset files can also be found on this Git repository under the [data](data/) folder.
## Model(s)
Our best performing models are hosted on the HuggingFace models repository:
1. [HiNER-Collapsed-XLM-R](https://huggingface.co/cfilt/HiNER-Collapse-XLM-Roberta-Large)
2. [HiNER-Original-XLM-R](https://huggingface.co/cfilt/HiNER-Original-XLM-Roberta-Large)
## Dataset Creation
### Curation Rationale
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. The dataset was built for the task of Named Entity Recognition and was introduced to provide new resources for Hindi, a language under-served in Natural Language Processing.
### Source Data
#### Initial Data Collection and Normalization
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi
#### Who are the source language producers?
Various Government of India webpages
### Annotations
#### Annotation process
This dataset was manually annotated by a single annotator over a long span of time.
#### Who are the annotators?
Pallab Bhattacharjee
### Personal and Sensitive Information
We ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.
### Discussion of Biases
Any biases contained in the data released by the Indian government are bound to be present in our data.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Pallab Bhattacharjee
### Licensing Information
CC-BY-SA 4.0
### Citation Information
```latex
@misc{https://doi.org/10.48550/arxiv.2204.13743,
doi = {10.48550/ARXIV.2204.13743},
url = {https://arxiv.org/abs/2204.13743},
author = {Murthy, Rudra and Bhattacharjee, Pallab and Sharnagat, Rahul and Khatri, Jyotsana and Kanojia, Diptesh and Bhattacharyya, Pushpak},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {HiNER: A Large Hindi Named Entity Recognition Dataset},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` | cfilt/HiNER-collapsed | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:hi",
"license:cc-by-sa-4.0",
"arxiv:2204.13743",
"region:us"
] | 2022-04-22T09:51:15+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["hi"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "hiner-collapsed-1", "pretty_name": "HiNER - Large Hindi Named Entity Recognition dataset"} | 2023-03-07T16:32:27+00:00 |
1c2747b56b9f6f1f22dbd7ca543447f6a900fc1a | surrey-nlp/SDU-test | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-04-22T10:05:10+00:00 | {"license": "cc-by-sa-4.0"} | 2022-04-24T06:11:10+00:00 |
|
c98da16de9bf6c8c09143b61be6079f85bfd1373 |
# Preprocessed SemEval-2010 Benchmark dataset for Keyphrase Generation
## About
SemEval-2010 is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 244 **full-text** scientific papers collected from the [ACM Digital Library](https://dl.acm.org/).
Keyphrases were annotated by readers and combined with those provided by the authors.
Details about the SemEval-2010 dataset can be found in the original paper [(kim et al., 2010)][kim-2010].
This version of the dataset was produced by [(Boudin et al., 2016)][boudin-2016] and provides four increasingly sophisticated levels of document preprocessing:
* `lvl-1`: default text files provided by the SemEval-2010 organizers.
* `lvl-2`: for each file, we manually retrieved the original PDF file from the ACM Digital Library.
We then extract the enriched textual content of the PDF files using an Optical Character Recognition (OCR) system and perform document logical structure detection using ParsCit v110505.
We use the detected logical structure to remove author-assigned keyphrases and select only relevant elements : title, headers, abstract, introduction, related work, body text and conclusion.
We finally apply a systematic dehyphenation at line breaks.
* `lvl-3`: we further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion.
* `lvl-4`: we abridge the input text from level 3 preprocessed documents using an unsupervised summarization technique.
We keep the title and abstract and select the most content bearing sentences from the remaining contents.
Titles and abstracts, collected from the [SciCorefCorpus](https://github.com/melsk125/SciCorefCorpus), are also provided.
Details about how they were extracted and cleaned up can be found in [(Chaimongkol et al., 2014)][chaimongkol-2014].
Reference keyphrases are provided in stemmed form (because they were provided like this for the test split in the competition).
They are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
The <u>P</u>resent reference keyphrases are also ordered by their order of apparition in the concatenation of title and text (lvl-1).
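A minimal sketch of this matching step (not the exact `prmu.py` code; it assumes `nltk`'s Porter stemmer, as stated above):
```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stem_tokens(tokens):
    """Stem tokens before matching reference keyphrases against the source text."""
    return [stemmer.stem(t.lower()) for t in tokens]

# A reference keyphrase is "Present" if its stems occur contiguously, in order,
# in the stemmed concatenation of title and text.
source = " ".join(stem_tokens("Automatic Keyphrase Extraction from Scientific Articles".split()))
keyphrase = " ".join(stem_tokens("keyphrase extraction".split()))
is_present = keyphrase in source  # True
```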
## Content and statistics
The dataset is divided into the following two splits:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- |------------:|-------:|-------------:|----------:|------------:|--------:|---------:|
| Train | 144 | 184.6 | 15.44 | 42.16 | 7.36 | 26.85 | 23.63 |
| Test | 100 | 203.1 | 14.66 | 40.11 | 8.34 | 27.12 | 24.43 |
Statistics (#words, PRMU distributions) are computed using the title/abstract and not the full text of scientific papers.
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **lvl-1**: content of the document with no text processing.
- **lvl-2**: content of the document retrieved from original PDF files and cleaned up.
- **lvl-3**: content of the document further abridged to relevant sections.
- **lvl-4**: content of the document further abridged using an unsupervised summarization technique.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
## References
- (Kim et al., 2010) Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010.
[SemEval-2010 Task 5 : Automatic Keyphrase Extraction from Scientific Articles][kim-2010].
In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 21–26, Uppsala, Sweden. Association for Computational Linguistics.
- (Chaimongkol et al., 2014) Panot Chaimongkol, Akiko Aizawa, and Yuka Tateisi. 2014.
[Corpus for Coreference Resolution on Scientific Papers][chaimongkol-2014].
In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3187–3190, Reykjavik, Iceland. European Language Resources Association (ELRA).
- (Boudin et al., 2016) Florian Boudin, Hugo Mougard, and Damien Cram. 2016.
[How Document Pre-processing affects Keyphrase Extraction Performance][boudin-2016].
In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 121–128, Osaka, Japan. The COLING 2016 Organizing Committee.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness][boudin-2021].
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[kim-2010]: https://aclanthology.org/S10-1004/
[chaimongkol-2014]: https://aclanthology.org/L14-1259/
[boudin-2016]: https://aclanthology.org/W16-3917/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/
| taln-ls2n/semeval-2010-pre | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-22T11:10:54+00:00 | {"annotations_creators": ["unknown"], "language_creators": ["unknown"], "language": ["en"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-mining", "text-generation"], "task_ids": ["keyphrase-generation", "keyphrase-extraction"], "pretty_name": "Preprocessed SemEval-2010 Benchmark dataset"} | 2022-09-23T06:37:43+00:00 |
5bd658aa3bfea14d2c051f1c7dd34b456bbda4a0 | ## Overview
Original dataset [here](https://github.com/aylai/MultiPremiseEntailment).
## Dataset curation
Same data and splits as the original. The following columns have been added:
- `premise`: concatenation of `premise1`, `premise2`, `premise3`, and `premise4`
- `label`: encoded `gold_label` with the following mapping `{"entailment": 0, "neutral": 1, "contradiction": 2}`
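Once pushed to the Hub, the splits can be loaded directly. This is a minimal sketch, assuming the repository id `pietrolesci/mpe` used in the upload code below:

```python
from datasets import load_dataset

ds = load_dataset("pietrolesci/mpe")

example = ds["train"][0]
print(example["premise"])    # the four premises joined into one string
print(example["gold_label"], example["label"])  # e.g. "entailment", 0
```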
## Code to create the dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict
from pathlib import Path
# read data
path = Path("<path to files>")
datasets = {}
for dataset_path in path.rglob("*.txt"):
df = pd.read_csv(dataset_path, sep="\t")
datasets[dataset_path.name.split("_")[1].split(".")[0]] = df
ds = {}
for name, df_ in datasets.items():
df = df_.copy()
# fix parsing error for dev split
if name == "dev":
# fix parsing error
df.loc[df["contradiction_judgments"] == "3 contradiction", "contradiction_judgments"] = 3
df.loc[df["gold_label"].isna(), "gold_label"] = "contradiction"
# check no nan
assert df.isna().sum().sum() == 0
# fix dtypes
for col in ("entailment_judgments", "neutral_judgments", "contradiction_judgments"):
df[col] = df[col].astype(int)
# fix premise column
for i in range(1, 4 + 1):
df[f"premise{i}"] = df[f"premise{i}"].str.split("/", expand=True)[1]
df["premise"] = df[[f"premise{i}" for i in range(1, 4 + 1)]].agg(" ".join, axis=1)
# encode labels
df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"premise1": Value(dtype="string", id=None),
"premise2": Value(dtype="string", id=None),
"premise3": Value(dtype="string", id=None),
"premise4": Value(dtype="string", id=None),
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"entailment_judgments": Value(dtype="int32"),
"neutral_judgments": Value(dtype="int32"),
"contradiction_judgments": Value(dtype="int32"),
"gold_label": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
})
ds[name] = Dataset.from_pandas(df, features=features)
# push to hub
ds = DatasetDict(ds)
ds.push_to_hub("mpe", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
ds[i].to_pandas(),
ds[j].to_pandas(),
on=["premise", "hypothesis", "label"],
how="inner",
).shape[0],
)
#> dev - test: 0
#> dev - train: 0
#> test - train: 0
``` | pietrolesci/mpe | [
"region:us"
] | 2022-04-22T11:38:29+00:00 | {} | 2022-04-25T08:00:18+00:00 |
a5bdde974239556a20e6fc1624c2e32ee20b0c6a | ## Overview
Original data available [here](http://www.seas.upenn.edu/~nlp/resources/AN-composition.tgz).
## Dataset curation
`premise` and `hypothesis` columns have been cleaned following common practices ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/add_one_rte.py#L51-L52), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_add_1_rte.py#L31-L32)), that is
- remove HTML tags `<b>`, `<u>`, `</b>`, `</u>`
- normalize repeated white spaces
- strip leading and trailing whitespace
`mean_human_score` has been transformed into class labels following common practices ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/add_one_rte.py#L20-L35), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_add_1_rte.py#L6-L17)), that is
- for test set: `mean_human_score <= 3 -> "not-entailed"` and `mean_human_score >= 4 -> "entailed"` (anything between 3 and 4 has been removed)
- for all other splits: `mean_human_score < 3.5 -> "not-entailed"` else `"entailed"`
More details can be found in the code below; a quick loading sketch follows.
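This is a minimal sketch, assuming the repository id `pietrolesci/add_one_rte` used in the upload code below:

```python
from datasets import load_dataset

ds = load_dataset("pietrolesci/add_one_rte")

example = ds["train"][0]
# label is a ClassLabel: 0 = "not-entailed", 1 = "entailed"
print(example["mean_human_score"], example["label"])
```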
## Code to generate the dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict
def convert_label(score, is_test):
if is_test:
if score <= 3:
return "not-entailed"
elif score >= 4:
return "entailed"
return "REMOVE"
if score < 3.5:
return "not-entailed"
return "entailed"
ds = {}
for split in ("dev", "test", "train"):
# read data
df = pd.read_csv(f"<path to folder>/AN-composition/addone-entailment/splits/data.{split}", sep="\t", header=None)
df.columns = ["mean_human_score", "binary_label", "sentence_id", "adjective", "noun", "premise", "hypothesis"]
# clean text from html tags and useless spaces
for col in ("premise", "hypothesis"):
df[col] = (
df[col]
.str.replace("(<b>)|(<u>)|(</b>)|(</u>)", " ", regex=True)
.str.replace(" {2,}", " ", regex=True)
.str.strip()
)
# encode labels
if split == "test":
df["label"] = df["mean_human_score"].map(lambda x: convert_label(x, True))
df = df.loc[df["label"] != "REMOVE"]
else:
df["label"] = df["mean_human_score"].map(lambda x: convert_label(x, False))
assert df["label"].isna().sum() == 0
df["label"] = df["label"].map({"not-entailed": 0, "entailed": 1})
# cast to dataset
features = Features({
"mean_human_score": Value(dtype="float32"),
"binary_label": Value(dtype="string"),
"sentence_id": Value(dtype="string"),
"adjective": Value(dtype="string"),
"noun": Value(dtype="string"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]),
})
ds[split] = Dataset.from_pandas(df, features=features)
ds = DatasetDict(ds)
ds.push_to_hub("add_one_rte", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
ds[i].to_pandas(),
ds[j].to_pandas(),
on=["premise", "hypothesis", "label"],
how="inner",
).shape[0],
)
#> dev - test: 0
#> dev - train: 0
#> test - train: 0
``` | pietrolesci/add_one_rte | [
"region:us"
] | 2022-04-22T12:56:41+00:00 | {} | 2022-04-25T07:48:42+00:00 |
c021bbdca0b644116166a56119e2adf49e575647 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Andrés Pitta (andres.pitta@un.org)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | AndresPitta/sg-reports_labeled | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"license:unknown",
"region:us"
] | 2022-04-22T13:52:01+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en-US"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "Gender language in the reports of the secretary general 2020-2021"} | 2022-10-25T09:08:57+00:00 |
49f76692fb17d5f51bfff93c80276ba700010005 | ## Overview
This dataset has been introduced by "Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework", Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme. IJCNLP, 2017. Original data available [here](https://github.com/decompositional-semantics-initiative/DNC/raw/master/inference_is_everything.zip).
## Dataset curation
The following processing is applied (a loading sketch is shown after this list):
- `hypothesis_grammatical` and `judgement_valid` columns are filled with `""` when empty
- all columns are stripped
- the `entailed` column is renamed `label`
- `label` column is encoded with the following mapping `{"not-entailed": 0, "entailed": 1}`
- columns `rating` and `good_word` are dropped from `fnplus` dataset
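A loading sketch for the three recast portions, assuming the repository id `pietrolesci/recast_white` used in the upload code below:

```python
from datasets import load_dataset

ds = load_dataset("pietrolesci/recast_white")

# Three portions pushed as splits: "fnplus", "sprl", and "dpr".
example = ds["dpr"][0]
# label is a ClassLabel: 0 = "not-entailed", 1 = "entailed"
print(example["text"], "->", example["hypothesis"], example["label"])
```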
## Code to generate the dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict
ds = {}
for name in ("fnplus", "sprl", "dpr"):
# read data
with open(f"<path to files>/{name}_data.txt", "r") as f:
data = f.read()
data = data.split("\n\n")
data = [lines.split("\n") for lines in data]
data = [dict([col.split(":", maxsplit=1) for col in line if len(col) > 0]) for line in data]
df = pd.DataFrame(data)
# fill empty hypothesis_grammatical and judgement_valid
df["hypothesis_grammatical"] = df["hypothesis_grammatical"].fillna("")
df["judgement_valid"] = df["judgement_valid"].fillna("")
# fix dtype
df["index"] = df["index"].astype(int)
# strip
for col in df.select_dtypes(object).columns:
df[col] = df[col].str.strip()
# rename columns
df = df.rename(columns={"entailed": "label"})
# encode labels
df["label"] = df["label"].map({"not-entailed": 0, "entailed": 1})
# cast to dataset
features = Features({
"provenance": Value(dtype="string", id=None),
"index": Value(dtype="int64", id=None),
"text": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"partof": Value(dtype="string", id=None),
"hypothesis_grammatical": Value(dtype="string", id=None),
"judgement_valid": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]),
})
# select common columns
df = df.loc[:, list(features.keys())]
ds[name] = Dataset.from_pandas(df, features=features)
ds = DatasetDict(ds)
ds.push_to_hub("recast_white", token="<token>")
``` | pietrolesci/recast_white | [
"region:us"
] | 2022-04-22T14:27:37+00:00 | {} | 2022-04-22T14:34:14+00:00 |
9603afe1e507fdc70f80ab3c532872fb217c7cc5 | This dataset is a subset of the original ELI5 dataset from Hugging Face. | Pavithree/askHistorians | [
"region:us"
] | 2022-04-22T15:14:54+00:00 | {} | 2022-04-22T15:22:10+00:00 |
9372640c3a19eeae1396f9137339a8081fe38caa | This dataset is derived from the ELI5 dataset available on Hugging Face. | Pavithree/askScience | [
"region:us"
] | 2022-04-22T15:39:35+00:00 | {} | 2022-04-22T15:45:27+00:00 |
27063178a7482239b710e3fd96a8d8eded299d1d |
# Dataset Card for dei_article_sentiment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Diversity, Equity, and Inclusion (DEI) related article title, content, URL, sentiment, and basis. Basis is a term I use to describe the underlying topic related to diversity; there are four at the moment: 1 = Gender, 2 = Race, 3 = Disability, and 4 = Other.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
- ID
- Title
- Content
- Basis
- URL
- Sentiment
### Data Splits
- train
- validate
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | deancgarcia/Diversity | [
"region:us"
] | 2022-04-22T15:55:24+00:00 | {} | 2022-12-08T00:16:35+00:00 |
765f4ff12812f047f92bd417ed64e5578436ebfe | # Dataset Card for [IU Ontology Trashed]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ntcuong777](https://github.com/ntcuong777) for adding this dataset. | ntcuong777/iuontology | [
"region:us"
] | 2022-04-23T03:02:40+00:00 | {} | 2022-04-23T13:49:22+00:00 |
ef89c8242e095980a51c2264b0439ef0920ff2b1 | VQGAN is great, but leaves artifacts that are especially visible around things like faces.
It'd be great to be able to train a model to fix ('devqganify') these flaws.
For this purpose, I've made this dataset, which contains 100k examples, each with
- A 512px image
- A smaller 256px version of the same image
- A reconstructed version, which is made by encoding the 256px image with VQGAN (f16, 1024 version from https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92, one of the ones from taming-transformers) and then decoding the result.
The idea is to train a model to go from the 256px vqgan output back to something as close to the original image as possible, or even to try and output an up-scaled 512px version for extra points.
Let me know what you come up with :)
Usage:
```python
from datasets import load_dataset
dataset = load_dataset('johnowhitaker/vqgan1024_reconstruction')
dataset['train'][0]['image_256'] # Original image
dataset['train'][0]['reconstruction_256'] # Reconstructed version
```
Approximate code used to prepare this data: https://colab.research.google.com/drive/1AXzlRMvAIE6krkpFwFnFr2c5SnOsygf-?usp=sharing (let me know if you hit issues)
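If you want to train a de-VQGAN-ify model on these pairs, a minimal preprocessing sketch could look like this. Field names follow the usage snippet above; `torchvision` is an assumption of this sketch, not a requirement of the dataset:

```python
from torchvision import transforms

to_tensor = transforms.ToTensor()

def to_pair(example):
    # Model input: the VQGAN reconstruction; target: the original image.
    x = to_tensor(example["reconstruction_256"])
    y = to_tensor(example["image_256"])
    return x, y
```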
I'll be making a similar dataset with other VQGAN variants and posting progress on devqganify models soon, feel free to get in touch for more info (@johnowhitaker) | johnowhitaker/vqgan1024_reconstruction | [
"region:us"
] | 2022-04-23T03:52:52+00:00 | {} | 2022-04-23T11:50:13+00:00 |
62371637e4c902138b1a813028c29d509b875084 | Neku/meme | [
"license:artistic-2.0",
"region:us"
] | 2022-04-23T05:37:43+00:00 | {"license": "artistic-2.0"} | 2022-04-23T05:37:43+00:00 |
|
45fcb031e0510483c13d10b6557aae26fc85df52 | This dataset is the subset of original eli5 dataset available in hugging face space | Pavithree/eli5_split | [
"region:us"
] | 2022-04-23T07:22:39+00:00 | {} | 2022-04-23T07:33:53+00:00 |
9bcb69f0dcc08b2097900c96c7f1332276aede6e | dnaveenr/cmu_mocap | [
"license:other",
"region:us"
] | 2022-04-23T09:33:36+00:00 | {"license": "other"} | 2022-04-24T10:33:25+00:00 |
|
d616736b70abaddf043ab517649e367b0d2bb20c | AliceTears/thanadol_sin | [
"region:us"
] | 2022-04-23T09:37:22+00:00 | {} | 2022-04-23T09:37:28+00:00 |
|
a4060f6c30fac71147c6f424fd6adb3b0b753f59 | Images from CC12M encoded with VQGAN f16 1024
Script to continue prep is included in the repo if you want more than the ~1.5M images I did here.
VQGAN model:
```
!curl -L 'https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/files/?p=%2Fckpts%2Flast.ckpt&dl=1' > vqgan_im1024.ckpt
!curl -L 'https://heibox.uni-heidelberg.de/d/8088892a516d4e3baf92/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1' > vqgan_im1024.yaml
```
Try it out: TODO | johnowhitaker/vqgan1024_encs_sf | [
"region:us"
] | 2022-04-23T15:07:38+00:00 | {} | 2022-04-23T15:22:37+00:00 |
44fe0b34f20ba09aa287148447873c1f3992e265 |
# MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/alexa/massive
- **Repository:** https://github.com/alexa/massive
- **Paper:** https://arxiv.org/abs/2204.08582
- **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1697/overview
- **Point of Contact:** [GitHub](https://github.com/alexa/massive/issues)
### Dataset Summary
MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
| Name | Lang | Utt/Lang | Domains | Intents | Slots |
|:-------------------------------------------------------------------------------:|:-------:|:--------------:|:-------:|:--------:|:------:|
| MASSIVE | 51 | 19,521 | 18 | 60 | 55 |
| SLURP (Bastianelli et al., 2020) | 1 | 16,521 | 18 | 60 | 55 |
| NLU Evaluation Data (Liu et al., 2019) | 1 | 25,716 | 18 | 54 | 56 |
| Airline Travel Information System (ATIS) (Price, 1990) | 1 | 5,871 | 1 | 26 | 129 |
| ATIS with Hindi and Turkish (Upadhyay et al., 2018) | 3 | 1,315-5,871 | 1 | 26 | 129 |
| MultiATIS++ (Xu et al., 2020) | 9 | 1,422-5,897 | 1 | 21-26 | 99-140 |
| Snips (Coucke et al., 2018) | 1 | 14,484 | - | 7 | 53 |
| Snips with French (Saade et al., 2019) | 2 | 4,818 | 2 | 14-15 | 11-12 |
| Task Oriented Parsing (TOP) (Gupta et al., 2018) | 1 | 44,873 | 2 | 25 | 36 |
| Multilingual Task-Oriented Semantic Parsing (MTOP) (Li et al., 2021) | 6 | 15,195-22,288 | 11 | 104-113 | 72-75 |
| Cross-Lingual Multilingual Task Oriented Dialog (Schuster et al., 2019) | 3 | 5,083-43,323 | 3 | 12 | 11 |
| Microsoft Dialog Challenge (Li et al., 2018) | 1 | 38,276 | 3 | 11 | 29 |
| Fluent Speech Commands (FSC) (Lugosch et al., 2019) | 1 | 30,043 | - | 31 | - |
| Chinese Audio-Textual Spoken Language Understanding (CATSLU) (Zhu et al., 2019) | 1 | 16,258 | 4 | - | 94 |
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `natural-language-understanding` (NLU):
- `intent-classification`
- `multi-class-classification`
- `natural-language-understanding`
### Languages
The corpus consists of parallel sentences in 51 languages:
- `Afrikaans - South Africa (af-ZA)`
- `Amharic - Ethiopia (am-ET)`
- `Arabic - Saudi Arabia (ar-SA)`
- `Azeri - Azerbaijan (az-AZ)`
- `Bengali - Bangladesh (bn-BD)`
- `Chinese - China (zh-CN)`
- `Chinese - Taiwan (zh-TW)`
- `Danish - Denmark (da-DK)`
- `German - Germany (de-DE)`
- `Greek - Greece (el-GR)`
- `English - United States (en-US)`
- `Spanish - Spain (es-ES)`
- `Farsi - Iran (fa-IR)`
- `Finnish - Finland (fi-FI)`
- `French - France (fr-FR)`
- `Hebrew - Israel (he-IL)`
- `Hungarian - Hungary (hu-HU)`
- `Armenian - Armenia (hy-AM)`
- `Indonesian - Indonesia (id-ID)`
- `Icelandic - Iceland (is-IS)`
- `Italian - Italy (it-IT)`
- `Japanese - Japan (ja-JP)`
- `Javanese - Indonesia (jv-ID)`
- `Georgian - Georgia (ka-GE)`
- `Khmer - Cambodia (km-KH)`
- `Korean - Korea (ko-KR)`
- `Latvian - Latvia (lv-LV)`
- `Mongolian - Mongolia (mn-MN)`
- `Malay - Malaysia (ms-MY)`
- `Burmese - Myanmar (my-MM)`
- `Norwegian - Norway (nb-NO)`
- `Dutch - Netherlands (nl-NL)`
- `Polish - Poland (pl-PL)`
- `Portuguese - Portugal (pt-PT)`
- `Romanian - Romania (ro-RO)`
- `Russian - Russia (ru-RU)`
- `Slovenian - Slovenia (sl-SL)`
- `Albanian - Albania (sq-AL)`
- `Swedish - Sweden (sv-SE)`
- `Swahili - Kenya (sw-KE)`
- `Hindi - India (hi-IN)`
- `Kannada - India (kn-IN)`
- `Malayalam - India (ml-IN)`
- `Tamil - India (ta-IN)`
- `Telugu - India (te-IN)`
- `Thai - Thailand (th-TH)`
- `Tagalog - Philippines (tl-PH)`
- `Turkish - Turkey (tr-TR)`
- `Urdu - Pakistan (ur-PK)`
- `Vietnamese - Vietnam (vi-VN)`
- `Welsh - United Kingdom (cy-GB)`
## Load the dataset with Hugging Face
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/MASSIVE", "en-US", split='train')
print(dataset)
print(dataset[0])
```
## Dataset Structure
### Data Instances
```json
{
"id": "1",
"locale": "fr-FR",
"partition": "train",
"scenario": 16,
"intent": 48,
"utt": "réveille-moi à neuf heures du matin le vendredi",
"annot_utt": "réveille-moi à [time : neuf heures du matin] le [date : vendredi]",
"tokens": [
"réveille-moi",
"à",
"neuf",
"heures",
"du",
"matin",
"le",
"vendredi"
],
"ner_tags": [0, 0, 71, 6, 6, 6, 0, 14],
"worker_id": "22",
"slot_method": {
"slot": ["time", "date"],
"method": ["translation", "translation"]
},
"judgments": {
"worker_id": ["11", "22", "0"],
"intent_score": [2, 1, 1],
"slots_score": [1, 1, 1],
"grammar_score": [3, 4, 4],
"spelling_score": [2, 2, 2],
"language_identification": ["target", "target", "target"]
}
}
```
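Note that `scenario` and `intent` are stored as integers, which suggests they are `ClassLabel` features; if so, they can be decoded back to strings with a sketch like this (the printed names are illustrative assumptions, not verified values):

```python
from datasets import load_dataset

ds = load_dataset("qanastek/MASSIVE", "fr-FR", split="train")
example = ds[0]

# ClassLabel features expose int2str for mapping integer ids back to names.
print(ds.features["scenario"].int2str(example["scenario"]))  # e.g. "alarm"
print(ds.features["intent"].int2str(example["intent"]))      # e.g. "alarm_set"
```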
### Data Fields (taken from the Alexa GitHub)
`id`: maps to the original ID in the [SLURP](https://github.com/pswietojanski/slurp) collection. Mapping back to the SLURP en-US utterance, this utterance served as the basis for this localization.
`locale`: is the language and country code according to ISO-639-1 and ISO-3166.
`partition`: is either `train`, `dev`, or `test`, according to the original split in [SLURP](https://github.com/pswietojanski/slurp).
`scenario`: is the general domain, aka "scenario" in SLURP terminology, of an utterance
`intent`: is the specific intent of an utterance within a domain formatted as `{scenario}_{intent}`
`utt`: the raw utterance text without annotations
`annot_utt`: the text from `utt` with slot annotations formatted as `[{label} : {entity}]` (a small parsing sketch is given below)
`worker_id`: The obfuscated worker ID from MTurk of the worker completing the localization of the utterance. Worker IDs are specific to a locale and do *not* map across locales.
`slot_method`: for each slot in the utterance, whether that slot was a `translation` (i.e., same expression just in the target language), `localization` (i.e., not the same expression but a different expression was chosen more suitable to the phrase in that locale), or `unchanged` (i.e., the original en-US slot value was copied over without modification).
`judgments`: Each judgment collected for the localized utterance has 6 keys. `worker_id` is the obfuscated worker ID from MTurk of the worker completing the judgment. Worker IDs are specific to a locale and do *not* map across locales, but *are* consistent across the localization tasks and the judgment tasks, e.g., judgment worker ID 32 in the example above may appear as the localization worker ID for the localization of a different de-DE utterance, in which case it would be the same worker.
```plain
intent_score : "Does the sentence match the intent?"
0: No
1: Yes
2: It is a reasonable interpretation of the goal
slots_score : "Do all these terms match the categories in square brackets?"
0: No
1: Yes
2: There are no words in square brackets (utterance without a slot)
grammar_score : "Read the sentence out loud. Ignore any spelling, punctuation, or capitalization errors. Does it sound natural?"
0: Completely unnatural (nonsensical, cannot be understood at all)
1: Severe errors (the meaning cannot be understood and doesn't sound natural in your language)
2: Some errors (the meaning can be understood but it doesn't sound natural in your language)
3: Good enough (easily understood and sounds almost natural in your language)
4: Perfect (sounds natural in your language)
spelling_score : "Are all words spelled correctly? Ignore any spelling variances that may be due to differences in dialect. Missing spaces should be marked as a spelling error."
0: There are more than 2 spelling errors
1: There are 1-2 spelling errors
2: All words are spelled correctly
language_identification : "The following sentence contains words in the following languages (check all that apply)"
1: target
2: english
3: other
4: target & english
5: target & other
6: english & other
7: target & english & other
```
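As an illustration (not part of the official tooling), the `[{label} : {entity}]` annotation format in `annot_utt` can be parsed with a small regex sketch:

```python
import re

annot_utt = "réveille-moi à [time : neuf heures du matin] le [date : vendredi]"

# Extract (label, entity) pairs from "[label : entity]" spans.
slots = re.findall(r"\[\s*([^:\]]+?)\s*:\s*([^\]]+?)\s*\]", annot_utt)
print(slots)  # [('time', 'neuf heures du matin'), ('date', 'vendredi')]
```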
### Data Splits
|Language|Train|Dev|Test|
|:---:|:---:|:---:|:---:|
|af-ZA|11514|2033|2974|
|am-ET|11514|2033|2974|
|ar-SA|11514|2033|2974|
|az-AZ|11514|2033|2974|
|bn-BD|11514|2033|2974|
|cy-GB|11514|2033|2974|
|da-DK|11514|2033|2974|
|de-DE|11514|2033|2974|
|el-GR|11514|2033|2974|
|en-US|11514|2033|2974|
|es-ES|11514|2033|2974|
|fa-IR|11514|2033|2974|
|fi-FI|11514|2033|2974|
|fr-FR|11514|2033|2974|
|he-IL|11514|2033|2974|
|hi-IN|11514|2033|2974|
|hu-HU|11514|2033|2974|
|hy-AM|11514|2033|2974|
|id-ID|11514|2033|2974|
|is-IS|11514|2033|2974|
|it-IT|11514|2033|2974|
|ja-JP|11514|2033|2974|
|jv-ID|11514|2033|2974|
|ka-GE|11514|2033|2974|
|km-KH|11514|2033|2974|
|kn-IN|11514|2033|2974|
|ko-KR|11514|2033|2974|
|lv-LV|11514|2033|2974|
|ml-IN|11514|2033|2974|
|mn-MN|11514|2033|2974|
|ms-MY|11514|2033|2974|
|my-MM|11514|2033|2974|
|nb-NO|11514|2033|2974|
|nl-NL|11514|2033|2974|
|pl-PL|11514|2033|2974|
|pt-PT|11514|2033|2974|
|ro-RO|11514|2033|2974|
|ru-RU|11514|2033|2974|
|sl-SL|11514|2033|2974|
|sq-AL|11514|2033|2974|
|sv-SE|11514|2033|2974|
|sw-KE|11514|2033|2974|
|ta-IN|11514|2033|2974|
|te-IN|11514|2033|2974|
|th-TH|11514|2033|2974|
|tl-PH|11514|2033|2974|
|tr-TR|11514|2033|2974|
|ur-PK|11514|2033|2974|
|vi-VN|11514|2033|2974|
|zh-CN|11514|2033|2974|
|zh-TW|11514|2033|2974|
## Dataset Creation
### Source Data
#### Who are the source language producers?
The corpus has been produced and uploaded by Amazon Alexa.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__MASSIVE__: Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan.
__SLURP__: Bastianelli, Emanuele and Vanzo, Andrea and Swietojanski, Pawel and Rieser, Verena.
__Hugging Face__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
Copyright Amazon.com Inc. or its affiliates.
Attribution 4.0 International
=======================================================================
Creative Commons Corporation ("Creative Commons") is not a law firm and
does not provide legal services or legal advice. Distribution of
Creative Commons public licenses does not create a lawyer-client or
other relationship. Creative Commons makes its licenses and related
information available on an "as-is" basis. Creative Commons gives no
warranties regarding its licenses, any material licensed under their
terms and conditions, or any related information. Creative Commons
disclaims all liability for damages resulting from their use to the
fullest extent possible.
Using Creative Commons Public Licenses
Creative Commons public licenses provide a standard set of terms and
conditions that creators and other rights holders may use to share
original works of authorship and other material subject to copyright
and certain other rights specified in the public license below. The
following considerations are for informational purposes only, are not
exhaustive, and do not form part of our licenses.
Considerations for licensors: Our public licenses are
intended for use by those authorized to give the public
permission to use material in ways otherwise restricted by
copyright and certain other rights. Our licenses are
irrevocable. Licensors should read and understand the terms
and conditions of the license they choose before applying it.
Licensors should also secure all rights necessary before
applying our licenses so that the public can reuse the
material as expected. Licensors should clearly mark any
material not subject to the license. This includes other CC-
licensed material, or material used under an exception or
limitation to copyright. More considerations for licensors:
wiki.creativecommons.org/Considerations_for_licensors
Considerations for the public: By using one of our public
licenses, a licensor grants the public permission to use the
licensed material under specified terms and conditions. If
the licensor's permission is not necessary for any reason--for
example, because of any applicable exception or limitation to
copyright--then that use is not regulated by the license. Our
licenses grant only permissions under copyright and certain
other rights that a licensor has authority to grant. Use of
the licensed material may still be restricted for other
reasons, including because others have copyright or other
rights in the material. A licensor may make special requests,
such as asking that all changes be marked or described.
Although not required by our licenses, you are encouraged to
respect those requests where reasonable. More considerations
for the public:
wiki.creativecommons.org/Considerations_for_licensees
=======================================================================
Creative Commons Attribution 4.0 International Public License
By exercising the Licensed Rights (defined below), You accept and agree
to be bound by the terms and conditions of this Creative Commons
Attribution 4.0 International Public License ("Public License"). To the
extent this Public License may be interpreted as a contract, You are
granted the Licensed Rights in consideration of Your acceptance of
these terms and conditions, and the Licensor grants You such rights in
consideration of benefits the Licensor receives from making the
Licensed Material available under these terms and conditions.
Section 1 -- Definitions.
a. Adapted Material means material subject to Copyright and Similar
Rights that is derived from or based upon the Licensed Material
and in which the Licensed Material is translated, altered,
arranged, transformed, or otherwise modified in a manner requiring
permission under the Copyright and Similar Rights held by the
Licensor. For purposes of this Public License, where the Licensed
Material is a musical work, performance, or sound recording,
Adapted Material is always produced where the Licensed Material is
synched in timed relation with a moving image.
b. Adapter's License means the license You apply to Your Copyright
and Similar Rights in Your contributions to Adapted Material in
accordance with the terms and conditions of this Public License.
c. Copyright and Similar Rights means copyright and/or similar rights
closely related to copyright including, without limitation,
performance, broadcast, sound recording, and Sui Generis Database
Rights, without regard to how the rights are labeled or
categorized. For purposes of this Public License, the rights
specified in Section 2(b)(1)-(2) are not Copyright and Similar
Rights.
d. Effective Technological Measures means those measures that, in the
absence of proper authority, may not be circumvented under laws
fulfilling obligations under Article 11 of the WIPO Copyright
Treaty adopted on December 20, 1996, and/or similar international
agreements.
e. Exceptions and Limitations means fair use, fair dealing, and/or
any other exception or limitation to Copyright and Similar Rights
that applies to Your use of the Licensed Material.
f. Licensed Material means the artistic or literary work, database,
or other material to which the Licensor applied this Public
License.
g. Licensed Rights means the rights granted to You subject to the
terms and conditions of this Public License, which are limited to
all Copyright and Similar Rights that apply to Your use of the
Licensed Material and that the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights
under this Public License.
i. Share means to provide material to the public by any means or
process that requires permission under the Licensed Rights, such
as reproduction, public display, public performance, distribution,
dissemination, communication, or importation, and to make material
available to the public including in ways that members of the
public may access the material from a place and at a time
individually chosen by them.
j. Sui Generis Database Rights means rights other than copyright
resulting from Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases,
as amended and/or succeeded, as well as other essentially
equivalent rights anywhere in the world.
k. You means the individual or entity exercising the Licensed Rights
under this Public License. Your has a corresponding meaning.
Section 2 -- Scope.
a. License grant.
1. Subject to the terms and conditions of this Public License,
the Licensor hereby grants You a worldwide, royalty-free,
non-sublicensable, non-exclusive, irrevocable license to
exercise the Licensed Rights in the Licensed Material to:
a. reproduce and Share the Licensed Material, in whole or
in part; and
b. produce, reproduce, and Share Adapted Material.
2. Exceptions and Limitations. For the avoidance of doubt, where
Exceptions and Limitations apply to Your use, this Public
License does not apply, and You do not need to comply with
its terms and conditions.
3. Term. The term of this Public License is specified in Section
6(a).
4. Media and formats; technical modifications allowed. The
Licensor authorizes You to exercise the Licensed Rights in
all media and formats whether now known or hereafter created,
and to make technical modifications necessary to do so. The
Licensor waives and/or agrees not to assert any right or
authority to forbid You from making technical modifications
necessary to exercise the Licensed Rights, including
technical modifications necessary to circumvent Effective
Technological Measures. For purposes of this Public License,
simply making modifications authorized by this Section 2(a)
(4) never produces Adapted Material.
5. Downstream recipients.
a. Offer from the Licensor -- Licensed Material. Every
recipient of the Licensed Material automatically
receives an offer from the Licensor to exercise the
Licensed Rights under the terms and conditions of this
Public License.
b. No downstream restrictions. You may not offer or impose
any additional or different terms or conditions on, or
apply any Effective Technological Measures to, the
Licensed Material if doing so restricts exercise of the
Licensed Rights by any recipient of the Licensed
Material.
6. No endorsement. Nothing in this Public License constitutes or
may be construed as permission to assert or imply that You
are, or that Your use of the Licensed Material is, connected
with, or sponsored, endorsed, or granted official status by,
the Licensor or others designated to receive attribution as
provided in Section 3(a)(1)(A)(i).
b. Other rights.
1. Moral rights, such as the right of integrity, are not
licensed under this Public License, nor are publicity,
privacy, and/or other similar personality rights; however, to
the extent possible, the Licensor waives and/or agrees not to
assert any such rights held by the Licensor to the limited
extent necessary to allow You to exercise the Licensed
Rights, but not otherwise.
2. Patent and trademark rights are not licensed under this
Public License.
3. To the extent possible, the Licensor waives any right to
collect royalties from You for the exercise of the Licensed
Rights, whether directly or through a collecting society
under any voluntary or waivable statutory or compulsory
licensing scheme. In all other cases the Licensor expressly
reserves any right to collect such royalties.
Section 3 -- License Conditions.
Your exercise of the Licensed Rights is expressly made subject to the
following conditions.
a. Attribution.
1. If You Share the Licensed Material (including in modified
form), You must:
a. retain the following if it is supplied by the Licensor
with the Licensed Material:
i. identification of the creator(s) of the Licensed
Material and any others designated to receive
attribution, in any reasonable manner requested by
the Licensor (including by pseudonym if
designated);
ii. a copyright notice;
iii. a notice that refers to this Public License;
iv. a notice that refers to the disclaimer of
warranties;
v. a URI or hyperlink to the Licensed Material to the
extent reasonably practicable;
b. indicate if You modified the Licensed Material and
retain an indication of any previous modifications; and
c. indicate the Licensed Material is licensed under this
Public License, and include the text of, or the URI or
hyperlink to, this Public License.
2. You may satisfy the conditions in Section 3(a)(1) in any
reasonable manner based on the medium, means, and context in
which You Share the Licensed Material. For example, it may be
reasonable to satisfy the conditions by providing a URI or
hyperlink to a resource that includes the required
information.
3. If requested by the Licensor, You must remove any of the
information required by Section 3(a)(1)(A) to the extent
reasonably practicable.
4. If You Share Adapted Material You produce, the Adapter's
License You apply must not prevent recipients of the Adapted
Material from complying with this Public License.
Section 4 -- Sui Generis Database Rights.
Where the Licensed Rights include Sui Generis Database Rights that
apply to Your use of the Licensed Material:
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
to extract, reuse, reproduce, and Share all or a substantial
portion of the contents of the database;
b. if You include all or a substantial portion of the database
contents in a database in which You have Sui Generis Database
Rights, then the database in which You have Sui Generis Database
Rights (but not its individual contents) is Adapted Material; and
c. You must comply with the conditions in Section 3(a) if You Share
all or a substantial portion of the contents of the database.
For the avoidance of doubt, this Section 4 supplements and does not
replace Your obligations under this Public License where the Licensed
Rights include other Copyright and Similar Rights.
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
c. The disclaimer of warranties and limitation of liability provided
above shall be interpreted in a manner that, to the extent
possible, most closely approximates an absolute disclaimer and
waiver of all liability.
Section 6 -- Term and Termination.
a. This Public License applies for the term of the Copyright and
Similar Rights licensed here. However, if You fail to comply with
this Public License, then Your rights under this Public License
terminate automatically.
b. Where Your right to use the Licensed Material has terminated under
Section 6(a), it reinstates:
1. automatically as of the date the violation is cured, provided
it is cured within 30 days of Your discovery of the
violation; or
2. upon express reinstatement by the Licensor.
For the avoidance of doubt, this Section 6(b) does not affect any
right the Licensor may have to seek remedies for Your violations
of this Public License.
c. For the avoidance of doubt, the Licensor may also offer the
Licensed Material under separate terms or conditions or stop
distributing the Licensed Material at any time; however, doing so
will not terminate this Public License.
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
License.
Section 7 -- Other Terms and Conditions.
a. The Licensor shall not be bound by any additional or different
terms or conditions communicated by You unless expressly agreed.
b. Any arrangements, understandings, or agreements regarding the
Licensed Material not stated herein are separate from and
independent of the terms and conditions of this Public License.
Section 8 -- Interpretation.
a. For the avoidance of doubt, this Public License does not, and
shall not be interpreted to, reduce, limit, restrict, or impose
conditions on any use of the Licensed Material that could lawfully
be made without permission under this Public License.
b. To the extent possible, if any provision of this Public License is
deemed unenforceable, it shall be automatically reformed to the
minimum extent necessary to make it enforceable. If the provision
cannot be reformed, it shall be severed from this Public License
without affecting the enforceability of the remaining terms and
conditions.
c. No term or condition of this Public License will be waived and no
failure to comply consented to unless expressly agreed to by the
Licensor.
d. Nothing in this Public License constitutes or may be interpreted
as a limitation upon, or waiver of, any privileges and immunities
that apply to the Licensor or You, including from the legal
processes of any jurisdiction or authority.
=======================================================================
Creative Commons is not a party to its public licenses.
Notwithstanding, Creative Commons may elect to apply one of its public
licenses to material it publishes and in those instances will be
considered the “Licensor.” The text of the Creative Commons public
licenses is dedicated to the public domain under the CC0 Public Domain
Dedication. Except for the limited purpose of indicating that material
is shared under a Creative Commons public license or as otherwise
permitted by the Creative Commons policies published at
creativecommons.org/policies, Creative Commons does not authorize the
use of the trademark "Creative Commons" or any other trademark or logo
of Creative Commons without its prior written consent including,
without limitation, in connection with any unauthorized modifications
to any of its public licenses or any other arrangements,
understandings, or agreements concerning use of licensed material. For
the avoidance of doubt, this paragraph does not form part of the public
licenses.
Creative Commons may be contacted at creativecommons.org.
```
### Citation Information
Please cite the following paper when using this dataset.
```latex
@misc{fitzgerald2022massive,
title={MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages},
author={Jack FitzGerald and Christopher Hench and Charith Peris and Scott Mackie and Kay Rottmann and Ana Sanchez and Aaron Nash and Liam Urbach and Vishesh Kakarala and Richa Singh and Swetha Ranganath and Laurie Crist and Misha Britan and Wouter Leeuwis and Gokhan Tur and Prem Natarajan},
year={2022},
eprint={2204.08582},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{bastianelli-etal-2020-slurp,
title = "{SLURP}: A Spoken Language Understanding Resource Package",
author = "Bastianelli, Emanuele and
Vanzo, Andrea and
Swietojanski, Pawel and
Rieser, Verena",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.588",
doi = "10.18653/v1/2020.emnlp-main.588",
pages = "7252--7262",
abstract = "Spoken Language Understanding infers semantic meaning directly from audio data, and thus promises to reduce error propagation and misunderstandings in end-user applications. However, publicly available SLU resources are limited. In this paper, we release SLURP, a new SLU package containing the following: (1) A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets; (2) Competitive baselines based on state-of-the-art NLU and ASR systems; (3) A new transparent metric for entity labelling which enables a detailed error analysis for identifying potential areas of improvement. SLURP is available at https://github.com/pswietojanski/slurp."
}
```
| qanastek/MASSIVE | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"task_ids:named-entity-recognition",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:af",
"language:am",
"language:ar",
"language:az",
"language:bn",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:fa",
"language:fi",
"language:fr",
"language:he",
"language:hi",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:km",
"language:kn",
"language:ko",
"language:lv",
"language:ml",
"language:mn",
"language:ms",
"language:my",
"language:nb",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sl",
"language:sq",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tl",
"language:tr",
"language:ur",
"language:vi",
"language:zh",
"arxiv:2204.08582",
"region:us"
] | 2022-04-23T15:23:09+00:00 | {"annotations_creators": ["machine-generated", "expert-generated"], "language_creators": ["found"], "language": ["af", "am", "ar", "az", "bn", "cy", "da", "de", "el", "en", "es", "fa", "fi", "fr", "he", "hi", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "km", "kn", "ko", "lv", "ml", "mn", "ms", "my", "nb", "nl", "pl", "pt", "ro", "ru", "sl", "sq", "sv", "sw", "ta", "te", "th", "tl", "tr", "ur", "vi", "zh", "zh"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification", "multi-class-classification", "named-entity-recognition"], "pretty_name": "MASSIVE", "language_bcp47": ["af-ZA", "am-ET", "ar-SA", "az-AZ", "bn-BD", "cy-GB", "da-DK", "de-DE", "el-GR", "en-US", "es-ES", "fa-IR", "fi-FI", "fr-FR", "he-IL", "hi-IN", "hu-HU", "hy-AM", "id-ID", "is-IS", "it-IT", "ja-JP", "jv-ID", "ka-GE", "km-KH", "kn-IN", "ko-KR", "lv-LV", "ml-IN", "mn-MN", "ms-MY", "my-MM", "nb-NO", "nl-NL", "pl-PL", "pt-PT", "ro-RO", "ru-RU", "sl-SL", "sq-AL", "sv-SE", "sw-KE", "ta-IN", "te-IN", "th-TH", "tl-PH", "tr-TR", "ur-PK", "vi-VN", "zh-CN", "zh-TW"]} | 2022-12-23T21:28:08+00:00 |
5ccd054e794667994e2fd3b6a5ff01bed70f9acf | VQGAN is great, but leaves artifacts that are especially visible around things like faces.
It'd be great to be able to train a model to fix ('devqganify') these flaws.
For this purpose, I've made this dataset, which contains >100k examples, each with
- A 512px image
- A smaller 256px version of the same image
- A reconstructed version, which is made by encoding the 256px image with VQGAN (f16, 16384 imagenet version from https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/) and then decoding the result.
The idea is to train a model to go from the 256px vqgan output back to something as close to the original image as possible, or even to try and output an up-scaled 512px version for extra points.
Let me know what you come up with :)
Usage:
```python
from datasets import load_dataset
dataset = load_dataset('johnowhitaker/vqgan16k_reconstruction')
dataset['train'][0]['image_256'] # Original image
dataset['train'][0]['reconstruction_256'] # Reconstructed version
```
Approximate code used to prepare this data (vqgan model was changed for this version): https://colab.research.google.com/drive/1AXzlRMvAIE6krkpFwFnFr2c5SnOsygf-?usp=sharing (let me know if you hit issues)
The VQGAN model used for this version: https://heibox.uni-heidelberg.de/d/a7530b09fed84f80a887/
See also: https://huggingface.co/datasets/johnowhitaker/vqgan1024_reconstruction (same idea but vqgan with smaller vocab size of 1024) | johnowhitaker/vqgan16k_reconstruction | [
"region:us"
] | 2022-04-23T17:00:28+00:00 | {} | 2022-04-24T05:13:26+00:00 |
ebe02645e5511e32c87c79746a75dc2d45bae062 | # Dataset Card for [Kaggle MNLI]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/c/multinli-matched-open-evaluation
- **Repository:** chrishuber/roberta-retrained-mlni
- **Paper:** Inference Detection in NLP Using the MultiNLI and SNLI Datasets
- **Leaderboard:** 8
- **Point of Contact:** chrish@sfsu.edu
### Dataset Summary
[These are the datasets posted to Kaggle for an inference-detection NLP competition, moved here for use with PyTorch.]
### Supported Tasks and Leaderboards
Provides train and validation data for sentence pairs with inference labels.
[https://www.kaggle.com/competitions/multinli-matched-open-evaluation/leaderboard]
[https://www.kaggle.com/competitions/multinli-mismatched-open-evaluation/leaderboard]
### Languages
[English. The files are distributed in JSON format and were processed with Python.]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[Reposted from https://www.kaggle.com/c/multinli-matched-open-evaluation and https://www.kaggle.com/c/multinli-mismatched-open-evaluation]
### Source Data
#### Initial Data Collection and Normalization
[Please see the article at https://arxiv.org/abs/1704.05426 which discusses the creation of the MNLI dataset.]
#### Who are the source language producers?
[Please see the article at https://arxiv.org/abs/1704.05426 which discusses the creation of the MNLI dataset.]
### Annotations
#### Annotation process
[Crowdsourcing using Amazon Mechanical Turk.]
#### Who are the annotators?
[Amazon Mechanical Turk workers.]
### Personal and Sensitive Information
[None.]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Kaggle]
### Licensing Information
[More Information Needed]
### Citation Information
[https://www.kaggle.com/c/multinli-matched-open-evaluation]
[https://www.kaggle.com/c/multinli-mismatched-open-evaluation]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | chrishuber/kaggle_mnli | [
"arxiv:1704.05426",
"region:us"
] | 2022-04-23T17:16:05+00:00 | {} | 2022-04-23T18:19:52+00:00 |
ac3f65840a512ce745231e9d6339c2bc83e61582 |
## Dataset Description
- **Homepage:** None
- **Repository:** None
- **Paper:** None
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
| d0r1h/Shlokam | [
"language_creators:found",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:sn",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2022-04-24T08:50:02+00:00 | {"annotations_creators": "found", "language_creators": ["found"], "language": ["sn", "en"], "license": "cc-by-3.0", "multilinguality": ["translation"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "pretty_name": "Shlokam"} | 2022-10-25T09:09:04+00:00 |
7a414e80725eac766f2602676dc8b39f80b061e4 |
## Dataset Summary
FaithDial is a faithful knowledge-grounded dialogue benchmark, composed of **50,761** turns spanning **5649** conversations. It was curated through Amazon Mechanical Turk by asking annotators to amend hallucinated utterances in [Wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) (WoW). In our dialogue setting, we simulate interactions between two speakers: **an information seeker** and **a bot wizard**. The seeker has a large degree of freedom as opposed to the wizard bot which is more restricted on what it can communicate. In fact, it must abide by the following rules:
- **First**, it should be truthful by providing information that is attributable to the source knowledge *K*.
- **Second**, it should provide information conversationally, i.e., use naturalistic phrasing of *K*, support follow-on discussion with questions, and prompt user's opinions.
- **Third**, it should acknowledge its ignorance of the answer in those cases where *K* does not include it while still moving the conversation forward using *K*.
## Dataset Description
- **Homepage:** [FaithDial](https://mcgill-nlp.github.io/FaithDial/)
- **Repository:** [GitHub](https://github.com/McGill-NLP/FaithDial)
- **Point of Contact:** [Nouha Dziri](mailto:dziri@ualberta.ca)
## Language
English
## Data Instance
An example of 'train' looks as follows:
```text
[
{
"utterances": [
... // prior utterances,
{
"history": [
"Have you ever been to a concert? They're so fun!",
"No I cannot as a bot. However, have you been to Madonna's? Her 10th concert was used to help her 13th album called \"Rebel Heart\".",
"Yeah I've heard of it but never went or what it was for. Can you tell me more about it?"
],
"speaker": "Wizard",
"knowledge": "It began on September 9, 2015, in Montreal, Canada, at the Bell Centre and concluded on March 20, 2016, in Sydney, Australia at Allphones Arena.",
"original_response": "It started in September of 2015 and ran all the way through March of 2016. Can you imagine being on the road that long?",
"response": "Sure. The concert started in September 9th of 2015 at Montreal, Canada. It continued till 20th of March of 2016, where it ended at Sydney, Australia.",
"BEGIN": [
"Hallucination",
"Entailment"
],
"VRM": [
"Disclosure",
"Question"
]
},
... // more utterances
]
},
... // more dialogues
]
```
If the `original_response` is empty, it means that the response is faithful to the source and we consider it as a FaithDial response. Faithful responses in WoW are also edited slightly if they are found to have some grammatical issues or typos.
## Data Fields
- `history`: `List[string]`. The dialogue history.
- `knowledge`: `string`. The source knowledge on which the bot wizard should ground its response.
- `speaker`: `string`. The current speaker.
- `original response`: `string`. The WoW original response before editing it.
- `response`: `string`. The new Wizard response.
- `BEGIN`: `List[string]`. The BEGIN labels for the Wizard response.
- `VRM`: `List[string]`. The VRM labels for the wizard response.
## Data Splits
- `Train`: 36809 turns
- `Valid`: 6851 turns
- `Test`: 7101 turns
`Valid` includes both the `seen` and the `unseen` data splits from WoW. The same applies to `Test`. We also include those splits for FaithDial valid and test data.
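A minimal loading sketch with the `datasets` library (assuming the Hub ID `McGill-NLP/FaithDial` and the default configuration):

```python
from datasets import load_dataset

faithdial = load_dataset("McGill-NLP/FaithDial")
print(faithdial)               # splits and sizes
print(faithdial["train"][0])   # first training example
```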
## Annotations
Following the guidelines for ethical crowdsourcing outlined in [Sheehan. 2018](https://www.tandfonline.com/doi/abs/10.1080/03637751.2017.1342043),
we hire Amazon Mechanical Turk (AMT) workers to edit utterances in WoW dialogues that were found to exhibit unfaithful responses. To ensure clarity in the task definition, we provided detailed examples for our terminology. Moreover, we performed several staging rounds over the course of several months.
# Who are the annotators?
To be eligible for the task, workers have to be located in the United States or Canada and have to successfully answer 20 questions as part of a qualification test. Before launching the main annotation task, we perform a small pilot round (60 HITs) to check the performance of the workers. We email workers who commit errors, providing them with examples of how to fix their mistakes in future HITs.
## Personal and Sensitive Information
Seeker utterances in FaithDial may contain personal and sensitive information.
## Social Impact of Dataset
In recent years, the conversational AI market has seen a proliferation of applications powered by large pre-trained LMs, spanning a broad range of domains such as customer support, education, e-commerce, health, and entertainment. Ensuring that these systems are trustworthy is key to deploying them safely at large scale in real-world applications, especially in high-stakes domains. FaithDial holds promise to encourage faithfulness in information-seeking dialogue and to make virtual assistants both safer and more reliable.
## Licensing Information
MIT
## Citation Information
```bibtex
@article{dziri2022faithdial,
title={FaithDial: A Faithful Benchmark for Information-Seeking Dialogue},
author={Dziri, Nouha and Kamalloo, Ehsan and Milton, Sivan and Zaiane, Osmar and Yu, Mo and Ponti, Edoardo and Reddy, Siva},
journal={arXiv preprint, arXiv:2204.10757},
year={2022},
url={https://arxiv.org/abs/2204.10757}
}
```
| McGill-NLP/FaithDial | [
"task_categories:conversational",
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"language:en",
"license:mit",
"faithful-dialogue-modeling",
"trustworthy-dialogue-modeling",
"arxiv:2204.10757",
"region:us"
] | 2022-04-24T22:10:52+00:00 | {"annotations_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "task_categories": ["conversational", "text-generation"], "task_ids": ["dialogue-modeling"], "pretty_name": "A Faithful Benchmark for Information-Seeking Dialogue", "tags": ["faithful-dialogue-modeling", "trustworthy-dialogue-modeling"]} | 2023-02-05T04:09:45+00:00 |
5af3b4f0df36436a071954af1d499b9753c0f27b |
# TAU Spatial Room Impulse Response Database (TAU-SRIR DB)
## Important
**This is a copy from the Zenodo Original one**
## Description
[Audio Research Group / Tampere University](https://webpages.tuni.fi/arg/)
AUTHORS
**Tampere University**
- Archontis Politis ([contact](mailto:archontis.politis@tuni.fi), [profile](https://scholar.google.fi/citations?user=DuCqB3sAAAAJ&hl=en))
- Sharath Adavanne ([contact](mailto:sharath.adavanne@tuni.fi), [profile](https://www.aane.in))
- Tuomas Virtanen ([contact](mailto:tuomas.virtanen@tuni.fi), [profile](https://homepages.tuni.fi/tuomas.virtanen/))
**Data Collection 2019-2020**
- Archontis Politis
- Aapo Hakala
- Ali Gohar
**Data Collection 2017-2018**
- Sharath Adavanne
- Aapo Hakala
- Eemi Fagerlund
- Aino Koskimies
The **TAU Spatial Room Impulse Response Database (TAU-SRIR DB)** database contains spatial room impulse responses (SRIRs) captured in various spaces of Tampere University (TAU), Finland, for a fixed receiver position and multiple source positions per room, along with separate recordings of spatial ambient noise captured at the same recording point. The dataset is intended for emulation of spatial multichannel recordings for evaluation and/or training of multichannel processing algorithms in realistic reverberant conditions and over multiple rooms. The major distinct properties of the database compared to other databases of room impulse responses are:
- Capturing in a high resolution multichannel format (32 channels) from which multiple more limited application-specific formats can be derived (e.g. tetrahedral array, circular array, first-order Ambisonics, higher-order Ambisonics, binaural).
- Extraction of densely spaced SRIRs along measurement trajectories, allowing emulation of moving source scenarios.
- Multiple source distances, azimuths, and elevations from the receiver per room, allowing emulation of complex configurations for multi-source methods.
- Multiple rooms, allowing evaluation of methods at various acoustic conditions, and training of methods with the aim of generalization on different rooms.
The RIRs were collected by staff of TAU between 12/2017 - 06/2018, and between 11/2019 - 1/2020. The data collection received funding from the European Research Council, grant agreement [637422 EVERYSOUND](https://cordis.europa.eu/project/id/637422).
[](https://erc.europa.eu/)
> **NOTE**: This database is a work-in-progress. We intend to publish additional rooms, additional formats, and potentially higher-fidelity versions of the captured responses in the near future, as new versions of the database in this repository.
## Report and reference
A compact description of the dataset, recording setup, recording procedure, and extraction can be found in:
>Politis., Archontis, Adavanne, Sharath, & Virtanen, Tuomas (2020). **A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection**. In _Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020)_, Tokyo, Japan.
available [here](https://dcase.community/documents/workshop2020/proceedings/DCASE2020Workshop_Politis_88.pdf). A more detailed report specifically focusing on the dataset collection and properties will follow.
## Aim
The dataset can be used for generating multichannel or monophonic mixtures for testing or training of methods under realistic reverberation conditions, related to e.g. multichannel speech enhancement, acoustic scene analysis, and machine listening, among others. It is especially suitable for the following application scenarios:
- monophonic and multichannel reverberant single- or multi-source speech in multi-room reverberant conditions,
- monophonic and multichannel polyphonic sound events in multi-room reverberant conditions,
- single-source and multi-source localization in multi-room reverberant conditions, in static or dynamic scenarios,
- single-source and multi-source tracking in multi-room reverberant conditions, in static or dynamic scenarios,
- sound event localization and detection in multi-room reverberant conditions, in static or dynamic scenarios.
## Specifications
The SRIRs were captured using an [Eigenmike](https://mhacoustics.com/products) spherical microphone array. A [Genelec G Three loudspeaker](https://www.genelec.com/g-three) was used to playback a maximum length sequence (MLS) around the Eigenmike. The SRIRs were obtained in the STFT domain using a least-squares regression between the known measurement signal (MLS) and far-field recording independently at each frequency. In this version of the dataset the SRIRs and ambient noise are downsampled to 24kHz for compactness.
The currently published SRIR set was recorded at nine different indoor locations inside the Tampere University campus at Hervanta, Finland. Additionally, 30 minutes of ambient noise recordings were collected at the same locations with the IR recording setup unchanged. SRIR directions and distances differ with the room. Possible azimuths span the whole range of $\phi\in[-180,180)$, while the elevations span approximately a range between $\theta\in[-45,45]$ degrees. The currently shared measured spaces are as follows:
1. Large open space in underground bomb shelter, with plastic-coated floor and rock walls. Ventilation noise.
2. Large open gym space. Ambience of people using weights and gym equipment in adjacent rooms.
3. Small classroom (PB132) with group work tables and carpet flooring. Ventilation noise.
4. Meeting room (PC226) with hard floor and partially glass walls. Ventilation noise.
5. Lecture hall (SA203) with inclined floor and rows of desks. Ventilation noise.
6. Small classroom (SC203) with group work tables and carpet flooring. Ventilation noise.
7. Large classroom (SE203) with hard floor and rows of desks. Ventilation noise.
8. Lecture hall (TB103) with inclined floor and rows of desks. Ventilation noise.
9. Meeting room (TC352) with hard floor and partially glass walls. Ventilation noise.
The measurement trajectories were organized in groups, with each group being specified by a circular or linear trace at the floor at a certain distance (range) from the z-axis of the microphone. For circular trajectories two ranges were measured, a _close_ and a _far_ one, except room TC352, where the same range was measured twice, but with different furniture configuration and open or closed doors. For linear trajectories also two ranges were measured, _close_ and _far_, but with linear paths at either side of the array, resulting in 4 unique trajectory groups, with the exception of room SA203 where 3 ranges were measured, resulting in 6 trajectory groups. Linear trajectory groups are always parallel to each other, in the same room.
Each trajectory group had multiple measurement trajectories, following the same floor path, but with the source at different heights.
The SRIRs are extracted from the noise recordings of the slowly moving source across those trajectories, at an angular spacing of approximately 1 degree as seen from the microphone. This extraction scheme, rather than extracting SRIRs at equally spaced points along the path (e.g. every 20 cm), was found more practical for synthesis purposes, as it makes it easier to emulate moving sources at an approximately constant angular speed.
The following table summarizes the above properties for the currently available rooms:
| | Room name | Room type | Traj. type | # ranges | # trajectory groups | # heights/group | # trajectories (total) | # RIRs/DOAs |
|---|--------------------------|----------------------------|------------|-------------|-----------------------|---------------------|------------------------|-------------|
| 1 | Bomb shelter | Complex/semi-open | Circular | 2 | 2 | 9 | 18 | 6480 |
| 2 | Gym | Rectangular/large | Circular | 2 | 2 | 9 | 18 | 6480 |
| 3 | PB132 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 4 | PC226 Meeting room | Rectangular/small | Circular | 2 | 2 | 9 | 18 | 6480 |
| 5 | SA203 Lecture hall | Trapezoidal/large | Linear | 3 | 6 | 3 | 18 | 1594 |
| 6 | SC203 Classroom | Rectangular/medium | Linear | 2 | 4 | 5 | 20 | 1592 |
| 7 | SE203 Classroom | Rectangular/large | Linear | 2 | 4 | 4 | 16 | 1760 |
| 8 | TB103 Classroom | Trapezoidal/large | Linear | 2 | 4 | 3 | 12 | 1184 |
| 9 | TC352 Meeting room | Rectangular/small | Circular | 1 | 2 | 9 | 18 | 6480 |
More details on the trajectory geometries can be found in the database info file (`measinfo.mat`).
## Recording formats
The array response of the two recording formats can be considered known. The following theoretical spatial responses (steering vectors) modeling the two formats describe the directional response of each channel to a source incident from direction-of-arrival (DOA) given by azimuth angle $\phi$ and elevation angle $\theta$.
**For the first-order ambisonics (FOA):**
\begin{eqnarray}
H_1(\phi, \theta, f) &=& 1 \\
H_2(\phi, \theta, f) &=& \sin(\phi)\cos(\theta) \\
H_3(\phi, \theta, f) &=& \sin(\theta) \\
H_4(\phi, \theta, f) &=& \cos(\phi)\cos(\theta)
\end{eqnarray}
The (FOA) format is obtained by converting the 32-channel microphone array signals by means of encoding filters based on anechoic measurements of the Eigenmike array response. Note that in the formulas above the encoding format is assumed frequency-independent, something that holds true up to around 9kHz with the specific microphone array, while the actual encoded responses start to deviate gradually at higher frequencies from the ideal ones provided above. Routines that can compute the matrix of encoding filters for spherical and general arrays, based on theoretical array models or measurements, can be found [here](https://github.com/polarch/Spherical-Array-Processing).
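As a minimal sketch, the ideal frequency-independent FOA response above can be evaluated directly in Python (channel ordering follows the four formulas; angles in radians; this is an illustration, not the measured encoder):

```python
import numpy as np

def foa_steering_vector(azi, ele):
    """Ideal first-order Ambisonics response for a DOA (azimuth, elevation in radians)."""
    return np.array([
        1.0,                        # H1: omnidirectional component
        np.sin(azi) * np.cos(ele),  # H2
        np.sin(ele),                # H3
        np.cos(azi) * np.cos(ele),  # H4
    ])

h = foa_steering_vector(np.deg2rad(45.0), np.deg2rad(0.0))  # source at 45 deg azimuth
```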
**For the tetrahedral microphone array (MIC):**
The four microphones have the following positions, in spherical coordinates $(\phi, \theta, r)$:
\begin{eqnarray}
M1: &\quad(&45^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber\\
M2: &\quad(&-45^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M3: &\quad(&135^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M4: &\quad(&-135^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber
\end{eqnarray}
Since the microphones are mounted on an acoustically-hard spherical baffle, an analytical expression for the directional array response is given by the expansion:
\begin{equation}
H_m(\phi_m, \theta_m, \phi, \theta, \omega) = \frac{1}{(\omega R/c)^2}\sum_{n=0}^{30} \frac{i^{n-1}}{h_n'^{(2)}(\omega R/c)}(2n+1)P_n(\cos(\gamma_m))
\end{equation}
where $m$ is the channel number, $(\phi_m, \theta_m)$ are the specific microphone's azimuth and elevation position, $\omega = 2\pi f$ is the angular frequency, $R = 0.042$m is the array radius, $c = 343$m/s is the speed of sound, $\cos(\gamma_m)$ is the cosine angle between the microphone and the DOA, and $P_n$ is the unnormalized Legendre polynomial of degree $n$, and $h_n'^{(2)}$ is the derivative with respect to the argument of a spherical Hankel function of the second kind. The expansion is limited to 30 terms which provides negligible modeling error up to 20kHz. Example routines that can generate directional frequency and impulse array responses based on the above formula can be found [here](https://github.com/polarch/Array-Response-Simulator).
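A sketch of evaluating this expansion with `scipy` (an illustration under the stated model, not the authors' simulator; valid for f > 0, with `cos_gamma` a scalar or array):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def rigid_sphere_response(cos_gamma, f, R=0.042, c=343.0, n_max=30):
    """Directional response of a sensor on a rigid spherical baffle at frequency f > 0."""
    kR = 2.0 * np.pi * f * R / c
    H = np.zeros_like(np.asarray(cos_gamma), dtype=complex)
    for n in range(n_max + 1):
        # derivative of the spherical Hankel function of the second kind: h_n'^(2) = j_n' - i*y_n'
        dh2 = spherical_jn(n, kR, derivative=True) - 1j * spherical_yn(n, kR, derivative=True)
        H += (1j ** (n - 1)) / dh2 * (2 * n + 1) * eval_legendre(n, np.asarray(cos_gamma))
    return H / kR**2
```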
## Reference directions-of-arrival
For each extracted RIR across a measurement trajectory there is a direction-of-arrival (DOA) associated with it, which can be used as the reference direction for a sound source spatialized using this RIR, for training or evaluation purposes. The DOAs were determined acoustically from the extracted RIRs, by windowing the direct-sound part and applying a broadband version of the MUSIC localization algorithm to the windowed multichannel signal.
The DOAs are provided as Cartesian components [x, y, z] of unit length vectors.
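A hypothetical helper to convert these Cartesian DOAs back to azimuth/elevation in degrees:

```python
import numpy as np

def xyz_to_azel(doa_xyz):
    """Convert an [N x 3] array of unit DOA vectors to (azimuth, elevation) in degrees."""
    x, y, z = doa_xyz[:, 0], doa_xyz[:, 1], doa_xyz[:, 2]
    azi = np.degrees(np.arctan2(y, x))
    ele = np.degrees(np.arcsin(np.clip(z, -1.0, 1.0)))
    return azi, ele
```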
## Scene generator
A set of routines is shared, here termed scene generator, that can spatialize a bank of sound samples using the SRIRs and noise recordings of this library, to emulate scenes for the two target formats. The code is the same as the one used to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset, and has been ported to Python from the original version written in Matlab.
The generator can be found [**here**](https://github.com/danielkrause/DCASE2022-data-generator), along with more details on its use.
The generator at the moment is set to work with the [NIGENS](https://zenodo.org/record/2535878) sound event sample database, and the [FSD50K](https://zenodo.org/record/4060432) sound event database, but additional sample banks can be added with small modifications.
The dataset together with the generator has been used by the authors in the following public challenges:
- [DCASE 2019 Challenge Task 3](https://dcase.community/challenge2019/task-sound-event-localization-and-detection), to generate the **TAU Spatial Sound Events 2019** dataset ([development](https://doi.org/10.5281/zenodo.2599196)/[evaluation](https://doi.org/10.5281/zenodo.3377088))
- [DCASE 2020 Challenge Task 3](https://dcase.community/challenge2020/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2020**](https://doi.org/10.5281/zenodo.4064792) dataset
- [DCASE2021 Challenge Task 3](https://dcase.community/challenge2021/task-sound-event-localization-and-detection), to generate the [**TAU-NIGENS Spatial Sound Events 2021**](https://doi.org/10.5281/zenodo.5476980) dataset
- [DCASE2022 Challenge Task 3](https://dcase.community/challenge2022/task-sound-event-localization-and-detection), to generate additional [SELD synthetic mixtures for training the task baseline](https://doi.org/10.5281/zenodo.6406873)
> **NOTE**: The current version of the generator is work-in-progress, with some code being quite "rough". If something does not work as intended or it is not clear what certain parts do, please contact [daniel.krause@tuni.fi](mailto:daniel.krause@tuni.fi), or [archontis.politis@tuni.fi](mailto:archontis.politis@tuni.fi).
## Dataset structure
The dataset contains a folder of the SRIRs (`TAU-SRIR_DB`), with all the SRIRs per room in a single _mat_ file, e.g. `rirs_09_tb103.mat`. The specific room had 4 trajectory groups measured at 3 different heights, hence the mat file contains an `rirs` array of 4x3 structures, each with the fields `mic` and `foa`. Selecting e.g. the 2nd trajectory and 3rd height with `rirs(2,3)` returns `mic` and `foa` fields with an array of size `[7200x4x114]` on each. The array contains the SRIRs for the specific format, and it is arranged as `[samples x channels x DOAs]`, meaning that 300 ms long (7200 samples at 24 kHz) 4-channel RIRs are extracted at 114 positions along that specific trajectory.
The file `rirdata.mat` contains some general information such as sample rate, format specifications, and most importantly the DOAs of every extracted SRIR. Those can be found in the `rirdata.room` field, which is an array of 9 structures itself, one per room. Checking for example `rirdata.room(8)` returns the name of the specific room (_tb103_), the year the measurements were done, the numbers of SRIRs extracted for each trajectory, and finally the DOAs of the extracted SRIRs. The DOAs of a certain trajectory can be retrieved as e.g. `rirdata.room(8).rirs(2,3).doa_xyz` which returns an array of size `[114x3]`. These are the DOAs of the 114 SRIRs retrieved in the previous step for the 2nd trajectory, 3rd source height, of room `TB103`.
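A sketch of accessing these files from Python with `scipy` (paths and field names follow the description above and are assumptions; `struct_as_record=False` exposes MATLAB struct fields as attributes, and MATLAB's 1-based `rirs(2,3)` becomes 0-based `[1, 2]`):

```python
from scipy.io import loadmat

mat = loadmat("TAU-SRIR_DB/rirs_09_tb103.mat", struct_as_record=False, squeeze_me=True)
rir_mic = mat["rirs"][1, 2].mic  # 2nd trajectory, 3rd height -> [7200 x 4 x 114]

meta = loadmat("TAU-SRIR_DB/rirdata.mat", struct_as_record=False, squeeze_me=True)
doa_xyz = meta["rirdata"].room[7].rirs[1, 2].doa_xyz  # room TB103 -> [114 x 3] unit vectors
```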
The file `measinfo.mat` contains measurement and recording information in each room. Those details are the name of each room, its dimensions for rectangular or trapezoidal shapes, start and end positions for the linear trajectories, or distances from center for the circular ones, the source heights for each trajectory group, the target formats, the trajectory type, the recording device, the A-weighted ambient sound pressure level, and the maximum and minimum A-weighted sound pressure level of the measurement noise signal. Coordinates are defined with respect to the origin being at the base of the microphone. Based on the information included in `measinfo.mat`, one can plot a 3D arrangement of the trajectories around the microphone, though keep in mind that these would be the ideal circular or linear intended trajectories, while the actual DOAs obtained from acoustic analysis have some deviations around those ideal paths.
Finally, the dataset contains a folder of spatial ambient noise recordings (`TAU-SNoise_DB`), with one subfolder per room having two audio recordings of the spatial ambience, one for each format, FOA or MIC. The recordings vary in length between rooms, ranging from about 20 mins to 30 mins. Users of the dataset can segment these recordings and add them to spatialized sound samples at desired SNRs, or mix different segments to obtain more ambience than the original recording time allows. Such a use case is demonstrated in the scene generator examples.
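For example, a hypothetical per-channel helper for mixing an ambience segment into a spatialized signal at a target SNR (not part of the official generator):

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` and add it to `signal` (1-D arrays, same sample rate) at `snr_db` dB SNR."""
    noise = noise[: len(signal)]
    gain = np.sqrt(np.mean(signal**2) / (np.mean(noise**2) * 10.0 ** (snr_db / 10.0)))
    return signal + gain * noise
```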
## Download
The files `TAU-SRIR_DB.z01`, ..., `TAU-SRIR_DB.zip` contain the SRIRs and measurement info files.
The files `TAU-SNoise_DB.z01`, ..., `TAU-SNoise_DB.zip` contain the ambient noise recordings.
Download the zip files and use your preferred compression tool to unzip these split zip files. To extract a split zip archive (named as zip, z01, z02, ...), you could use, for example, the following syntax in a Linux or OS X terminal:
Combine the split archive to a single archive:
>zip -s 0 split.zip --out single.zip
Extract the single archive using unzip:
>unzip single.zip
# License
The database is published under a custom **open non-commercial with attribution** license. It can be found in the `LICENSE.txt` file that accompanies the data.
| Fhrozen/tau_srir_db | [
"task_categories:audio-classification",
"annotations_creators:unknown",
"language_creators:unknown",
"size_categories:n<1K",
"source_datasets:unknown",
"license:unknown",
"audio-slot-filling",
"region:us"
] | 2022-04-25T01:54:54+00:00 | {"annotations_creators": ["unknown"], "language_creators": ["unknown"], "license": "unknown", "size_categories": ["n<1K"], "source_datasets": ["unknown"], "task_categories": ["audio-classification"], "task_ids": [], "tags": ["audio-slot-filling"]} | 2022-12-03T03:27:05+00:00 |
f9c3dafb9b947ddeb04e0b4fcb5c3a904d9105e3 |
## Dataset Description
- **Homepage:** None
- **Repository:** [https://github.com/d0r1h/ILC]
- **Paper:** None
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed] | d0r1h/ILC | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"legal",
"region:us"
] | 2022-04-25T06:13:24+00:00 | {"language": ["en"], "license": "cc-by-3.0", "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["summarization"], "pretty_name": "ILC", "tags": ["legal"]} | 2023-09-02T11:03:40+00:00 |
f23677a6713b1558fe0e6ba3ec8db76ec8e49e98 | ## Overview
Original dataset available [here](https://wellecks.github.io/dialogue_nli/).
## Dataset curation
Original `label` column is renamed `original_label`. The original classes are renamed as follows
```
{"positive": "entailment", "negative": "contradiction", "neutral": "neutral"})
```
and encoded with the following mapping
```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```
and stored in the newly created column `label`.
The following splits and the corresponding columns are present in the original files
```
train {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
dev {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
test {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
verified_test {'dtype', 'annotation3', 'sentence1', 'sentence2', 'annotation1', 'annotation2', 'original_label', 'label', 'triple2', 'triple1'}
extra_test {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
extra_dev {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
extra_train {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_havenot {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_attributes {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
valid_likedislike {'dtype', 'id', 'sentence1', 'sentence2', 'original_label', 'label', 'triple2', 'triple1'}
```
Note that I only keep the common columns, which means that I drop "annotation{1, 2, 3}" from `verified_test`.
Note that there are some splits with the same instances, as found by matching on "original_label", "sentence1", "sentence2".
## Code to create dataset
```python
import pandas as pd
from pathlib import Path
import json
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, Sequence
# load data
ds = {}
for path in Path(".").rglob("<path to folder>/*.jsonl"):
print(path, flush=True)
    with path.open("r") as fl:
        # JSON Lines format: parse one JSON object per line
        # (json.loads also no longer accepts an `encoding` keyword in Python 3.9+)
        d = [json.loads(line) for line in fl]
    df = pd.DataFrame(d)
# encode labels
df["original_label"] = df["label"]
df["label"] = df["label"].map({"positive": "entailment", "negative": "contradiction", "neutral": "neutral"})
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
ds[path.name.split(".")[0]] = df
# prettify names of data splits
datasets = {
k.replace("dialogue_nli_", "").replace("uu_", "").lower(): v
for k, v in ds.items()
}
datasets.keys()
#> dict_keys(['train', 'dev', 'test', 'verified_test', 'extra_test', 'extra_dev', 'extra_train', 'valid_havenot', 'valid_attributes', 'valid_likedislike'])
# cast to datasets using only common columns
features = Features({
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
"sentence1": Value(dtype="string", id=None),
"sentence2": Value(dtype="string", id=None),
"triple1": Sequence(feature=Value(dtype="string", id=None), length=3),
"triple2": Sequence(feature=Value(dtype="string", id=None), length=3),
"dtype": Value(dtype="string", id=None),
"id": Value(dtype="string", id=None),
"original_label": Value(dtype="string", id=None),
})
ds = {}
for name, df in datasets.items():
if "id" not in df.columns:
df["id"] = ""
ds[name] = Dataset.from_pandas(df.loc[:, list(features.keys())], features=features)
ds = DatasetDict(ds)
ds.push_to_hub("dialogue_nli", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
ds[i].to_pandas(),
ds[j].to_pandas(),
on=["original_label", "sentence1", "sentence2"],
how="inner",
).shape[0],
)
#> train - dev: 58
#> train - test: 98
#> train - verified_test: 90
#> train - extra_test: 0
#> train - extra_dev: 0
#> train - extra_train: 0
#> train - valid_havenot: 0
#> train - valid_attributes: 0
#> train - valid_likedislike: 0
#> dev - test: 19
#> dev - verified_test: 19
#> dev - extra_test: 0
#> dev - extra_dev: 75
#> dev - extra_train: 75
#> dev - valid_havenot: 75
#> dev - valid_attributes: 75
#> dev - valid_likedislike: 75
#> test - verified_test: 12524
#> test - extra_test: 34
#> test - extra_dev: 0
#> test - extra_train: 0
#> test - valid_havenot: 0
#> test - valid_attributes: 0
#> test - valid_likedislike: 0
#> verified_test - extra_test: 29
#> verified_test - extra_dev: 0
#> verified_test - extra_train: 0
#> verified_test - valid_havenot: 0
#> verified_test - valid_attributes: 0
#> verified_test - valid_likedislike: 0
#> extra_test - extra_dev: 0
#> extra_test - extra_train: 0
#> extra_test - valid_havenot: 0
#> extra_test - valid_attributes: 0
#> extra_test - valid_likedislike: 0
#> extra_dev - extra_train: 250946
#> extra_dev - valid_havenot: 250946
#> extra_dev - valid_attributes: 250946
#> extra_dev - valid_likedislike: 250946
#> extra_train - valid_havenot: 250946
#> extra_train - valid_attributes: 250946
#> extra_train - valid_likedislike: 250946
#> valid_havenot - valid_attributes: 250946
#> valid_havenot - valid_likedislike: 250946
#> valid_attributes - valid_likedislike: 250946
``` | pietrolesci/dialogue_nli | [
"region:us"
] | 2022-04-25T07:21:01+00:00 | {} | 2022-04-25T07:39:10+00:00 |
bafeb849715ae4aef0cd99fb2b82b4a7d8f31f95 | ceyda/test-privacy | [
"license:other",
"region:us"
] | 2022-04-25T07:36:19+00:00 | {"license": "other"} | 2022-04-25T07:36:19+00:00 |
|
1b34f1c8b073c6782b68dc3c5c10ef6356a284d3 | ## Overview
Original dataset [here](https://github.com/decompositional-semantics-initiative/DNC).
This dataset has been proposed in [Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation](https://www.aclweb.org/anthology/D18-1007/).
## Dataset curation
This version of the dataset does not include the `type-of-inference` "KG" as its label set is
`[1, 2, 3, 4, 5]` while here we focus on NLI-related label sets, i.e. `[entailed, not-entailed]`.
For this reason, I named the dataset DNLI for _Diverse_ NLI, as in [Liu et al 2020](https://aclanthology.org/2020.conll-1.48/), instead of DNC.
This version of the dataset contains columns from the `*_data.json` and the `*_metadata.json` files available in the repo.
In the original repo, each data file has the following keys and values:
- `context`: The context sentence for the NLI pair. The context is already tokenized.
- `hypothesis`: The hypothesis sentence for the NLI pair. The hypothesis is already tokenized.
- `label`: The label for the NLI pair
- `label-set`: The set of possible labels for the specific NLI pair
- `binary-label`: A `True` or `False` label. See the paper for details on how we convert the `label` into a binary label.
- `split`: This can be `train`, `dev`, or `test`.
- `type-of-inference`: A string indicating what type of inference is tested in this example.
- `pair-id`: A unique integer id for the NLI pair. The `pair-id` is used to find the corresponding metadata for any given NLI pair
while each metadata file has the following columns
- `pair-id`: A unique integer id for the NLI pair.
- `corpus`: The original corpus where this example came from.
- `corpus-sent-id`: The id of the sentence (or example) in the original dataset that we recast.
- `corpus-license`: The license for the data from the original dataset.
- `creation-approach`: Determines the method used to recast this example. Options are `automatic`, `manual`, or `human-labeled`.
- `misc`: A dictionary of other relevant information. This is an optional field.
The files are merged on the `pair-id` key. I **do not** include the `misc` column as it is not essential for NLI.
NOTE: the label mapping is **not** the customary 3-class one for NLI tasks. The authors used a binary target, which I encoded
with the following mapping `{"not-entailed": 0, "entailed": 1}`.
NOTE: some instances are present in multiple splits (duplicates identified by exact matching on "context", "hypothesis", and "label").
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict, Sequence
from pathlib import Path
paths = {
"train": "<path_to_folder>/DNC-master/train",
"dev": "<path_to_folder>/DNC-master/dev",
"test": "<path_to_folder>/DNC-master/test",
}
# read all data files
dfs = []
for split, path in paths.items():
for f_name in Path(path).rglob("*_data.json"):
df = pd.read_json(str(f_name))
df["file_split_data"] = split
dfs.append(df)
data = pd.concat(dfs, ignore_index=False, axis=0)
# read all metadata files
meta_dfs = []
for split, path in paths.items():
for f_name in Path(path).rglob("*_metadata.json"):
df = pd.read_json(str(f_name))
meta_dfs.append(df)
metadata = pd.concat(meta_dfs, ignore_index=False, axis=0)
# merge
dataset = pd.merge(data, metadata, on="pair-id", how="left")
# check that the split column reflects file splits
assert sum(dataset["split"] != dataset["file_split_data"]) == 0
dataset = dataset.drop(columns=["file_split_data"])
# fix `binary-label` column
dataset.loc[~dataset["label"].isin(["entailed", "not-entailed"]), "binary-label"] = False
dataset.loc[dataset["label"].isin(["entailed", "not-entailed"]), "binary-label"] = True
# fix datatype
dataset["corpus-sent-id"] = dataset["corpus-sent-id"].astype(str)
# order columns as shown in the README.md
columns = [
"context",
"hypothesis",
"label",
"label-set",
"binary-label",
"split",
"type-of-inference",
"pair-id",
"corpus",
"corpus-sent-id",
"corpus-license",
"creation-approach",
"misc",
]
dataset = dataset.loc[:, columns]
# remove misc column
dataset = dataset.drop(columns=["misc"])
# remove KG for NLI
dataset.loc[(dataset["label"].isin([1, 2, 3, 4, 5])), "type-of-inference"].value_counts()
# > the only split with label-set [1, 2, 3, 4, 5], so remove as we focus on NLI
dataset = dataset.loc[~(dataset["type-of-inference"] == "KG")]
# encode labels
dataset["label"] = dataset["label"].map({"not-entailed": 0, "entailed": 1})
# fill NA in label-set
dataset["label-set"] = dataset["label-set"].ffill()
features = Features(
{
"context": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]),
"label-set": Sequence(length=2, feature=Value(dtype="string")),
"binary-label": Value(dtype="bool"),
"split": Value(dtype="string"),
"type-of-inference": Value(dtype="string"),
"pair-id": Value(dtype="int64"),
"corpus": Value(dtype="string"),
"corpus-sent-id": Value(dtype="string"),
"corpus-license": Value(dtype="string"),
"creation-approach": Value(dtype="string"),
}
)
dataset_splits = {}
for split in ("train", "dev", "test"):
df_split = dataset.loc[dataset["split"] == split]
dataset_splits[split] = Dataset.from_pandas(df_split, features=features)
dataset_splits = DatasetDict(dataset_splits)
dataset_splits.push_to_hub("pietrolesci/dnli", token="<your token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(dataset_splits.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
dataset_splits[i].to_pandas(),
dataset_splits[j].to_pandas(),
on=["context", "hypothesis", "label"],
how="inner",
).shape[0],
)
#> train - dev: 127
#> train - test: 55
#> dev - test: 54
```
| pietrolesci/dnc | [
"region:us"
] | 2022-04-25T07:54:56+00:00 | {} | 2022-04-25T07:59:06+00:00 |
bbf6138e30cff48af0b9fa46ed710f68400dde85 | This dataset contains IMDB Ratings of various movies of different languages. This dataset also contains the number of votes each movies received | Meena/imdb_ratings_table | [
"region:us"
] | 2022-04-25T07:59:04+00:00 | {} | 2022-04-25T08:25:49+00:00 |
fbd6fcc5c3b8dc79ad26eaced52d7f04c6fea6d7 | ## Overview
Original dataset page [here](https://abhilasharavichander.github.io/NLI_StressTest/) and dataset available [here](https://drive.google.com/open?id=1faGA5pHdu5Co8rFhnXn-6jbBYC2R1dhw).
## Dataset curation
Added new column `label` with encoded labels with the following mapping
```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```
and the columns with parse information are dropped as they are not well formatted.
Also, the name of the file from which each instance comes is added in the column `dtype`.
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict
import json
from pathlib import Path
# load data
ds = {}
path = Path("<path to folder>")
for i in path.rglob("*.jsonl"):
print(i)
name = str(i).split("/")[0].lower()
dtype = str(i).split("/")[1].lower()
# read data
with i.open("r") as fl:
df = pd.DataFrame([json.loads(line) for line in fl])
# select columns
df = df.loc[:, ["sentence1", "sentence2", "gold_label"]]
# add file name as column
df["dtype"] = dtype
# encode labels
df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
ds[name] = df
# cast to dataset
features = Features(
{
"sentence1": Value(dtype="string"),
"sentence2": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
"dtype": Value(dtype="string"),
"gold_label": Value(dtype="string"),
}
)
ds = DatasetDict({k: Dataset.from_pandas(v, features=features) for k, v in ds.items()})
ds.push_to_hub("pietrolesci/stress_tests_nli", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
ds[i].to_pandas(),
ds[j].to_pandas(),
on=["sentence1", "sentence2", "label"],
how="inner",
).shape[0],
)
#> numerical_reasoning - negation: 0
#> numerical_reasoning - length_mismatch: 0
#> numerical_reasoning - spelling_error: 0
#> numerical_reasoning - word_overlap: 0
#> numerical_reasoning - antonym: 0
#> negation - length_mismatch: 0
#> negation - spelling_error: 0
#> negation - word_overlap: 0
#> negation - antonym: 0
#> length_mismatch - spelling_error: 0
#> length_mismatch - word_overlap: 0
#> length_mismatch - antonym: 0
#> spelling_error - word_overlap: 0
#> spelling_error - antonym: 0
#> word_overlap - antonym: 0
``` | pietrolesci/stress_tests_nli | [
"region:us"
] | 2022-04-25T08:21:50+00:00 | {} | 2022-04-25T08:32:28+00:00 |
8526f3a347c2d5760dc79a3dbe88134cc89c36b9 | ## Overview
Original dataset available [here](https://github.com/jimmycode/gen-debiased-nli#training-with-our-datasets).
```latex
@inproceedings{gen-debiased-nli-2022,
title = "Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets",
author = "Wu, Yuxiang and
Gardner, Matt and
Stenetorp, Pontus and
Dasigi, Pradeep",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
month = may,
year = "2022",
publisher = "Association for Computational Linguistics",
}
```
## Dataset curation
No curation.
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict
import json
from pathlib import Path
# load data
path = Path("./")
ds = {}
for i in path.rglob("*.jsonl"):
print(i)
name = str(i).split(".")[0].lower().replace("-", "_")
with i.open("r") as fl:
df = pd.DataFrame([json.loads(line) for line in fl])
ds[name] = df
# cast to dataset
features = Features(
{
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
"type": Value(dtype="string"),
}
)
ds = DatasetDict({k: Dataset.from_pandas(v, features=features) for k, v in ds.items()})
ds.push_to_hub("pietrolesci/gen_debiased_nli", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
ds[i].to_pandas(),
ds[j].to_pandas(),
on=["premise", "hypothesis", "label"],
how="inner",
).shape[0],
)
#> mnli_seq_z - snli_z_aug: 0
#> mnli_seq_z - mnli_par_z: 477149
#> mnli_seq_z - snli_seq_z: 0
#> mnli_seq_z - mnli_z_aug: 333840
#> mnli_seq_z - snli_par_z: 0
#> snli_z_aug - mnli_par_z: 0
#> snli_z_aug - snli_seq_z: 506624
#> snli_z_aug - mnli_z_aug: 0
#> snli_z_aug - snli_par_z: 504910
#> mnli_par_z - snli_seq_z: 0
#> mnli_par_z - mnli_z_aug: 334960
#> mnli_par_z - snli_par_z: 0
#> snli_seq_z - mnli_z_aug: 0
#> snli_seq_z - snli_par_z: 583107
#> mnli_z_aug - snli_par_z: 0
``` | pietrolesci/gen_debiased_nli | [
"region:us"
] | 2022-04-25T08:35:37+00:00 | {} | 2022-04-25T08:49:52+00:00 |
48d27a285f1919f3f7e6cd53b6a07fb13a238efb | ## Overview
Original dataset available [here](https://github.com/krandiash/gpt3-nli). Debiased dataset generated with GPT-3.
## Dataset curation
All string columns are stripped. Labels are encoded with the following mapping
```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features
import json
# load data
with open("data/dataset.jsonl", "r") as fl:
df = pd.DataFrame([json.loads(line) for line in fl])
df.columns = df.columns.str.strip()
# fix dtypes
df["guid"] = df["guid"].astype(int)
for col in df.select_dtypes(object):
df[col] = df[col].str.strip()
# encode labels
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features(
{
"text_a": Value(dtype="string"),
"text_b": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
"guid": Value(dtype="int64"),
}
)
ds = Dataset.from_pandas(df, features=features)
ds.push_to_hub("pietrolesci/gpt3_nli", token="<token>")
``` | pietrolesci/gpt3_nli | [
"region:us"
] | 2022-04-25T08:49:23+00:00 | {} | 2022-04-25T09:17:45+00:00 |
cd38beddf8badad23b8224f515a35c5d53ae0a53 |
# Dataset Card for UK Selective Web Archive Classification Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The dataset comprises a manually curated selective archive produced by UKWA, which includes the classification of sites into a two-tiered subject hierarchy. In partnership with the Internet Archive and JISC, UKWA had obtained access to the subset of the Internet Archive's web collection that relates to the UK. The JISC UK Web Domain Dataset (1996 - 2013) contains all of the resources from the Internet Archive that were hosted on domains ending in .uk, or that are required in order to render those UK pages. UKWA have made this manually-generated classification information available as an open dataset in Tab Separated Values (TSV) format. UKWA is particularly interested in whether high-level metadata like this can be used to train an appropriate automatic classification system so that this manually generated dataset may be used to partially automate the categorisation of UKWA's larger archives. UKWA expects that an appropriate classifier might require more information about each site in order to produce reliable results, and a future goal is to augment this dataset with further information. Options include making, for each site, the titles of every page on that site available, and extracting, for each site, a set of keywords that summarise the site via the full-text index. For more information: http://data.webarchive.org.uk/opendata/ukwa.ds.1/classification/
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons Public Domain Mark 1.0.
### Citation Information
[Needs More Information] | TheBritishLibrary/web_archive_classification | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:other",
"lam",
"region:us"
] | 2022-04-25T09:14:45+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "pretty_name": "UK Selective Web Archive Classification Dataset", "tags": ["lam"]} | 2023-05-04T11:59:29+00:00 |
429dde22805398bdd6cfece27284f53a44ed6e67 | ## Overview
Original dataset is available in the original [Github repo](https://github.com/tyliupku/nli-debiasing-datasets).
This dataset is a collection of NLI benchmarks constructed as described in the paper
[An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference](https://aclanthology.org/2020.conll-1.48/)
published at CoNLL 2020.
## Dataset curation
No specific curation for this dataset. Label encoding follows exactly what is reported in the paper by the authors.
Also, from the paper:
> _all the following datasets are collected based on the public available resources proposed by their authors, thus the experimental results in this paper are comparable to the numbers reported in the original papers and the other papers that use these datasets_
Most of the datasets included follow the custom 3-class NLI convention `{"entailment": 0, "neutral": 1, "contradiction": 2}`.
However, the following datasets have a particular label mapping
- `IS-SD`: `{"non-entailment": 0, "entailment": 1}`
- `LI_TS`: `{"non-contradiction": 0, "contradiction": 1}`
## Dataset structure
This benchmark dataset includes 10 adversarial datasets. To provide more insights on how the adversarial
datasets attack the models, the authors categorized them according to the bias(es) they test and they renamed
them accordingly. More details in section 2 of the paper.
A mapping with the original dataset names is provided below
| | Name | Original Name | Original Paper | Original Curation |
|---:|:-------|:-----------------------|:--------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | PI-CD | SNLI-Hard | [Gururangan et al. (2018)](https://aclanthology.org/N18-2017/) | SNLI test sets instances that cannot be correctly classified by a neural classifier (fastText) trained on only the hypothesis sentences. |
| 1 | PI-SP | MNLI-Hard | [Liu et al. (2020)](https://aclanthology.org/2020.lrec-1.846/) | MNLI-mismatched dev sets instances that cannot be correctly classified by surface patterns that are highly correlated with the labels. |
| 2 | IS-SD | HANS | [McCoy et al. (2019)](https://aclanthology.org/P19-1334/) | Dataset that tests lexical overlap, subsequence, and constituent heuristics between the hypothesis and premises sentences. |
| 3 | IS-CS | SoSwap-AddAMod | [Nie et al. (2019)](https://dl.acm.org/doi/abs/10.1609/aaai.v33i01.33016867) | Pairs of sentences whose logical relations cannot be extracted from lexical information alone. Premise are taken from SNLI dev set and modified. The original paper assigns a Lexically Misleading Scores (LMS) to each instance. Here, only the subset with LMS > 0.7 is reported. |
| 4 | LI-LI | Stress tests (antonym) | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) and [Glockner et al. (2018)](https://aclanthology.org/P18-2103/) | Merge of the 'antonym' category in Naik et al. (2018) (from MNLI matched and mismatched dev sets) and Glockner et al. (2018) (SNLI training set). |
| 5 | LI-TS | Created by the authors | Created by the authors | Swap the two sentences in the original MultiNLI mismatched dev sets. If the gold label is 'contradiction', the corresponding label in the swapped instance remains unchanged, otherwise it becomes 'non-contradicted'. |
| 6 | ST-WO | Word overlap | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Word overlap' category in Naik et al. (2018). |
| 7 | ST-NE | Negation | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Negation' category in Naik et al. (2018). |
| 8 | ST-LM | Length mismatch | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Length mismatch' category in Naik et al. (2018). |
| 9 | ST-SE | Spelling errors | [Naik et al. (2018)](https://aclanthology.org/C18-1198/) | 'Spelling errors' category in Naik et al. (2018). |
## Code to create the dataset
```python
import ast

import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict
Tri_dataset = ["IS_CS", "LI_LI", "PI_CD", "PI_SP", "ST_LM", "ST_NE", "ST_SE", "ST_WO"]
Ent_bin_dataset = ["IS_SD"]
Con_bin_dataset = ["LI_TS"]
# read data
with open("<path to file>/robust_nli.txt", encoding="utf-8", mode="r") as fl:
f = fl.read().strip().split("\n")
f = [ast.literal_eval(i) for i in f]  # each line is a Python dict literal; literal_eval is safer than eval
df = pd.DataFrame.from_dict(f)
# rename to map common names
df = df.rename(columns={"prem": "premise", "hypo": "hypothesis"})
# reorder columns
df = df.loc[:, ["idx", "split", "premise", "hypothesis", "label"]]
# create split-specific features
Tri_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
}
)
Ent_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=2, names=["non-entailment", "entailment"]),
}
)
Con_features = Features(
{
"idx": Value(dtype="int64"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]),
}
)
# convert to datasets
dataset_splits = {}
for split in df["split"].unique():
print(split)
df_split = df.loc[df["split"] == split].copy()
if split in Tri_dataset:
df_split["label"] = df_split["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
ds = Dataset.from_pandas(df_split, features=Tri_features)
elif split in Ent_bin_dataset:
df_split["label"] = df_split["label"].map({"non-entailment": 0, "entailment": 1})
ds = Dataset.from_pandas(df_split, features=Ent_features)
elif split in Con_bin_dataset:
df_split["label"] = df_split["label"].map({"non-contradiction": 0, "contradiction": 1})
ds = Dataset.from_pandas(df_split, features=Con_features)
else:
print("ERROR:", split)
dataset_splits[split] = ds
datasets = DatasetDict(dataset_splits)
datasets.push_to_hub("pietrolesci/robust_nli", token="<your token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(datasets.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
datasets[i].to_pandas(),
datasets[j].to_pandas(),
on=["premise", "hypothesis", "label"],
how="inner",
).shape[0],
)
#> PI_SP - ST_LM: 0
#> PI_SP - ST_NE: 0
#> PI_SP - IS_CS: 0
#> PI_SP - LI_TS: 1
#> PI_SP - LI_LI: 0
#> PI_SP - ST_SE: 0
#> PI_SP - PI_CD: 0
#> PI_SP - IS_SD: 0
#> PI_SP - ST_WO: 0
#> ST_LM - ST_NE: 0
#> ST_LM - IS_CS: 0
#> ST_LM - LI_TS: 0
#> ST_LM - LI_LI: 0
#> ST_LM - ST_SE: 0
#> ST_LM - PI_CD: 0
#> ST_LM - IS_SD: 0
#> ST_LM - ST_WO: 0
#> ST_NE - IS_CS: 0
#> ST_NE - LI_TS: 0
#> ST_NE - LI_LI: 0
#> ST_NE - ST_SE: 0
#> ST_NE - PI_CD: 0
#> ST_NE - IS_SD: 0
#> ST_NE - ST_WO: 0
#> IS_CS - LI_TS: 0
#> IS_CS - LI_LI: 0
#> IS_CS - ST_SE: 0
#> IS_CS - PI_CD: 0
#> IS_CS - IS_SD: 0
#> IS_CS - ST_WO: 0
#> LI_TS - LI_LI: 0
#> LI_TS - ST_SE: 0
#> LI_TS - PI_CD: 0
#> LI_TS - IS_SD: 0
#> LI_TS - ST_WO: 0
#> LI_LI - ST_SE: 0
#> LI_LI - PI_CD: 0
#> LI_LI - IS_SD: 0
#> LI_LI - ST_WO: 0
#> ST_SE - PI_CD: 0
#> ST_SE - IS_SD: 0
#> ST_SE - ST_WO: 0
#> PI_CD - IS_SD: 0
#> PI_CD - ST_WO: 0
#> IS_SD - ST_WO: 0
``` | pietrolesci/robust_nli | [
"region:us"
] | 2022-04-25T10:43:30+00:00 | {} | 2022-04-25T10:45:07+00:00 |
8ede2d7bf4531a7b210c793fe7b9e483b871c8f5 | This is part of `robust_NLI`, but since there seems to be a bug when loading and downloading
`DatasetDict` containing datasets with different configurations, I loaded the datasets with
the differing configs as standalone datasets.
Issue here: [https://github.com/huggingface/datasets/issues/4211](https://github.com/huggingface/datasets/issues/4211) | pietrolesci/robust_nli_li_ts | [
"region:us"
] | 2022-04-25T10:48:57+00:00 | {} | 2022-04-25T10:49:51+00:00 |
338d9797bb910381f7493343991c1055d425b9c4 | This is part of `robust_NLI`, but since there seems to be a bug when loading and downloading
`DatasetDict` containing datasets with different configurations, I loaded the datasets with
the differing configs as standalone datasets.
Issue here: [https://github.com/huggingface/datasets/issues/4211](https://github.com/huggingface/datasets/issues/4211) | pietrolesci/robust_nli_is_sd | [
"region:us"
] | 2022-04-25T10:49:03+00:00 | {} | 2022-04-25T12:07:25+00:00 |
c47716065f1f2076c39c806dd7007027342da502 | # Python Subreddit
Dataset containing data scraped from the [Python subreddit](https://www.reddit.com/r/python). | jamescalam/reddit-python | [
"region:us"
] | 2022-04-25T11:29:25+00:00 | {} | 2022-04-25T11:41:35+00:00 |
2e7504f0d4a70d6bf0373a39767ecd2f85ae0d9f | # Pretokenized GitHub Code Dataset
## Dataset Description
This is a pretokenized version of the Python files of the [GitHub Code dataset](https://huggingface.co/datasets/lvwerra/github-code), which consists of 115M code files from GitHub in 32 programming languages. We tokenized the dataset using a BPE tokenizer trained on code, available in this [repo](https://huggingface.co/lvwerra/codeparrot). A pretokenized dataset can speed up the training loop by removing the need to tokenize data at each batch call. We also include `ratio_char_token`, which gives the ratio between the number of characters in a file and the number of tokens obtained after tokenization; this ratio can be a good filter for detecting outlier files.
### How to use it
To avoid downloading the whole dataset, you can make use of the streaming API of `datasets`. You can load and iterate through the dataset with a few lines of code:
```python
from datasets import load_dataset
ds = load_dataset("loubnabnl/tokenized-github-code-python", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{'input_ids': [504, 1639, 492,...,199, 504, 1639],
'ratio_char_token': 3.560888252148997
}
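# As noted above, `ratio_char_token` can serve as an outlier filter.
# A sketch on the streaming dataset (the 2.0 threshold is an illustrative assumption):
ds_filtered = ds.filter(lambda x: x["ratio_char_token"] > 2.0)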
``` | loubnabnl/tokenized-github-code-python | [
"region:us"
] | 2022-04-25T11:34:38+00:00 | {} | 2022-04-27T23:13:55+00:00 |
8371e5cf43c3564daa1314ecf6086b58fcbf2178 | ## Overview
Original dataset available [here](https://github.com/sheng-z/JOCI/tree/master/data).
This dataset is the "full" JOCI dataset, which is the file named `joci.csv.zip`.
# Dataset curation
The following processing is applied,
- `label` column renamed to `original_label`
- creation of the `label` column using the following mapping, using common practices ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/joci.py#L22-L27), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_joci.py#L7-L12))
```
{
0: "contradiction",
1: "contradiction",
2: "neutral",
3: "neutral",
4: "neutral",
5: "entailment",
}
```
- finally, converting this to the usual NLI classes, that is `{"entailment": 0, "neutral": 1, "contradiction": 2}`
## Code to create dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset
# read data
df = pd.read_csv("<path to folder>/joci.csv")
# column name to lower
df.columns = df.columns.str.lower()
# rename label column
df = df.rename(columns={"label": "original_label"})
# map the 0-5 ordinal scores to NLI label names
df["label"] = df["original_label"].map({
0: "contradiction",
1: "contradiction",
2: "neutral",
3: "neutral",
4: "neutral",
5: "entailment",
})
# encode label names as integers
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"context": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
"original_label": Value(dtype="int32"),
"context_from": Value(dtype="string"),
"hypothesis_from": Value(dtype="string"),
"subset": Value(dtype="string"),
})
ds = Dataset.from_pandas(df, features=features)
ds.push_to_hub("joci", token="<token>")
```
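Once pushed, the dataset can be loaded back as follows (a sketch; `push_to_hub` is assumed to have created the default `train` split):

```python
from datasets import load_dataset

ds = load_dataset("pietrolesci/joci")
print(ds["train"][0])
```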
| pietrolesci/joci | [
"region:us"
] | 2022-04-25T12:32:52+00:00 | {} | 2022-04-25T12:33:08+00:00 |
82b6583887562130331c99bba2c994b44eae310f | ## Overview
Proposed by
```latex
@InProceedings{glockner_acl18,
author = {Glockner, Max and Shwartz, Vered and Goldberg, Yoav},
title = {Breaking NLI Systems with Sentences that Require Simple Lexical Inferences},
booktitle = {The 56th Annual Meeting of the Association for Computational Linguistics (ACL)},
month = {July},
year = {2018},
address = {Melbourne, Australia}
}
```
Original dataset available [here](https://github.com/BIU-NLP/Breaking_NLI).
## Dataset curation
Labels encoded with the following mapping `{"entailment": 0, "neutral": 1, "contradiction": 2}`
and made available in the `label` column.
## Code to create the dataset
```python
import json
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, Sequence
# load data
with open("<path to folder>/dataset.jsonl", "r") as fl:
data = fl.read().split("\n")
# parse each JSON line (json.loads is safer than eval here)
df = pd.DataFrame([json.loads(i) for i in data if len(i) > 0])
# encode labels
df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"sentence1": Value(dtype="string", id=None),
"category": Value(dtype="string", id=None),
"gold_label": Value(dtype="string", id=None),
"annotator_labels": Sequence(feature=Value(dtype="string", id=None), length=3),
"pairID": Value(dtype="int32", id=None),
"sentence2": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
})
ds = Dataset.from_pandas(df, features=features)
ds.push_to_hub("breaking_nli", token="<token>", split="all")
``` | pietrolesci/breaking_nli | [
"region:us"
] | 2022-04-25T12:36:48+00:00 | {} | 2022-04-25T12:37:23+00:00 |
e39c4231c5c09a3ee1d3fd9e9bdfab466a6254f6 | ## Overview
Original dataset available [here](https://people.ict.usc.edu/~gordon/copa.html).
Current dataset extracted from [this repo](https://github.com/felipessalvatore/NLI_datasets).
This is the "full" dataset.
# Curation
Same curation as the one applied in [this repo](https://github.com/felipessalvatore/NLI_datasets), that is
from the original COPA format:
| premise | choice1 | choice2 | label |
|---|---|---|---|
| My body cast a shadow over the grass | The sun was rising | The grass was cut | 0 |
to the NLI format:
| premise | hypothesis | label |
|---|---|---|
| My body cast a shadow over the grass | The sun was rising | entailment |
| My body cast a shadow over the grass | The grass was cut | not_entailment |
Also, the labels are encoded with the following mapping `{"not_entailment": 0, "entailment": 1}`
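The conversion step itself amounts to pairing the premise with each choice and marking the choice selected by the original label as the entailed hypothesis. A minimal sketch of that step (illustrative only, not the exact code from the linked repo):

```python
import pandas as pd

def copa_to_nli(row: dict) -> pd.DataFrame:
    # the choice picked by `label` is entailed; the other one is not
    labels = ["not_entailment", "not_entailment"]
    labels[row["label"]] = "entailment"
    return pd.DataFrame({
        "premise": [row["premise"]] * 2,
        "hypothesis": [row["choice1"], row["choice2"]],
        "label": labels,
    })

example = {
    "premise": "My body cast a shadow over the grass",
    "choice1": "The sun was rising",
    "choice2": "The grass was cut",
    "label": 0,
}
print(copa_to_nli(example))
```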
## Code to generate dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset
from pathlib import Path
# read data
path = Path("./nli_datasets")
datasets = {}
for dataset_path in path.iterdir():
datasets[dataset_path.name] = {}
for name in dataset_path.iterdir():
df = pd.read_csv(name)
datasets[dataset_path.name][name.name.split(".")[0]] = df
# merge all splits
df = pd.concat(list(datasets["copa"].values()))
# encode labels
df["label"] = df["label"].map({"not_entailment": 0, "entailment": 1})
# cast to dataset
features = Features({
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=2, names=["not_entailment", "entailment"]),
})
ds = Dataset.from_pandas(df, features=features)
ds.push_to_hub("copa_nli", token="<token>")
``` | pietrolesci/copa_nli | [
"region:us"
] | 2022-04-25T12:46:42+00:00 | {} | 2022-04-25T12:47:10+00:00 |
bbafadc05d7fdc9c668653a5e81bb034a99af3d9 |
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# Dataset Card for HiNER-original
[](https://twitter.com/cfiltnlp)
[](https://twitter.com/PeopleCentredAI)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/cfiltnlp/HiNER
- **Repository:** https://github.com/cfiltnlp/HiNER
- **Paper:** https://arxiv.org/abs/2204.13743
- **Leaderboard:** https://paperswithcode.com/sota/named-entity-recognition-on-hiner-original
- **Point of Contact:** Rudra Murthy V
### Dataset Summary
This dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy.
**Note:** The dataset contains sentences from ILCI and other sources. ILCI dataset requires license from Indian Language Consortium due to which we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset.
### Supported Tasks and Leaderboards
Named Entity Recognition
### Languages
Hindi
## Dataset Structure
### Data Instances
{'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]}
### Data Fields
- `id`: The ID value of the data point.
- `tokens`: Raw tokens in the dataset.
- `ner_tags`: the NER tags for this dataset.
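To map the integer `ner_tags` back to tag names, the dataset features can be used (a sketch; this assumes the tags are stored as a `Sequence` of `ClassLabel`, the usual setup for NER datasets):

```python
from datasets import load_dataset

hiner = load_dataset("cfilt/HiNER-original")
tag_names = hiner["train"].features["ner_tags"].feature.names
sample = hiner["train"][0]
print(list(zip(sample["tokens"], [tag_names[t] for t in sample["ner_tags"]])))
```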
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| original | 76025 | 10861 | 21722|
| collapsed | 76025 | 10861 | 21722|
## About
This repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Language Resources and Evaluation Conference (LREC) in 2022. A pre-print is available via arXiv [here](https://arxiv.org/abs/2204.13743).
### Recent Updates
* Version 0.0.5: HiNER initial release
## Usage
You should have the `datasets` package installed to use the :rocket: HuggingFace datasets repository. Install it via pip:
```code
pip install datasets
```
To use the original dataset with all the tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-original')
```
To use the collapsed dataset with only PER, LOC, and ORG tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-collapsed')
```
However, the CoNLL format dataset files can also be found on this Git repository under the [data](data/) folder.
## Model(s)
Our best performing models are hosted on the HuggingFace models repository:
1. [HiNER-Collapsed-XLM-R](https://huggingface.co/cfilt/HiNER-Collapse-XLM-Roberta-Large)
2. [HiNER-Original-XLM-R](https://huggingface.co/cfilt/HiNER-Original-XLM-Roberta-Large)
## Dataset Creation
### Curation Rationale
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition, and was introduced to provide new resources for the Hindi language, which had been under-served in Natural Language Processing.
### Source Data
#### Initial Data Collection and Normalization
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi
#### Who are the source language producers?
Various Government of India webpages
### Annotations
#### Annotation process
This dataset was manually annotated by a single annotator over a long span of time.
#### Who are the annotators?
Pallab Bhattacharjee
### Personal and Sensitive Information
We ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.
### Discussion of Biases
Any biases contained in the data released by the Indian government are bound to be present in our data.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Pallab Bhattacharjee
### Licensing Information
CC-BY-SA 4.0
### Citation Information
```latex
@misc{https://doi.org/10.48550/arxiv.2204.13743,
doi = {10.48550/ARXIV.2204.13743},
url = {https://arxiv.org/abs/2204.13743},
author = {Murthy, Rudra and Bhattacharjee, Pallab and Sharnagat, Rahul and Khatri, Jyotsana and Kanojia, Diptesh and Bhattacharyya, Pushpak},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {HiNER: A Large Hindi Named Entity Recognition Dataset},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` | cfilt/HiNER-original | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:hi",
"license:cc-by-sa-4.0",
"arxiv:2204.13743",
"region:us"
] | 2022-04-25T12:55:19+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["hi"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "hiner-original-1", "pretty_name": "HiNER - Large Hindi Named Entity Recognition dataset"} | 2023-03-07T16:42:05+00:00 |
f88b0c931a28aac0824a988e60b76e5a83fd0da3 | annotations_creators:
- annotation
languages:
- pt-br
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- ner
# Dataset Card for c_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
C corpus is a set of annotated data in Portuguese for named entity recognition, extending the UlyssesNER-Br corpus.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Named Entity Recognition that aims to identify named entities such as person names and locations in a text.
### Languages
Brazilian Portuguese
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | rosimeirecosta/c_corpus | [
"region:us"
] | 2022-04-25T18:49:57+00:00 | {} | 2022-04-25T19:03:08+00:00 |
709fd56c19915e82eafc9bc39780e078daee5e00 |
# Dataset Card for lottie-urls
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A list of LottieFiles URIs for research purposes.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | AmirulOm/lottie-urls | [
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:n<1K",
"source_datasets:original",
"license:unknown",
"region:us"
] | 2022-04-25T21:45:19+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": [], "license": ["unknown"], "multilinguality": [], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["image-segmentation"], "task_ids": ["instance-segmentation"], "pretty_name": "lottie-uri"} | 2022-10-25T09:12:14+00:00 |
d9a925c71de5280a6397b8e433b506a031f95a53 | Davincilee/closure_system_door_inner | [
"license:lgpl-3.0",
"region:us"
] | 2022-04-26T00:48:09+00:00 | {"license": "lgpl-3.0"} | 2022-04-29T19:59:50+00:00 |
|
5d2ef1db3b12764224290882c360f966bdbb8aeb | # AutoTrain Dataset for project: isear_bert
## Dataset Description
This dataset has been automatically processed by AutoTrain for project isear_bert.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "I was going to go on a vacation to Texas this summer but was \nunable to go because of registration.",
"target": 5
},
{
"text": "When someone whom I considered my friend, without telling me he \nwas annoyed, proceeded to ignore m[...]",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=7, names=['anger', 'disgust', 'fear', 'guilt', 'joy', 'sadness', 'shame'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 6008 |
| valid | 1507 |
| crcb/autotrain-data-isear_bert | [
"task_categories:text-classification",
"region:us"
] | 2022-04-26T02:06:30+00:00 | {"task_categories": ["text-classification"]} | 2022-04-26T02:10:34+00:00 |
8c7dd451752096e3932fcbbdc051d65be8dbd662 | hrithikpiyush/acl-arc | [
"license:apache-2.0",
"region:us"
] | 2022-04-26T04:00:57+00:00 | {"license": "apache-2.0"} | 2022-04-26T10:40:41+00:00 |
|
ac97fe2b8719890567bea1fbcf9a5b22594bf88b | Dataset for API: https://github.com/eleldar/Translation | eleldar/different_sub_normal_datasets | [
"region:us"
] | 2022-04-26T05:32:15+00:00 | {} | 2022-06-16T10:19:15+00:00 |
06fbc6482522edcba63c38da575269369694c6f2 |
Original from https://gitlab.inria.fr/french-crows-pairs/acl-2022-paper-data-and-code/-/tree/main/.
# Data Statement for CrowS-Pairs-fr
> **How to use this document:**
> Fill in each section according to the instructions. Give as much detail as you can, but there's no need to extrapolate. The goal is to help people understand your data when they approach it. This could be someone looking at it in ten years, or it could be you yourself looking back at the data in two years.
> For full details, the best source is the original Data Statements paper, here: https://www.aclweb.org/anthology/Q18-1041/ .
> Instruction fields are given as blockquotes; delete the instructions when you're done, and provide the file with your data, for example as "DATASTATEMENT.md". The lists in some blocks are designed to be filled in, but it's good to also leave a written description of what's happening, as well as the list. It's fine to skip some fields if the information isn't known.
> Only blockquoted content should be deleted; the final about statement should be left intact.
Data set name: Crows-Pairs-fr
Citation (if available): Névéol A, Dupont Y, Bezançon J, Fort K. French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics - ACL 2022
Data set developer(s): Aurélie Névéol, Yoann Dupont, Julien Bezançon, Karën Fort
Data statement author(s): Aurélie Névéol, Yoann Dupont
Others who contributed to this document: N/A
License: Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0).
## A. CURATION RATIONALE
> *Explanation.* Which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? This can be especially important in datasets too large to thoroughly inspect by hand. An explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.
The French part of the corpus was built by first translating the original 1,508 sentence pairs of the English corpus into French.
We then adapted the crowdsourcing method described by [Nangia et al. (2020)](https://arxiv.org/pdf/2010.00133) to collect additional sentences expressing a stereotype relevant to the French socio-cultural environment. Data collection is implemented through LanguageARC [(Fiumara et al., 2020)](https://www.aclweb.org/anthology/2020.cllrd-1.1.pdf), a citizen science platform supporting the development of language resources dedicated to social improvement. We created a LanguageARC project (https://languagearc.com/projects/19) to collect these additional sentences. Participants were asked to submit a statement that expressed a stereotype in French along with a selection of ten bias types: the nine bias types offered in CrowS-Pairs and the additional category _other_. We collected 210 additional sentences this way.
## B. LANGUAGE VARIETY/VARIETIES
> *Explanation.* Languages differ from each other in structural ways that can interact with NLP algorithms. Within a language, regional or social dialects can also show great variation (Chambers and Trudgill, 1998). The language and language variety should be described with a language tag from BCP-47 identifying the language variety (e.g., en-US or yue-Hant-HK), and a prose description of the language variety, glossing the BCP-47 tag and also providing further information (e.g., "English as spoken in Palo Alto, California", or "Cantonese written with traditional characters by speakers in Hong Kong who are bilingual in Mandarin").
* BCP-47 language tags: fr-FR
* Language variety description: French spoken by native French people from metropolitan France.
## C. CONTRIBUTOR DEMOGRAPHIC
> ## C. SPEAKER DEMOGRAPHIC
> *Explanation.* Sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (Labov, 1966), as speakers use linguistic variation to construct and project identities (Eckert and Rickford, 2001). Transfer from native languages (L1) can affect the language produced by non-native (L2) speakers (Ellis, 1994, Ch. 8). A further important type of variation is disordered speech (e.g., dysarthria). Specifications include:
N/A
## D. ANNOTATOR DEMOGRAPHIC
> *Explanation.* What are the demographic characteristics of the annotators and annotation guideline developers? Their own “social address” influences their experience with language and thus their perception of what they are annotating. Specifications include:
Participants to the collection project were recruited through calls for volunteers posted to social media and mailing lists in the French research community.
## E. SPEECH SITUATION
N/A
## F. TEXT CHARACTERISTICS
> *Explanation.* Both genre and topic influence the vocabulary and structural characteristics of texts (Biber, 1995), and should be specified.
Collected data is a collection of offensive stereotyped statements in French, they might be upsetting.
Along these stereotyped statements are paired anti-stereotyped statements.
## G. RECORDING QUALITY
N/A
## H. OTHER
> *Explanation.* There may be other information of relevance as well. Please use this space to develop any further categories that are relevant for your dataset.
## I. PROVENANCE APPENDIX
Examples were gathered using the LanguageArc site and by creating a dedicated project: https://languagearc.com/projects/19
## About this document
A data statement is a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software.
Data Statements are from the University of Washington. Contact: [datastatements@uw.edu](mailto:datastatements@uw.edu). This document template is licensed as [CC0](https://creativecommons.org/share-your-work/public-domain/cc0/).
This version of the markdown Data Statement is from June 4th 2020. The Data Statement template is based on worksheets distributed at the [2020 LREC workshop on Data Statements](https://sites.google.com/uw.edu/data-statements-for-nlp/), by Emily M. Bender, Batya Friedman, and Angelina McMillan-Major. Adapted to community Markdown template by Leon Dercyznski.
| BigScienceBiasEval/crows_pairs_multilingual | [
"language:en",
"language:fr",
"license:cc-by-sa-4.0",
"arxiv:2010.00133",
"region:us"
] | 2022-04-26T06:49:31+00:00 | {"language": ["en", "fr"], "license": "cc-by-sa-4.0"} | 2024-01-14T11:46:09+00:00 |
5e93d44a6d6fb1fe35c41df7af170a8618b23e70 |
# Dataset Card for CrosswordQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/albertkx/Berkeley-Crossword-Solver
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Albert Xu](mailto:albertxu@usc.edu) and [Eshaan Pathak](mailto:eshaanpathak@berkeley.edu)
### Dataset Summary
The CrosswordQA dataset is a set of over 6 million clue-answer pairs scraped from the New York Times and many other crossword publishers. The dataset was created to train the Berkeley Crossword Solver's QA model. See our paper for more information. Answers are automatically segmented (e.g., BUZZLIGHTYEAR -> Buzz Lightyear), and thus may occasionally be segmented incorrectly.
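A quick way to inspect the clue-answer pairs (a sketch; the `train` split name is an assumption):

```python
from datasets import load_dataset

ds = load_dataset("albertxu/CrosswordQA", split="train")
print(ds[0])  # e.g. {'id': 0, 'clue': 'Clean-up target', 'answer': 'mess'}
```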
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
```
{
"id": 0,
"clue": "Clean-up target",
"answer": "mess"
}
```
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | albertxu/CrosswordQA | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:unknown",
"region:us"
] | 2022-04-26T07:05:14+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"]} | 2022-10-29T22:45:36+00:00 |
01020533529fc1cda0af7d99231eb96e7837f883 |
# Dataset Card for HuffPost
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.kaggle.com/datasets/rmisra/news-category-dataset/metadata
### Dataset Summary
A dataset of approximately 200K news headlines from 2012 to 2018, collected from HuffPost.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
license: cc0-1.0
### Citation Information
```
@book{book,
author = {Misra, Rishabh and Grover, Jigyasa},
year = {2021},
month = {01},
pages = {},
title = {Sculpting Data for ML: The first act of Machine Learning},
isbn = {978-0-578-83125-1}
}
@dataset{dataset,
author = {Misra, Rishabh},
year = {2018},
month = {06},
pages = {},
title = {News Category Dataset},
doi = {10.13140/RG.2.2.20331.18729}
}
```
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| khalidalt/HuffPost | [
"license:cc0-1.0",
"region:us"
] | 2022-04-26T08:32:57+00:00 | {"license": "cc0-1.0"} | 2023-05-19T17:35:08+00:00 |
726a7cb5d4eab90c9035bd55b7bde3018c3bd06b |
### Dataset Summary
Kinopoisk movie reviews dataset (TOP250 & BOTTOM100 rank lists).
In total it contains 36,591 reviews from July 2004 to November 2012.
With following distribution along the 3-point sentiment scale:
- Good: 27,264;
- Bad: 4,751;
- Neutral: 4,576.
### Data Fields
Each sample contains the following fields:
- **part**: rank list top250 or bottom100;
- **movie_name**;
- **review_id**;
- **author**: review author;
- **date**: date of a review;
- **title**: review title;
- **grade3**: sentiment score Good, Bad or Neutral;
- **grade10**: sentiment score on a 10-point scale parsed from text;
- **content**: review text.
### Python
```python3
import pandas as pd
df = pd.read_json('kinopoisk.jsonl', lines=True)
df.sample(5)
```
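The class distribution reported above can be checked directly from the loaded frame (building on the snippet; the expected counts are those listed in the summary):

```python3
df["grade3"].value_counts()
# expected: Good 27264, Bad 4751, Neutral 4576
```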
### Citation
```
@article{blinov2013research,
title={Research of lexical approach and machine learning methods for sentiment analysis},
author={Blinov, PD and Klekovkina, Maria and Kotelnikov, Eugeny and Pestov, Oleg},
journal={Computational Linguistics and Intellectual Technologies},
volume={2},
number={12},
pages={48--58},
year={2013}
}
```
| blinoff/kinopoisk | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ru",
"region:us"
] | 2022-04-26T08:47:00+00:00 | {"language": ["ru"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Kinopoisk"} | 2022-10-23T15:51:58+00:00 |
485c67807d91e92466571c44279eacd217042b76 | hady/kurdiabadulhady | [
"region:us"
] | 2022-04-26T09:30:23+00:00 | {} | 2022-04-26T09:31:42+00:00 |
|
dec13d12c9fbda58367264342cba2376364aa2fe |
# Dataset Card for CANLI
### Dataset Summary
[CANLI: The Chinese Causative-Passive Homonymy Disambiguation: an Adversarial Dataset for NLI and a Probing Task](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.460.pdf)
The disambiguation of causative-passive homonymy (CPH) is potentially tricky for machines, as the causative and the passive
are not distinguished by the sentences' syntactic structure. By transforming CPH disambiguation into a challenging natural
language inference (NLI) task, we present the first Chinese Adversarial NLI challenge set (CANLI). We show that the pretrained
transformer model RoBERTa, fine-tuned on an existing large-scale Chinese NLI benchmark dataset, performs poorly on CANLI.
We also employ Word Sense Disambiguation as a probing task to investigate to what extent the CPH feature is captured in
the model's internal representation. We find that the model's performance on CANLI does not correspond to its internal
representation of CPH, the linguistic ability central to the CANLI dataset.
### Languages
Chinese Mandarin
# Citation Information
```
@inproceedings{xu-markert-2022-chinese,
    title = "The {C}hinese Causative-Passive Homonymy Disambiguation: an adversarial Dataset for {NLI} and a Probing Task",
    author = "Xu, Shanshan and Markert, Katja",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.460",
    pages = "4316--4323",
}
```
| sxu/CANLI | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:cn",
"license:afl-3.0",
"region:us"
] | 2022-04-26T12:31:34+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["cn"], "license": "afl-3.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"]} | 2023-01-06T13:23:58+00:00 |
7455d89e3da5e569b49d6ae1005fd52e89eb5087 |
# Dataset Card for "scientific_papers"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/armancohan/long-summarization](https://github.com/armancohan/long-summarization)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 8591.93 MB
- **Size of the generated dataset:** 9622.19 MB
- **Total amount of disk used:** 18214.12 MB
### Dataset Summary
The scientific_papers dataset contains two sets of long and structured documents.
The datasets are obtained from the ArXiv and PubMed OpenAccess repositories.
Both "arxiv" and "pubmed" have three features:
- article: the body of the document, paragraphs separated by "\n".
- abstract: the abstract of the document, paragraphs separated by "\n".
- section_names: titles of sections, separated by "\n".
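Both configurations can be loaded by name (a sketch; assumes the standard `datasets` loader for this corpus):

```python
from datasets import load_dataset

arxiv = load_dataset("scientific_papers", "arxiv", split="validation")
print(arxiv[0]["section_names"])
```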
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
### Data Instances
#### arxiv
- **Size of downloaded dataset files:** 4295.97 MB
- **Size of the generated dataset:** 7231.70 MB
- **Total amount of disk used:** 11527.66 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\" we have studied the leptonic decay @xmath0 , via the decay channel @xmath1 , using a sample of tagged @xmath2 decays collected...",
"article": "\"the leptonic decays of a charged pseudoscalar meson @xmath7 are processes of the type @xmath8 , where @xmath9 , @xmath10 , or @...",
"section_names": "[sec:introduction]introduction\n[sec:detector]data and the cleo- detector\n[sec:analysys]analysis method\n[sec:conclusion]summary"
}
```
#### pubmed
- **Size of downloaded dataset files:** 4295.97 MB
- **Size of the generated dataset:** 2390.49 MB
- **Total amount of disk used:** 6686.46 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\" background and aim : there is lack of substantial indian data on venous thromboembolism ( vte ) . \\n the aim of this study was...",
"article": "\"approximately , one - third of patients with symptomatic vte manifests pe , whereas two - thirds manifest dvt alone .\\nboth dvt...",
"section_names": "\"Introduction\\nSubjects and Methods\\nResults\\nDemographics and characteristics of venous thromboembolism patients\\nRisk factors ..."
}
```
### Data Fields
The data fields are the same among all splits.
#### arxiv
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.
#### pubmed
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.
### Data Splits
| name |train |validation|test|
|------|-----:|---------:|---:|
|arxiv |203037| 6436|6440|
|pubmed|119924| 6633|6658|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Cohan_2018,
title={A Discourse-Aware Attention Model for Abstractive Summarization of
Long Documents},
url={http://dx.doi.org/10.18653/v1/n18-2097},
DOI={10.18653/v1/n18-2097},
journal={Proceedings of the 2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language
Technologies, Volume 2 (Short Papers)},
publisher={Association for Computational Linguistics},
author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
year={2018}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
| ENM/dataset-prueba | [
"language:en",
"region:us"
] | 2022-04-26T17:11:02+00:00 | {"language": ["en"], "pretty_name": "ScientificPapers"} | 2022-10-25T09:12:20+00:00 |
5fc63ea7788cd5b4edb6aeba801cdc7083cf07e9 |
# Dataset Card for the-reddit-nft-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-nft-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditnftdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditnftdataset)
### Dataset Summary
A comprehensive dataset of Reddit's NFT discussion.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
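A sketch of loading the comments and filtering on the sentiment field (the `comments` configuration name and a signed numeric sentiment score are assumptions based on the two-file layout described above):

```python
from datasets import load_dataset

comments = load_dataset("SocialGrep/the-reddit-nft-dataset", "comments", split="train")
# keep only comments with a negative in-house sentiment score (assumed numeric)
negative = comments.filter(lambda x: x["sentiment"] is not None and x["sentiment"] < 0)
print(len(negative))
```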
## Additional Information
### Licensing Information
CC-BY v4.0
| SocialGrep/the-reddit-nft-dataset | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-04-26T18:52:29+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T16:52:49+00:00 |
739825f9dbb674e44f71019730d403f626aac4be | POS tagging on the Universal Dependencies dataset
| aakanksha/udpos | [
"region:us"
] | 2022-04-27T00:16:51+00:00 | {} | 2022-04-27T18:21:57+00:00 |
a4ce90c2d3cd20978a678e6a108119716f235310 | fut501/ds1 | [
"license:apache-2.0",
"region:us"
] | 2022-04-27T01:30:03+00:00 | {"license": "apache-2.0"} | 2022-05-10T00:23:18+00:00 |
|
eb634bef6c528fb9df2acf63b56fdf82f0d41684 | zaraTahhhir/urduprusdataset | [
"license:mit",
"region:us"
] | 2022-04-27T06:18:05+00:00 | {"license": "mit"} | 2022-04-27T06:18:05+00:00 |
|
c340865212e37f0f37823ffb6cc4ed1c8a960c0e | Zaratahir123/urduprusdataset | [
"license:mit",
"region:us"
] | 2022-04-27T06:25:53+00:00 | {"license": "mit"} | 2022-04-27T06:52:02+00:00 |
|
578d877dd50601749b406d53805a4bd332b63091 | annotations_creators:
- found
language_creators:
- found
languages:
- zh
licenses:
- other-my-license
multilinguality:
- monolingual
pretty_name: symptom
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- word-sense-disambiguation | junliang/symptom | [
"region:us"
] | 2022-04-27T06:47:35+00:00 | {} | 2022-05-11T11:57:22+00:00 |
a3fc132b1a1b550f82e0801e9ded2ae475b659ea |
# Dataset Card for LAMA: LAnguage Model Analysis - a dataset for probing and analyzing the factual and commonsense knowledge contained in pretrained language models.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
https://github.com/facebookresearch/LAMA
- **Repository:**
https://github.com/facebookresearch/LAMA
- **Paper:**
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
@inproceedings{petroni2020how,
title={How Context Affects Language Models' Factual Predictions},
author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={https://openreview.net/forum?id=025X0zPfn}
}
### Dataset Summary
This dataset provides the data for LAMA. It contains only the TRex portion
(a subset of Wikidata triples).
The dataset includes some cleanup, and addition of a masked sentence
and associated answers for the [MASK] token. The accuracy in
predicting the [MASK] token shows how well the language model knows
facts and common sense information. The [MASK] tokens are only for the
"object" slots.
This version also contains questions instead of templates that can be used to probe also non-masking models.
See the paper for more details. For more information, also see:
https://github.com/facebookresearch/LAMA
### Languages
en
## Dataset Structure
### Data Instances
The trex config has the following fields:
```
{'uuid': 'a37257ae-4cbb-4309-a78a-623036c96797', 'sub_label': 'Pianos Become the Teeth', 'predicate_id': 'P740', 'obj_label': 'Baltimore', 'template': '[X] was founded in [Y] .', 'type': 'N-1', 'question': 'Where was [X] founded?'}
34039
```
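A typical probe fills the `[X]` slot with `sub_label` and replaces `[Y]` with the model's mask token (a sketch using a generic fill-mask pipeline, not the original LAMA evaluation code):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")
row = {
    "sub_label": "Pianos Become the Teeth",
    "template": "[X] was founded in [Y] .",
    "obj_label": "Baltimore",
}
prompt = row["template"].replace("[X]", row["sub_label"]).replace("[Y]", fill.tokenizer.mask_token)
preds = [p["token_str"] for p in fill(prompt)]
print(row["obj_label"], preds)  # check whether the gold object is among the top predictions
```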
### Data Splits
There are no data splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to probe what language models understand.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more detail. The dataset was
gathered from various other datasets, with cleanup applied for probing.
#### Who are the source language producers?
The LAMA authors and the original authors of the various configs.
### Annotations
#### Annotation process
Human annotations under the original datasets (conceptnet), and various machine annotations.
#### Who are the annotators?
Human annotations and machine annotations.
### Personal and Sensitive Information
Unknown, but likely includes names of famous people.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to probe the understanding of language models.
### Discussion of Biases
Since the data is from human annotators, there are likely to be biases.
[More Information Needed]
### Other Known Limitations
The original documentation for the data fields is limited.
## Additional Information
### Dataset Curators
The authors of LAMA at Facebook and the authors of the original datasets.
### Licensing Information
The Creative Commons Attribution-Noncommercial 4.0 International License. see https://github.com/facebookresearch/LAMA/blob/master/LICENSE
### Citation Information
```
@inproceedings{petroni2019language,
  title={Language Models as Knowledge Bases?},
  author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
  booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
  year={2019}
}
@inproceedings{petroni2020how,
  title={How Context Affects Language Models' Factual Predictions},
  author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
  booktitle={Automated Knowledge Base Construction},
  year={2020},
  url={https://openreview.net/forum?id=025X0zPfn}
}
```
| janck/bigscience-lama | [
"task_categories:text-retrieval",
"task_categories:text-classification",
"task_ids:fact-checking-retrieval",
"task_ids:text-scoring",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"probing",
"region:us"
] | 2022-04-27T08:20:12+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": {"trex": ["1M<n<10M"]}, "task_categories": ["text-retrieval", "text-classification"], "task_ids": ["fact-checking-retrieval", "text-scoring"], "paperswithcode_id": "lama", "pretty_name": "LAMA: LAnguage Model Analysis - BigScience version", "tags": ["probing"]} | 2022-10-21T07:16:23+00:00 |
bd2ce2316b2b9ae38325ae9fec7bcb5aa02e4149 | StanBienaives/jade-considerants | [
"language:fr",
"region:us"
] | 2022-04-27T09:28:58+00:00 | {"language": ["fr"]} | 2024-01-15T10:10:19+00:00 |
|
60f3c3a4a3340dd7f3e8e7895e064c4790d38239 | Zaratahir123/groupData | [
"license:mit",
"region:us"
] | 2022-04-27T09:43:22+00:00 | {"license": "mit"} | 2022-04-28T15:33:38+00:00 |