| Column | Type | Min length | Max length |
|---|---|---:|---:|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | sequence | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | sequence | 0 | 25 |
| languages | sequence | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | sequence | 0 | 352 |
| processed_texts | sequence | 1 | 353 |
| tokens_length | sequence | 1 | 353 |
| input_texts | sequence | 1 | 40 |
3e3042ddeb4bf66e98a5a78181afcdd5663ff6fa
# Dataset Card for "coarse_discourse" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/google-research-datasets/coarse-discourse - **Paper:** [Characterizing Online Discussion Using Coarse Discourse Sequences](https://research.google/pubs/pub46055/) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 4.63 MB - **Size of the generated dataset:** 45.45 MB - **Total amount of disk used:** 50.08 MB ### Dataset Summary A large corpus of discourse annotations and relations on ~10K forum threads. We collect and release a corpus of over 9,000 threads comprising over 100,000 comments manually annotated via paid crowdsourcing with discourse acts and randomly sampled from the site Reddit. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 4.63 MB - **Size of the generated dataset:** 45.45 MB - **Total amount of disk used:** 50.08 MB An example of 'train' looks as follows. ``` { "annotations": { "annotator": ["fc96a15ab87f02dd1998ff55a64f6478", "e9e4b3ab355135fa954badcc06bfccc6", "31ac59c1734c1547d4d0723ff254c247"], "link_to_post": ["", "", ""], "main_type": ["elaboration", "elaboration", "elaboration"] }, "id_post": "t1_c9b30i1", "in_reply_to": "t1_c9b2nyd", "is_first_post": false, "is_self_post": true, "majority_link": "t1_c9b2nyd", "majority_type": "elaboration", "post_depth": 2, "subreddit": "100movies365days", "title": "DTX120: #87 - Nashville", "url": "https://www.reddit.com/r/100movies365days/comments/1bx6qw/dtx120_87_nashville/" } ``` ### Data Fields The data fields are the same among all splits. #### default - `title`: a `string` feature. - `is_self_post`: a `bool` feature. - `subreddit`: a `string` feature. - `url`: a `string` feature. - `majority_link`: a `string` feature. - `is_first_post`: a `bool` feature. - `majority_type`: a `string` feature. - `id_post`: a `string` feature. - `post_depth`: a `int32` feature. - `in_reply_to`: a `string` feature. - `annotations`: a dictionary feature containing: - `annotator`: a `string` feature. 
- `link_to_post`: a `string` feature. - `main_type`: a `string` feature. ### Data Splits | name |train | |-------|-----:| |default|116357| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{coarsediscourse, title={Characterizing Online Discussion Using Coarse Discourse Sequences}, author={Zhang, Amy X. and Culbertson, Bryan and Paritosh, Praveen}, booktitle={Proceedings of the 11th International AAAI Conference on Weblogs and Social Media}, series={ICWSM '17}, year={2017}, location = {Montreal, Canada} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu) for adding this dataset.
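As a quick illustration of the fields described above, here is a minimal sketch using the Hugging Face `datasets` library; it assumes the corpus loads under the identifier `coarse_discourse` used on this card and touches only fields from the Data Fields list, so treat it as a starting point rather than canonical usage.

```python
# Minimal sketch (assumes the Hugging Face `datasets` library and the
# `coarse_discourse` identifier shown on this card).
from collections import Counter

from datasets import load_dataset

# Single `train` split per the Data Splits table (116,357 examples).
ds = load_dataset("coarse_discourse", split="train")

example = ds[0]
print(example["id_post"], example["majority_type"], example["in_reply_to"])

# `annotations` holds the individual crowdworker judgments; `majority_type`
# is the majority discourse act across them.
print(example["annotations"]["main_type"])

# Rough distribution of majority discourse acts over the corpus.
print(Counter(ds["majority_type"]).most_common(10))
```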
coarse_discourse
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "paperswithcode_id": "coarse-discourse", "pretty_name": "Coarse Discourse", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "is_self_post", "dtype": "bool"}, {"name": "subreddit", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "majority_link", "dtype": "string"}, {"name": "is_first_post", "dtype": "bool"}, {"name": "majority_type", "dtype": "string"}, {"name": "id_post", "dtype": "string"}, {"name": "post_depth", "dtype": "int32"}, {"name": "in_reply_to", "dtype": "string"}, {"name": "annotations", "sequence": [{"name": "annotator", "dtype": "string"}, {"name": "link_to_post", "dtype": "string"}, {"name": "main_type", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 45097556, "num_examples": 116357}], "download_size": 4256575, "dataset_size": 45097556}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-18T15:32:32+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
Dataset Card for "coarse\_discourse" ==================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: URL * Paper: Characterizing Online Discussion Using Coarse Discourse Sequences * Point of Contact: * Size of downloaded dataset files: 4.63 MB * Size of the generated dataset: 45.45 MB * Total amount of disk used: 50.08 MB ### Dataset Summary A large corpus of discourse annotations and relations on ~10K forum threads. We collect and release a corpus of over 9,000 threads comprising over 100,000 comments manually annotated via paid crowdsourcing with discourse acts and randomly sampled from the site Reddit. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 4.63 MB * Size of the generated dataset: 45.45 MB * Total amount of disk used: 50.08 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'title': a 'string' feature. * 'is\_self\_post': a 'bool' feature. * 'subreddit': a 'string' feature. * 'url': a 'string' feature. * 'majority\_link': a 'string' feature. * 'is\_first\_post': a 'bool' feature. * 'majority\_type': a 'string' feature. * 'id\_post': a 'string' feature. * 'post\_depth': a 'int32' feature. * 'in\_reply\_to': a 'string' feature. * 'annotations': a dictionary feature containing: + 'annotator': a 'string' feature. + 'link\_to\_post': a 'string' feature. + 'main\_type': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @lewtun, @jplu for adding this dataset.
[ "### Dataset Summary\n\n\nA large corpus of discourse annotations and relations on ~10K forum threads.\n\n\nWe collect and release a corpus of over 9,000 threads comprising over 100,000 comments manually annotated via paid crowdsourcing with discourse acts and randomly sampled from the site Reddit.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 4.63 MB\n* Size of the generated dataset: 45.45 MB\n* Total amount of disk used: 50.08 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'title': a 'string' feature.\n* 'is\\_self\\_post': a 'bool' feature.\n* 'subreddit': a 'string' feature.\n* 'url': a 'string' feature.\n* 'majority\\_link': a 'string' feature.\n* 'is\\_first\\_post': a 'bool' feature.\n* 'majority\\_type': a 'string' feature.\n* 'id\\_post': a 'string' feature.\n* 'post\\_depth': a 'int32' feature.\n* 'in\\_reply\\_to': a 'string' feature.\n* 'annotations': a dictionary feature containing:\n\t+ 'annotator': a 'string' feature.\n\t+ 'link\\_to\\_post': a 'string' feature.\n\t+ 'main\\_type': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @jplu for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nA large corpus of discourse annotations and relations on ~10K forum threads.\n\n\nWe collect and release a corpus of over 9,000 threads comprising over 100,000 comments manually annotated via paid crowdsourcing with discourse acts and randomly sampled from the site Reddit.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 4.63 MB\n* Size of the generated dataset: 45.45 MB\n* Total amount of disk used: 50.08 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'title': a 'string' feature.\n* 'is\\_self\\_post': a 'bool' feature.\n* 'subreddit': a 'string' feature.\n* 'url': a 'string' feature.\n* 'majority\\_link': a 'string' feature.\n* 'is\\_first\\_post': a 'bool' feature.\n* 'majority\\_type': a 'string' feature.\n* 'id\\_post': a 'string' feature.\n* 'post\\_depth': a 'int32' feature.\n* 'in\\_reply\\_to': a 'string' feature.\n* 'annotations': a dictionary feature containing:\n\t+ 'annotator': a 'string' feature.\n\t+ 'link\\_to\\_post': a 'string' feature.\n\t+ 'main\\_type': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @jplu for adding this dataset." ]
[ 91, 68, 10, 11, 6, 51, 17, 211, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 25 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n### Dataset Summary\n\n\nA large corpus of discourse annotations and relations on ~10K forum threads.\n\n\nWe collect and release a corpus of over 9,000 threads comprising over 100,000 comments manually annotated via paid crowdsourcing with discourse acts and randomly sampled from the site Reddit.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 4.63 MB\n* Size of the generated dataset: 45.45 MB\n* Total amount of disk used: 50.08 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'title': a 'string' feature.\n* 'is\\_self\\_post': a 'bool' feature.\n* 'subreddit': a 'string' feature.\n* 'url': a 'string' feature.\n* 'majority\\_link': a 'string' feature.\n* 'is\\_first\\_post': a 'bool' feature.\n* 'majority\\_type': a 'string' feature.\n* 'id\\_post': a 'string' feature.\n* 'post\\_depth': a 'int32' feature.\n* 'in\\_reply\\_to': a 'string' feature.\n* 'annotations': a dictionary feature containing:\n\t+ 'annotator': a 'string' feature.\n\t+ 'link\\_to\\_post': a 'string' feature.\n\t+ 'main\\_type': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?" ]
4b0e0e7f33b3ae8cd8e69bcdfa09af9611d8f3bd
# Dataset Card for COmmonsense Dataset Adversarially-authored by Humans ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]() - **Repository:** https://github.com/Websail-NU/CODAH - **Paper:** https://aclanthology.org/W19-2008/ - **Paper:** https://arxiv.org/abs/1904.04365 ### Dataset Summary The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of SWAG. As opposed to other automatically generated NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model and use this information to design challenging commonsense questions. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The CODAH dataset is made available under the Open Data Commons Attribution License: http://opendatacommons.org/licenses/by/1.0/ ### Citation Information ``` @inproceedings{chen-etal-2019-codah, title = "{CODAH}: An Adversarially-Authored Question Answering Dataset for Common Sense", author = "Chen, Michael and D{'}Arcy, Mike and Liu, Alisa and Fernandez, Jared and Downey, Doug", editor = "Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Goldberg, Yoav", booktitle = "Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for {NLP}", month = jun, year = "2019", address = "Minneapolis, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W19-2008", doi = "10.18653/v1/W19-2008", pages = "63--69", abstract = "Commonsense reasoning is a critical AI capability, but it is difficult to construct challenging datasets that test common sense. Recent neural question answering systems, based on large pre-trained models of language, have already achieved near-human-level performance on commonsense knowledge benchmarks. These systems do not possess human-level common sense, but are able to exploit limitations of the datasets to achieve human-level scores. We introduce the CODAH dataset, an adversarially-constructed evaluation dataset for testing common sense. CODAH forms a challenging extension to the recently-proposed SWAG dataset, which tests commonsense knowledge using sentence-completion questions that describe situations observed in video. To produce a more difficult dataset, we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems. Workers are rewarded for submissions that models fail to answer correctly both before and after fine-tuning (in cross-validation). We create 2.8k questions via this procedure and evaluate the performance of multiple state-of-the-art question answering systems on our dataset. We observe a significant gap between human performance, which is 95.3{\%}, and the performance of the best baseline accuracy of 65.3{\%} by the OpenAI GPT model.", } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
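To make the configuration layout concrete, here is a minimal, hedged sketch of loading CODAH with the Hugging Face `datasets` library; the config names (`codah`, `fold_0` ... `fold_4`) and feature names (including the `question_propmt` spelling) are taken from this card's metadata, and the exact loading call is an assumption rather than an official recipe.

```python
# Minimal sketch (assumes the Hugging Face `datasets` library; config and
# feature names come from this card's metadata).
from datasets import load_dataset

# Full evaluation set: a single `train` split of 2,776 questions.
full = load_dataset("codah", "codah", split="train")

q = full[0]
# Note: the feature is spelled `question_propmt` in the released data.
print(q["question_propmt"])
print(q["candidate_answers"], "->", q["correct_answer_idx"])
print(full.features["question_category"].names)  # Idioms, Reference, ...

# The fold_* configs expose pre-defined train/validation/test splits
# for cross-validation.
fold = load_dataset("codah", "fold_0")
print({split: fold[split].num_rows for split in fold})
```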
codah
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:odc-by", "arxiv:1904.04365", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": "odc-by", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "paperswithcode_id": "codah", "pretty_name": "COmmonsense Dataset Adversarially-authored by Humans", "dataset_info": [{"config_name": "codah", "features": [{"name": "id", "dtype": "int32"}, {"name": "question_category", "dtype": {"class_label": {"names": {"0": "Idioms", "1": "Reference", "2": "Polysemy", "3": "Negation", "4": "Quantitative", "5": "Others"}}}}, {"name": "question_propmt", "dtype": "string"}, {"name": "candidate_answers", "sequence": "string"}, {"name": "correct_answer_idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 571196, "num_examples": 2776}], "download_size": 352902, "dataset_size": 571196}, {"config_name": "fold_0", "features": [{"name": "id", "dtype": "int32"}, {"name": "question_category", "dtype": {"class_label": {"names": {"0": "Idioms", "1": "Reference", "2": "Polysemy", "3": "Negation", "4": "Quantitative", "5": "Others"}}}}, {"name": "question_propmt", "dtype": "string"}, {"name": "candidate_answers", "sequence": "string"}, {"name": "correct_answer_idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 344900, "num_examples": 1665}, {"name": "validation", "num_bytes": 114199, "num_examples": 556}, {"name": "test", "num_bytes": 112097, "num_examples": 555}], "download_size": 379179, "dataset_size": 571196}, {"config_name": "fold_1", "features": [{"name": "id", "dtype": "int32"}, {"name": "question_category", "dtype": {"class_label": {"names": {"0": "Idioms", "1": "Reference", "2": "Polysemy", "3": "Negation", "4": "Quantitative", "5": "Others"}}}}, {"name": "question_propmt", "dtype": "string"}, {"name": "candidate_answers", "sequence": "string"}, {"name": "correct_answer_idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 340978, "num_examples": 1665}, {"name": "validation", "num_bytes": 114199, "num_examples": 556}, {"name": "test", "num_bytes": 116019, "num_examples": 555}], "download_size": 379728, "dataset_size": 571196}, {"config_name": "fold_2", "features": [{"name": "id", "dtype": "int32"}, {"name": "question_category", "dtype": {"class_label": {"names": {"0": "Idioms", "1": "Reference", "2": "Polysemy", "3": "Negation", "4": "Quantitative", "5": "Others"}}}}, {"name": "question_propmt", "dtype": "string"}, {"name": "candidate_answers", "sequence": "string"}, {"name": "correct_answer_idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 342281, "num_examples": 1665}, {"name": "validation", "num_bytes": 114199, "num_examples": 556}, {"name": "test", "num_bytes": 114716, "num_examples": 555}], "download_size": 379126, "dataset_size": 571196}, {"config_name": "fold_3", "features": [{"name": "id", "dtype": "int32"}, {"name": "question_category", "dtype": {"class_label": {"names": {"0": "Idioms", "1": "Reference", "2": "Polysemy", "3": "Negation", "4": "Quantitative", "5": "Others"}}}}, {"name": "question_propmt", "dtype": "string"}, {"name": "candidate_answers", "sequence": "string"}, {"name": "correct_answer_idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 342832, "num_examples": 1665}, {"name": "validation", "num_bytes": 114199, "num_examples": 556}, {"name": "test", "num_bytes": 114165, "num_examples": 555}], "download_size": 379178, "dataset_size": 571196}, {"config_name": 
"fold_4", "features": [{"name": "id", "dtype": "int32"}, {"name": "question_category", "dtype": {"class_label": {"names": {"0": "Idioms", "1": "Reference", "2": "Polysemy", "3": "Negation", "4": "Quantitative", "5": "Others"}}}}, {"name": "question_propmt", "dtype": "string"}, {"name": "candidate_answers", "sequence": "string"}, {"name": "correct_answer_idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 342832, "num_examples": 1665}, {"name": "validation", "num_bytes": 114165, "num_examples": 555}, {"name": "test", "num_bytes": 114199, "num_examples": 556}], "download_size": 379178, "dataset_size": 571196}], "configs": [{"config_name": "codah", "data_files": [{"split": "train", "path": "codah/train-*"}]}, {"config_name": "fold_0", "data_files": [{"split": "train", "path": "fold_0/train-*"}, {"split": "validation", "path": "fold_0/validation-*"}, {"split": "test", "path": "fold_0/test-*"}]}, {"config_name": "fold_1", "data_files": [{"split": "train", "path": "fold_1/train-*"}, {"split": "validation", "path": "fold_1/validation-*"}, {"split": "test", "path": "fold_1/test-*"}]}, {"config_name": "fold_2", "data_files": [{"split": "train", "path": "fold_2/train-*"}, {"split": "validation", "path": "fold_2/validation-*"}, {"split": "test", "path": "fold_2/test-*"}]}, {"config_name": "fold_3", "data_files": [{"split": "train", "path": "fold_3/train-*"}, {"split": "validation", "path": "fold_3/validation-*"}, {"split": "test", "path": "fold_3/test-*"}]}, {"config_name": "fold_4", "data_files": [{"split": "train", "path": "fold_4/train-*"}, {"split": "validation", "path": "fold_4/validation-*"}, {"split": "test", "path": "fold_4/test-*"}]}]}
2024-01-19T10:16:56+00:00
[ "1904.04365" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-odc-by #arxiv-1904.04365 #region-us
# Dataset Card for COmmonsense Dataset Adversarially-authored by Humans ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]() - Repository: URL - Paper: URL - Paper: URL ### Dataset Summary The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of SWAG. As opposed to other automatically generated NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model and use this information to design challenging commonsense questions. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information The CODAH dataset is made available under the Open Data Commons Attribution License: URL ### Contributions Thanks to @patil-suraj for adding this dataset.
[ "# Dataset Card for COmmonsense Dataset Adversarially-authored by Humans", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: URL\n- Paper: URL\n- Paper: URL", "### Dataset Summary\n\nThe COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense\nquestion-answering in the sentence completion style of SWAG. As opposed to other automatically generated\nNLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model\nand use this information to design challenging commonsense questions.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe CODAH dataset is made available under the Open Data Commons Attribution License: URL", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-odc-by #arxiv-1904.04365 #region-us \n", "# Dataset Card for COmmonsense Dataset Adversarially-authored by Humans", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: URL\n- Paper: URL\n- Paper: URL", "### Dataset Summary\n\nThe COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense\nquestion-answering in the sentence completion style of SWAG. As opposed to other automatically generated\nNLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model\nand use this information to design challenging commonsense questions.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe CODAH dataset is made available under the Open Data Commons Attribution License: URL", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ 103, 22, 120, 47, 94, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 23, 19 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-odc-by #arxiv-1904.04365 #region-us \n# Dataset Card for COmmonsense Dataset Adversarially-authored by Humans## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: URL\n- Paper: URL\n- Paper: URL### Dataset Summary\n\nThe COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense\nquestion-answering in the sentence completion style of SWAG. As opposed to other automatically generated\nNLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model\nand use this information to design challenging commonsense questions.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset" ]
fdc6a9e39575768c27eb8a2a5f702bf846eb4759
# Dataset Card for CodeSearchNet corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://wandb.ai/github/CodeSearchNet/benchmark - **Repository:** https://github.com/github/CodeSearchNet - **Paper:** https://arxiv.org/abs/1909.09436 - **Data:** https://doi.org/10.5281/zenodo.7908468 - **Leaderboard:** https://wandb.ai/github/CodeSearchNet/benchmark/leaderboard ### Dataset Summary The CodeSearchNet corpus is a dataset of 2 million (comment, code) pairs from open-source libraries hosted on GitHub. It contains code and documentation for several programming languages. The CodeSearchNet corpus was gathered to support the [CodeSearchNet challenge](https://wandb.ai/github/CodeSearchNet/benchmark), which explores the problem of code retrieval using natural language. ### Supported Tasks and Leaderboards - `language-modeling`: The dataset can be used to train language models over programming languages. ### Languages - Go **programming** language - Java **programming** language - JavaScript **programming** language - PHP **programming** language - Python **programming** language - Ruby **programming** language ## Dataset Structure ### Data Instances A data point consists of a function's code along with its documentation. Each data point also contains metadata on the function, such as the repository it was extracted from.
``` { 'id': '0', 'repository_name': 'organisation/repository', 'func_path_in_repository': 'src/path/to/file.py', 'func_name': 'func', 'whole_func_string': 'def func(args):\n"""Docstring"""\n [...]', 'language': 'python', 'func_code_string': '[...]', 'func_code_tokens': ['def', 'func', '(', 'args', ')', ...], 'func_documentation_string': 'Docstring', 'func_documentation_string_tokens': ['Docstring'], 'split_name': 'train', 'func_code_url': 'https://github.com/<org>/<repo>/blob/<hash>/src/path/to/file.py#L111-L150' } ``` ### Data Fields - `id`: Arbitrary number - `repository_name`: name of the GitHub repository - `func_path_in_repository`: path to the file that holds the function in the repository - `func_name`: name of the function in the file - `whole_func_string`: Code + documentation of the function - `language`: Programming language in which the function is written - `func_code_string`: Function code - `func_code_tokens`: Tokens yielded by Treesitter - `func_documentation_string`: Function documentation - `func_documentation_string_tokens`: Tokens yielded by Treesitter - `split_name`: Name of the split to which the example belongs (one of train, test or valid) - `func_code_url`: URL to the function code on GitHub ### Data Splits Three splits are available: - train - test - valid ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization All information can be retrieved in the [original technical review](https://arxiv.org/pdf/1909.09436.pdf). **Corpus collection**: The corpus has been collected from publicly available open-source, non-fork GitHub repositories, using libraries.io to identify all projects which are used by at least one other project, and sorting them by “popularity” as indicated by the number of stars and forks. Then, any projects that do not have a license or whose license does not explicitly permit the re-distribution of parts of the project were removed. Treesitter - GitHub's universal parser - was then used to tokenize all Go, Java, JavaScript, Python, PHP and Ruby functions (or methods) and, where available, their respective documentation text using a heuristic regular expression. **Corpus filtering**: Functions without documentation are removed from the corpus. This yields a set of pairs ($c_i$, $d_i$) where $c_i$ is some function documented by $d_i$. Pairs ($c_i$, $d_i$) are passed through the following preprocessing steps: - Documentation $d_i$ is truncated to the first full paragraph to remove in-depth discussion of function arguments and return values - Pairs in which $d_i$ is shorter than three tokens are removed - Functions $c_i$ whose implementation is shorter than three lines are removed - Functions whose name contains the substring “test” are removed - Constructors and standard extension methods (e.g. `__str__` in Python or `toString` in Java) are removed - Duplicate and near-duplicate functions are removed, in order to keep only one version of each function #### Who are the source language producers? Open-source contributors produced the code and documentation. The dataset was gathered and preprocessed automatically. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Each example in the dataset is extracted from a GitHub repository, and each repository has its own license. Example-wise license information is not (yet) included in this dataset: you will need to check for yourself which license applies to the code. ### Citation Information @article{husain2019codesearchnet, title={{CodeSearchNet} challenge: Evaluating the state of semantic code search}, author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc}, journal={arXiv preprint arXiv:1909.09436}, year={2019} } ### Contributions Thanks to [@SBrandeis](https://github.com/SBrandeis) for adding this dataset.
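For orientation, the sketch below shows one plausible way to load a single language configuration with the Hugging Face `datasets` library; the config names ("all", "go", "java", "javascript", "php", "python", "ruby") come from this card's metadata, while the `trust_remote_code=True` flag is an assumption made because this configuration is distributed via a loading script, so the call may need adjusting for your `datasets` version.

```python
# Minimal sketch (assumes the Hugging Face `datasets` library; config and
# field names come from this card, and `trust_remote_code=True` is assumed
# to be needed for the script-based loader).
from datasets import load_dataset

python_ds = load_dataset("code_search_net", "python", trust_remote_code=True)
print({split: python_ds[split].num_rows for split in python_ds})  # train/validation/test

sample = python_ds["train"][0]
print(sample["repository_name"], sample["func_name"])
print(sample["func_documentation_string"][:200])   # first part of the docstring
print(" ".join(sample["func_code_tokens"][:20]))    # Treesitter-yielded code tokens
```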
code_search_net
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:code", "license:other", "arxiv:1909.09436", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["code"], "license": ["other"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M", "10K<n<100K", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "codesearchnet", "pretty_name": "CodeSearchNet", "config_names": ["all", "go", "java", "javascript", "php", "python", "ruby"], "dataset_info": [{"config_name": "all", "features": [{"name": "repository_name", "dtype": "string"}, {"name": "func_path_in_repository", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "whole_func_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "func_code_string", "dtype": "string"}, {"name": "func_code_tokens", "sequence": "string"}, {"name": "func_documentation_string", "dtype": "string"}, {"name": "func_documentation_tokens", "sequence": "string"}, {"name": "split_name", "dtype": "string"}, {"name": "func_code_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5850604083, "num_examples": 1880853}, {"name": "test", "num_bytes": 308626333, "num_examples": 100529}, {"name": "validation", "num_bytes": 274564382, "num_examples": 89154}], "download_size": 5117370511, "dataset_size": 6433794798}, {"config_name": "java", "features": [{"name": "repository_name", "dtype": "string"}, {"name": "func_path_in_repository", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "whole_func_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "func_code_string", "dtype": "string"}, {"name": "func_code_tokens", "sequence": "string"}, {"name": "func_documentation_string", "dtype": "string"}, {"name": "func_documentation_tokens", "sequence": "string"}, {"name": "split_name", "dtype": "string"}, {"name": "func_code_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1429272535, "num_examples": 454451}, {"name": "test", "num_bytes": 82377246, "num_examples": 26909}, {"name": "validation", "num_bytes": 42358315, "num_examples": 15328}], "download_size": 1060569153, "dataset_size": 1554008096}, {"config_name": "go", "features": [{"name": "repository_name", "dtype": "string"}, {"name": "func_path_in_repository", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "whole_func_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "func_code_string", "dtype": "string"}, {"name": "func_code_tokens", "sequence": "string"}, {"name": "func_documentation_string", "dtype": "string"}, {"name": "func_documentation_tokens", "sequence": "string"}, {"name": "split_name", "dtype": "string"}, {"name": "func_code_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 738153234, "num_examples": 317832}, {"name": "test", "num_bytes": 32286998, "num_examples": 14291}, {"name": "validation", "num_bytes": 26888527, "num_examples": 14242}], "download_size": 487525935, "dataset_size": 797328759}, {"config_name": "python", "features": [{"name": "repository_name", "dtype": "string"}, {"name": "func_path_in_repository", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "whole_func_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "func_code_string", "dtype": "string"}, {"name": "func_code_tokens", "sequence": "string"}, {"name": "func_documentation_string", "dtype": "string"}, {"name": 
"func_documentation_tokens", "sequence": "string"}, {"name": "split_name", "dtype": "string"}, {"name": "func_code_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1559645310, "num_examples": 412178}, {"name": "test", "num_bytes": 84342064, "num_examples": 22176}, {"name": "validation", "num_bytes": 92154786, "num_examples": 23107}], "download_size": 940909997, "dataset_size": 1736142160}, {"config_name": "javascript", "features": [{"name": "repository_name", "dtype": "string"}, {"name": "func_path_in_repository", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "whole_func_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "func_code_string", "dtype": "string"}, {"name": "func_code_tokens", "sequence": "string"}, {"name": "func_documentation_string", "dtype": "string"}, {"name": "func_documentation_tokens", "sequence": "string"}, {"name": "split_name", "dtype": "string"}, {"name": "func_code_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 480286523, "num_examples": 123889}, {"name": "test", "num_bytes": 24056972, "num_examples": 6483}, {"name": "validation", "num_bytes": 30168242, "num_examples": 8253}], "download_size": 1664713350, "dataset_size": 534511737}, {"config_name": "ruby", "features": [{"name": "repository_name", "dtype": "string"}, {"name": "func_path_in_repository", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "whole_func_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "func_code_string", "dtype": "string"}, {"name": "func_code_tokens", "sequence": "string"}, {"name": "func_documentation_string", "dtype": "string"}, {"name": "func_documentation_tokens", "sequence": "string"}, {"name": "split_name", "dtype": "string"}, {"name": "func_code_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 110681715, "num_examples": 48791}, {"name": "test", "num_bytes": 5359280, "num_examples": 2279}, {"name": "validation", "num_bytes": 4830744, "num_examples": 2209}], "download_size": 111758028, "dataset_size": 120871739}, {"config_name": "php", "features": [{"name": "repository_name", "dtype": "string"}, {"name": "func_path_in_repository", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "whole_func_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "func_code_string", "dtype": "string"}, {"name": "func_code_tokens", "sequence": "string"}, {"name": "func_documentation_string", "dtype": "string"}, {"name": "func_documentation_tokens", "sequence": "string"}, {"name": "split_name", "dtype": "string"}, {"name": "func_code_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1532564870, "num_examples": 523712}, {"name": "test", "num_bytes": 80203877, "num_examples": 28391}, {"name": "validation", "num_bytes": 78163924, "num_examples": 26015}], "download_size": 851894048, "dataset_size": 1690932671}]}
2024-01-18T09:19:12+00:00
[ "1909.09436" ]
[ "code" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-code #license-other #arxiv-1909.09436 #region-us
# Dataset Card for CodeSearchNet corpus ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Data: URL - Leaderboard: URL ### Dataset Summary CodeSearchNet corpus is a dataset of 2 milllion (comment, code) pairs from opensource libraries hosted on GitHub. It contains code and documentation for several programming languages. CodeSearchNet corpus was gathered to support the CodeSearchNet challenge, to explore the problem of code retrieval using natural language. ### Supported Tasks and Leaderboards - 'language-modeling': The dataset can be used to train a model for modelling programming languages, which consists in building language models for programming languages. ### Languages - Go programming language - Java programming language - Javascript programming language - PHP programming language - Python programming language - Ruby programming language ## Dataset Structure ### Data Instances A data point consists of a function code along with its documentation. Each data point also contains meta data on the function, such as the repository it was extracted from. ### Data Fields - 'id': Arbitrary number - 'repository_name': name of the GitHub repository - 'func_path_in_repository': tl;dr: path to the file which holds the function in the repository - 'func_name': name of the function in the file - 'whole_func_string': Code + documentation of the function - 'language': Programming language in whoch the function is written - 'func_code_string': Function code - 'func_code_tokens': Tokens yielded by Treesitter - 'func_documentation_string': Function documentation - 'func_documentation_string_tokens': Tokens yielded by Treesitter - 'split_name': Name of the split to which the example belongs (one of train, test or valid) - 'func_code_url': URL to the function code on Github ### Data Splits Three splits are available: - train - test - valid ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization All information can be retrieved in the original technical review Corpus collection: Corpus has been collected from publicly available open-source non-fork GitHub repositories, using URL to identify all projects which are used by at least one other project, and sort them by “popularity” as indicated by the number of stars and forks. Then, any projects that do not have a license or whose license does not explicitly permit the re-distribution of parts of the project were removed. Treesitter - GitHub's universal parser - has been used to then tokenize all Go, Java, JavaScript, Python, PHP and Ruby functions (or methods) using and, where available, their respective documentation text using a heuristic regular expression. Corpus filtering: Functions without documentation are removed from the corpus. This yields a set of pairs ($c_i$, $d_i$) where ci is some function documented by di. 
Pairs ($c_i$, $d_i$) are passed through the folllowing preprocessing tasks: - Documentation $d_i$ is truncated to the first full paragraph to remove in-depth discussion of function arguments and return values - Pairs in which $d_i$ is shorter than three tokens are removed - Functions $c_i$ whose implementation is shorter than three lines are removed - Functions whose name contains the substring “test” are removed - Constructors and standard extenion methods (eg '__str__' in Python or 'toString' in Java) are removed - Duplicates and near duplicates functions are removed, in order to keep only one version of the function #### Who are the source language producers? OpenSource contributors produced the code and documentations. The dataset was gatherered and preprocessed automatically. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Each example in the dataset has is extracted from a GitHub repository, and each repository has its own license. Example-wise license information is not (yet) included in this dataset: you will need to find out yourself which license the code is using. @article{husain2019codesearchnet, title={{CodeSearchNet} challenge: Evaluating the state of semantic code search}, author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc}, journal={arXiv preprint arXiv:1909.09436}, year={2019} } ### Contributions Thanks to @SBrandeis for adding this dataset.
[ "# Dataset Card for CodeSearchNet corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Data: URL\n- Leaderboard: URL", "### Dataset Summary\n\nCodeSearchNet corpus is a dataset of 2 milllion (comment, code) pairs from opensource libraries hosted on GitHub. It contains code and documentation for several programming languages.\n\nCodeSearchNet corpus was gathered to support the CodeSearchNet challenge, to explore the problem of code retrieval using natural language.", "### Supported Tasks and Leaderboards\n\n- 'language-modeling': The dataset can be used to train a model for modelling programming languages, which consists in building language models for programming languages.", "### Languages\n\n- Go programming language\n- Java programming language\n- Javascript programming language\n- PHP programming language\n- Python programming language\n- Ruby programming language", "## Dataset Structure", "### Data Instances\n\nA data point consists of a function code along with its documentation. Each data point also contains meta data on the function, such as the repository it was extracted from.", "### Data Fields\n\n- 'id': Arbitrary number\n- 'repository_name': name of the GitHub repository\n- 'func_path_in_repository': tl;dr: path to the file which holds the function in the repository\n- 'func_name': name of the function in the file\n- 'whole_func_string': Code + documentation of the function\n- 'language': Programming language in whoch the function is written\n- 'func_code_string': Function code\n- 'func_code_tokens': Tokens yielded by Treesitter\n- 'func_documentation_string': Function documentation\n- 'func_documentation_string_tokens': Tokens yielded by Treesitter\n- 'split_name': Name of the split to which the example belongs (one of train, test or valid)\n- 'func_code_url': URL to the function code on Github", "### Data Splits\n\nThree splits are available:\n- train\n- test\n- valid", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nAll information can be retrieved in the original technical review\n\nCorpus collection:\n\nCorpus has been collected from publicly available open-source non-fork GitHub repositories, using URL to identify all projects which are used by at least one other project, and sort them by “popularity” as indicated by the number of stars and forks. \n\nThen, any projects that do not have a license or whose license does not explicitly permit the re-distribution of parts of the project were removed. Treesitter - GitHub's universal parser - has been used to then tokenize all Go, Java, JavaScript, Python, PHP and Ruby functions (or methods) using and, where available, their respective documentation text using a heuristic regular expression.\n\nCorpus filtering:\n\nFunctions without documentation are removed from the corpus. This yields a set of pairs ($c_i$, $d_i$) where ci is some function documented by di. 
Pairs ($c_i$, $d_i$) are passed through the folllowing preprocessing tasks:\n\n- Documentation $d_i$ is truncated to the first full paragraph to remove in-depth discussion of function arguments and return values\n- Pairs in which $d_i$ is shorter than three tokens are removed\n- Functions $c_i$ whose implementation is shorter than three lines are removed\n- Functions whose name contains the substring “test” are removed\n- Constructors and standard extenion methods (eg '__str__' in Python or 'toString' in Java) are removed\n- Duplicates and near duplicates functions are removed, in order to keep only one version of the function", "#### Who are the source language producers?\n\nOpenSource contributors produced the code and documentations.\n\nThe dataset was gatherered and preprocessed automatically.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nEach example in the dataset has is extracted from a GitHub repository, and each repository has its own license. Example-wise license information is not (yet) included in this dataset: you will need to find out yourself which license the code is using.\n\n\n\n@article{husain2019codesearchnet,\n title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},\n author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\n journal={arXiv preprint arXiv:1909.09436},\n year={2019}\n}", "### Contributions\n\nThanks to @SBrandeis for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-code #license-other #arxiv-1909.09436 #region-us \n", "# Dataset Card for CodeSearchNet corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Data: URL\n- Leaderboard: URL", "### Dataset Summary\n\nCodeSearchNet corpus is a dataset of 2 milllion (comment, code) pairs from opensource libraries hosted on GitHub. It contains code and documentation for several programming languages.\n\nCodeSearchNet corpus was gathered to support the CodeSearchNet challenge, to explore the problem of code retrieval using natural language.", "### Supported Tasks and Leaderboards\n\n- 'language-modeling': The dataset can be used to train a model for modelling programming languages, which consists in building language models for programming languages.", "### Languages\n\n- Go programming language\n- Java programming language\n- Javascript programming language\n- PHP programming language\n- Python programming language\n- Ruby programming language", "## Dataset Structure", "### Data Instances\n\nA data point consists of a function code along with its documentation. Each data point also contains meta data on the function, such as the repository it was extracted from.", "### Data Fields\n\n- 'id': Arbitrary number\n- 'repository_name': name of the GitHub repository\n- 'func_path_in_repository': tl;dr: path to the file which holds the function in the repository\n- 'func_name': name of the function in the file\n- 'whole_func_string': Code + documentation of the function\n- 'language': Programming language in whoch the function is written\n- 'func_code_string': Function code\n- 'func_code_tokens': Tokens yielded by Treesitter\n- 'func_documentation_string': Function documentation\n- 'func_documentation_string_tokens': Tokens yielded by Treesitter\n- 'split_name': Name of the split to which the example belongs (one of train, test or valid)\n- 'func_code_url': URL to the function code on Github", "### Data Splits\n\nThree splits are available:\n- train\n- test\n- valid", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nAll information can be retrieved in the original technical review\n\nCorpus collection:\n\nCorpus has been collected from publicly available open-source non-fork GitHub repositories, using URL to identify all projects which are used by at least one other project, and sort them by “popularity” as indicated by the number of stars and forks. \n\nThen, any projects that do not have a license or whose license does not explicitly permit the re-distribution of parts of the project were removed. 
Treesitter - GitHub's universal parser - has been used to then tokenize all Go, Java, JavaScript, Python, PHP and Ruby functions (or methods) using and, where available, their respective documentation text using a heuristic regular expression.\n\nCorpus filtering:\n\nFunctions without documentation are removed from the corpus. This yields a set of pairs ($c_i$, $d_i$) where ci is some function documented by di. Pairs ($c_i$, $d_i$) are passed through the folllowing preprocessing tasks:\n\n- Documentation $d_i$ is truncated to the first full paragraph to remove in-depth discussion of function arguments and return values\n- Pairs in which $d_i$ is shorter than three tokens are removed\n- Functions $c_i$ whose implementation is shorter than three lines are removed\n- Functions whose name contains the substring “test” are removed\n- Constructors and standard extenion methods (eg '__str__' in Python or 'toString' in Java) are removed\n- Duplicates and near duplicates functions are removed, in order to keep only one version of the function", "#### Who are the source language producers?\n\nOpenSource contributors produced the code and documentations.\n\nThe dataset was gatherered and preprocessed automatically.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nEach example in the dataset has is extracted from a GitHub repository, and each repository has its own license. Example-wise license information is not (yet) included in this dataset: you will need to find out yourself which license the code is using.\n\n\n\n@article{husain2019codesearchnet,\n title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},\n author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\n journal={arXiv preprint arXiv:1909.09436},\n year={2019}\n}", "### Contributions\n\nThanks to @SBrandeis for adding this dataset." ]
[ 144, 9, 120, 27, 80, 49, 34, 6, 44, 225, 17, 5, 7, 4, 384, 34, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 159, 17 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-code #license-other #arxiv-1909.09436 #region-us \n# Dataset Card for CodeSearchNet corpus## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Data: URL\n- Leaderboard: URL### Dataset Summary\n\nCodeSearchNet corpus is a dataset of 2 milllion (comment, code) pairs from opensource libraries hosted on GitHub. It contains code and documentation for several programming languages.\n\nCodeSearchNet corpus was gathered to support the CodeSearchNet challenge, to explore the problem of code retrieval using natural language.### Supported Tasks and Leaderboards\n\n- 'language-modeling': The dataset can be used to train a model for modelling programming languages, which consists in building language models for programming languages.### Languages\n\n- Go programming language\n- Java programming language\n- Javascript programming language\n- PHP programming language\n- Python programming language\n- Ruby programming language## Dataset Structure", "passage: ### Data Instances\n\nA data point consists of a function code along with its documentation. Each data point also contains meta data on the function, such as the repository it was extracted from.### Data Fields\n\n- 'id': Arbitrary number\n- 'repository_name': name of the GitHub repository\n- 'func_path_in_repository': tl;dr: path to the file which holds the function in the repository\n- 'func_name': name of the function in the file\n- 'whole_func_string': Code + documentation of the function\n- 'language': Programming language in whoch the function is written\n- 'func_code_string': Function code\n- 'func_code_tokens': Tokens yielded by Treesitter\n- 'func_documentation_string': Function documentation\n- 'func_documentation_string_tokens': Tokens yielded by Treesitter\n- 'split_name': Name of the split to which the example belongs (one of train, test or valid)\n- 'func_code_url': URL to the function code on Github### Data Splits\n\nThree splits are available:\n- train\n- test\n- valid## Dataset Creation### Curation Rationale### Source Data" ]
706a9c957bd57cc44d86068aa3b539d871ca9dbb
# Dataset Card for "code_x_glue_cc_clone_detection_big_clone_bench" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench ### Dataset Summary CodeXGLUE Clone-detection-BigCloneBench dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench Given two codes as the input, the task is to do binary classification (0/1), where 1 stands for semantic equivalence and 0 for others. Models are evaluated by F1 score. The dataset we use is BigCloneBench and filtered following the paper Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree. ### Supported Tasks and Leaderboards - `semantic-similarity-classification`: The dataset can be used to train a model for classifying if two given java methods are cloens of each other. ### Languages - Java **programming** language ## Dataset Structure ### Data Instances An example of 'test' looks as follows. ``` { "func1": " @Test(expected = GadgetException.class)\n public void malformedGadgetSpecIsCachedAndThrows() throws Exception {\n HttpRequest request = createCacheableRequest();\n expect(pipeline.execute(request)).andReturn(new HttpResponse(\"malformed junk\")).once();\n replay(pipeline);\n try {\n specFactory.getGadgetSpec(createContext(SPEC_URL, false));\n fail(\"No exception thrown on bad parse\");\n } catch (GadgetException e) {\n }\n specFactory.getGadgetSpec(createContext(SPEC_URL, false));\n }\n", "func2": " public InputStream getInputStream() throws TGBrowserException {\n try {\n if (!this.isFolder()) {\n URL url = new URL(this.url);\n InputStream stream = url.openStream();\n return stream;\n }\n } catch (Throwable throwable) {\n throw new TGBrowserException(throwable);\n }\n return null;\n }\n", "id": 0, "id1": 2381663, "id2": 4458076, "label": false } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. 
#### default

|field name| type | description |
|----------|------|--------------------------------------------------------------------------|
|id        |int32 | Index of the sample                                                      |
|id1       |int32 | The first function id                                                    |
|id2       |int32 | The second function id                                                   |
|func1     |string| The full text of the first function                                      |
|func2     |string| The full text of the second function                                     |
|label     |bool  | 1 if the two functions are semantically equivalent (clones), 0 otherwise |

### Data Splits

| name  |train |validation| test |
|-------|-----:|---------:|-----:|
|default|901028|    415416|415416|

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

Data was mined from the IJaDataset 2.0 dataset.

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

Potential clones were identified automatically using search heuristics and then manually labeled by three judges.

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

Most of the clones are Type-1 and Type-2, with Type-3 and especially Type-4 clones being rare.

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

https://github.com/microsoft, https://github.com/madlag

### Licensing Information

Computational Use of Data Agreement (C-UDA) License.

### Citation Information

```
@inproceedings{svajlenko2014towards,
  title={Towards a big data curated benchmark of inter-project code clones},
  author={Svajlenko, Jeffrey and Islam, Judith F and Keivanloo, Iman and Roy, Chanchal K and Mia, Mohammad Mamun},
  booktitle={2014 IEEE International Conference on Software Maintenance and Evolution},
  pages={476--480},
  year={2014},
  organization={IEEE}
}

@inproceedings{wang2020detecting,
  title={Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree},
  author={Wang, Wenhan and Li, Ge and Ma, Bo and Xia, Xin and Jin, Zhi},
  booktitle={2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER)},
  pages={261--271},
  year={2020},
  organization={IEEE}
}
```

### Contributions

Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
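For orientation, a minimal usage sketch with the Hugging Face `datasets` library might look as follows. The dataset identifier and field names are the ones listed in this card; the token-overlap baseline and the scikit-learn F1 computation are illustrative assumptions, not the official CodeXGLUE evaluation pipeline.

```python
from datasets import load_dataset
from sklearn.metrics import f1_score

# Load the clone-detection pairs (split names as in the Data Splits table above).
ds = load_dataset("code_x_glue_cc_clone_detection_big_clone_bench", split="validation")

# A deliberately naive baseline: call a pair a clone iff the two functions share enough tokens.
def token_overlap(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

subset = ds.select(range(1000))
preds = [token_overlap(ex["func1"], ex["func2"]) > 0.5 for ex in subset]
gold = [ex["label"] for ex in subset]
print("F1 on the first 1000 validation pairs:", f1_score(gold, preds))
```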
code_x_glue_cc_clone_detection_big_clone_bench
[ "task_categories:text-classification", "task_ids:semantic-similarity-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:code", "license:c-uda", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["code"], "license": ["c-uda"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["semantic-similarity-classification"], "pretty_name": "CodeXGlueCcCloneDetectionBigCloneBench", "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "id1", "dtype": "int32"}, {"name": "id2", "dtype": "int32"}, {"name": "func1", "dtype": "string"}, {"name": "func2", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 2888035029, "num_examples": 901028}, {"name": "validation", "num_bytes": 1371399358, "num_examples": 415416}, {"name": "test", "num_bytes": 1220662565, "num_examples": 415416}], "download_size": 1279275281, "dataset_size": 5480096952}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-24T14:19:56+00:00
[]
[ "code" ]
TAGS #task_categories-text-classification #task_ids-semantic-similarity-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-code #license-c-uda #region-us
Dataset Card for "code\_x\_glue\_cc\_clone\_detection\_big\_clone\_bench" ========================================================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL ### Dataset Summary CodeXGLUE Clone-detection-BigCloneBench dataset, available at URL Given two codes as the input, the task is to do binary classification (0/1), where 1 stands for semantic equivalence and 0 for others. Models are evaluated by F1 score. The dataset we use is BigCloneBench and filtered following the paper Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree. ### Supported Tasks and Leaderboards * 'semantic-similarity-classification': The dataset can be used to train a model for classifying if two given java methods are cloens of each other. ### Languages * Java programming language Dataset Structure ----------------- ### Data Instances An example of 'test' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### default field name: id, type: int32, description: Index of the sample field name: id1, type: int32, description: The first function id field name: id2, type: int32, description: The second function id field name: func1, type: string, description: The full text of the first function field name: func2, type: string, description: The full text of the second function field name: label, type: bool, description: 1 is the functions are not equivalent, 0 otherwise ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Data was mined from the IJaDataset 2.0 dataset. #### Who are the source language producers? ### Annotations #### Annotation process Data was manually labeled by three judges by automatically identifying potential clones using search heuristics. #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases Most of the clones are type 1 and 2 with type 3 and especially type 4 being rare. ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE Clone-detection-BigCloneBench dataset, available at URL\n\n\nGiven two codes as the input, the task is to do binary classification (0/1), where 1 stands for semantic equivalence and 0 for others. Models are evaluated by F1 score.\nThe dataset we use is BigCloneBench and filtered following the paper Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree.", "### Supported Tasks and Leaderboards\n\n\n* 'semantic-similarity-classification': The dataset can be used to train a model for classifying if two given java methods are cloens of each other.", "### Languages\n\n\n* Java programming language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'test' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: id1, type: int32, description: The first function id\nfield name: id2, type: int32, description: The second function id\nfield name: func1, type: string, description: The full text of the first function\nfield name: func2, type: string, description: The full text of the second function\nfield name: label, type: bool, description: 1 is the functions are not equivalent, 0 otherwise", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData was mined from the IJaDataset 2.0 dataset.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nData was manually labeled by three judges by automatically identifying potential clones using search heuristics.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nMost of the clones are type 1 and 2 with type 3 and especially type 4 being rare.", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-semantic-similarity-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-code #license-c-uda #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE Clone-detection-BigCloneBench dataset, available at URL\n\n\nGiven two codes as the input, the task is to do binary classification (0/1), where 1 stands for semantic equivalence and 0 for others. Models are evaluated by F1 score.\nThe dataset we use is BigCloneBench and filtered following the paper Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree.", "### Supported Tasks and Leaderboards\n\n\n* 'semantic-similarity-classification': The dataset can be used to train a model for classifying if two given java methods are cloens of each other.", "### Languages\n\n\n* Java programming language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'test' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: id1, type: int32, description: The first function id\nfield name: id2, type: int32, description: The second function id\nfield name: func1, type: string, description: The full text of the first function\nfield name: func2, type: string, description: The full text of the second function\nfield name: label, type: bool, description: 1 is the functions are not equivalent, 0 otherwise", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData was mined from the IJaDataset 2.0 dataset.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nData was manually labeled by three judges by automatically identifying potential clones using search heuristics.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nMost of the clones are type 1 and 2 with type 3 and especially type 4 being rare.", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 89, 113, 49, 16, 17, 32, 119, 11, 7, 4, 24, 10, 5, 28, 9, 18, 7, 28, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-semantic-similarity-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-code #license-c-uda #region-us \n### Dataset Summary\n\n\nCodeXGLUE Clone-detection-BigCloneBench dataset, available at URL\n\n\nGiven two codes as the input, the task is to do binary classification (0/1), where 1 stands for semantic equivalence and 0 for others. Models are evaluated by F1 score.\nThe dataset we use is BigCloneBench and filtered following the paper Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree.### Supported Tasks and Leaderboards\n\n\n* 'semantic-similarity-classification': The dataset can be used to train a model for classifying if two given java methods are cloens of each other.### Languages\n\n\n* Java programming language\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'test' looks as follows.### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: id1, type: int32, description: The first function id\nfield name: id2, type: int32, description: The second function id\nfield name: func1, type: string, description: The full text of the first function\nfield name: func2, type: string, description: The full text of the second function\nfield name: label, type: bool, description: 1 is the functions are not equivalent, 0 otherwise### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization\n\n\nData was mined from the IJaDataset 2.0 dataset.#### Who are the source language producers?### Annotations" ]
701869dece36b32174ee48277eada423cdcf699c
# Dataset Card for "code_x_glue_cc_clone_detection_poj_104" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104 ### Dataset Summary CodeXGLUE Clone-detection-POJ-104 dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-POJ-104 Given a code and a collection of candidates as the input, the task is to return Top K codes with the same semantic. Models are evaluated by MAP score. We use POJ-104 dataset on this task. ### Supported Tasks and Leaderboards - `document-retrieval`: The dataset can be used to train a model for retrieving top-k codes with the same semantics. ### Languages - C++ **programming** language ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` { "code": "\nint f(int shu,int min)\n{ \n int k=1;\n if(shu < min)\n { \n k= 0; \n return k;\n } \n else\n {\n for(int i = min;i<shu;i++)\n { \n if(shu%i == 0)\n { \n k=k+ f(shu/i,i); \n } \n \n \n } \n return k; \n}\n} \n\nmain()\n{\n int n,i,a;\n scanf(\"%d\",&n);\n \n for(i=0;i<n;i++)\n {\n scanf(\"%d\",&a);\n \n if(i!=n-1) \n printf(\"%d\\n\",f(a,2));\n else\n printf(\"%d\",f(a,2)); \n \n \n \n } \n \n \n }", "id": 0, "label": "home" } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### default |field name| type | description | |----------|------|----------------------------------------------| |id |int32 | Index of the sample | |code |string| The full text of the function | |label |string| The id of problem that the source code solves| ### Data Splits | name |train|validation|test | |-------|----:|---------:|----:| |default|32000| 8000|12000| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

https://github.com/microsoft, https://github.com/madlag

### Licensing Information

Computational Use of Data Agreement (C-UDA) License.

### Citation Information

```
@inproceedings{mou2016convolutional,
  title={Convolutional neural networks over tree structures for programming language processing},
  author={Mou, Lili and Li, Ge and Zhang, Lu and Wang, Tao and Jin, Zhi},
  booktitle={Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence},
  pages={1287--1293},
  year={2016}
}
```

### Contributions

Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
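As a rough illustration of the retrieval setup, the sketch below embeds every program with TF-IDF and returns the top-K most similar candidates for a query. TF-IDF is only a stand-in for a learned code encoder, and the MAP evaluation used by the benchmark is not implemented here; the dataset identifier and field names follow this card.

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ds = load_dataset("code_x_glue_cc_clone_detection_poj104", split="validation")
codes, labels = ds["code"], ds["label"]

# Bag-of-tokens representation of every program (stand-in for a neural code encoder).
vectorizer = TfidfVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(codes)

K, query_idx = 5, 0
scores = cosine_similarity(X[query_idx], X).ravel()
scores[query_idx] = -1.0                      # never retrieve the query itself
top_k = scores.argsort()[::-1][:K]

for rank, i in enumerate(top_k, start=1):
    same_problem = labels[i] == labels[query_idx]   # same problem id means same semantics
    print(f"rank {rank}: example {i}, same problem: {same_problem}")
```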
code_x_glue_cc_clone_detection_poj104
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "license:c-uda", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["code"], "license": ["c-uda"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "CodeXGlueCcCloneDetectionPoj104", "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "code", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20179075, "num_examples": 32500}, {"name": "validation", "num_bytes": 6382433, "num_examples": 8500}, {"name": "test", "num_bytes": 7227506, "num_examples": 12000}], "download_size": 13348734, "dataset_size": 33789014}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-24T13:57:30+00:00
[]
[ "code" ]
TAGS #task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #region-us
Dataset Card for "code\_x\_glue\_cc\_clone\_detection\_poj\_104" ================================================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL ### Dataset Summary CodeXGLUE Clone-detection-POJ-104 dataset, available at URL Given a code and a collection of candidates as the input, the task is to return Top K codes with the same semantic. Models are evaluated by MAP score. We use POJ-104 dataset on this task. ### Supported Tasks and Leaderboards * 'document-retrieval': The dataset can be used to train a model for retrieving top-k codes with the same semantics. ### Languages * C++ programming language Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### default field name: id, type: int32, description: Index of the sample field name: code, type: string, description: The full text of the function field name: label, type: string, description: The id of problem that the source code solves ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE Clone-detection-POJ-104 dataset, available at URL\n\n\nGiven a code and a collection of candidates as the input, the task is to return Top K codes with the same semantic. Models are evaluated by MAP score.\nWe use POJ-104 dataset on this task.", "### Supported Tasks and Leaderboards\n\n\n* 'document-retrieval': The dataset can be used to train a model for retrieving top-k codes with the same semantics.", "### Languages\n\n\n* C++ programming language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: code, type: string, description: The full text of the function\nfield name: label, type: string, description: The id of problem that the source code solves", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE Clone-detection-POJ-104 dataset, available at URL\n\n\nGiven a code and a collection of candidates as the input, the task is to return Top K codes with the same semantic. Models are evaluated by MAP score.\nWe use POJ-104 dataset on this task.", "### Supported Tasks and Leaderboards\n\n\n* 'document-retrieval': The dataset can be used to train a model for retrieving top-k codes with the same semantics.", "### Languages\n\n\n* C++ programming language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: code, type: string, description: The full text of the function\nfield name: label, type: string, description: The id of problem that the source code solves", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 86, 77, 45, 17, 18, 32, 58, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #region-us \n### Dataset Summary\n\n\nCodeXGLUE Clone-detection-POJ-104 dataset, available at URL\n\n\nGiven a code and a collection of candidates as the input, the task is to return Top K codes with the same semantic. Models are evaluated by MAP score.\nWe use POJ-104 dataset on this task.### Supported Tasks and Leaderboards\n\n\n* 'document-retrieval': The dataset can be used to train a model for retrieving top-k codes with the same semantics.### Languages\n\n\n* C++ programming language\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: code, type: string, description: The full text of the function\nfield name: label, type: string, description: The id of problem that the source code solves### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nURL URL### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
010d14ff326ce72e24eed99110e54f82f54731c2
# Dataset Card for "code_x_glue_cc_cloze_testing_all" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/ClozeTesting-all ### Dataset Summary CodeXGLUE ClozeTesting-all dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/ClozeTesting-all Cloze tests are widely adopted in Natural Languages Processing to evaluate the performance of the trained language models. The task is aimed to predict the answers for the blank with the context of the blank, which can be formulated as a multi-choice classification problem. Here we present the two cloze testing datasets in code domain with six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word. The only difference between ClozeTest-maxmin and ClozeTest-all is their selected words sets, where ClozeTest-maxmin only contains two words while ClozeTest-all contains 930 words. ### Supported Tasks and Leaderboards - `slot-filling`: The dataset can be used to train a model for predicting the missing token from a piece of code, similar to the Cloze test. ### Languages - Go **programming** language - Java **programming** language - Javascript **programming** language - PHP **programming** language - Python **programming** language - Ruby **programming** language ## Dataset Structure ### Data Instances #### go An example of 'train' looks as follows. ``` { "id": 0, "idx": "all-1", "nl_tokens": ["MarshalJSON", "supports", "json", ".", "Marshaler", "interface"], "pl_tokens": ["func", "(", "v", "ContextRealtimeData", ")", "MarshalJSON", "(", ")", "(", "[", "]", "byte", ",", "error", ")", "{", "w", ":=", "jwriter", ".", "<mask>", "{", "}", "\n", "easyjsonC5a4559bEncodeGithubComChromedpCdprotoWebaudio7", "(", "&", "w", ",", "v", ")", "\n", "return", "w", ".", "Buffer", ".", "BuildBytes", "(", ")", ",", "w", ".", "Error", "\n", "}"] } ``` #### java An example of 'train' looks as follows. 
``` { "id": 0, "idx": "all-1", "nl_tokens": ["/", "*", "(", "non", "-", "Javadoc", ")"], "pl_tokens": ["@", "Override", "public", "int", "peekBit", "(", ")", "throws", "AACException", "{", "int", "ret", ";", "if", "(", "bitsCached", ">", "0", ")", "{", "ret", "=", "(", "cache", ">>", "(", "bitsCached", "-", "1", ")", ")", "&", "1", ";", "}", "else", "{", "final", "int", "word", "=", "readCache", "(", "true", ")", ";", "ret", "=", "(", "<mask>", ">>", "WORD_BITS", "-", "1", ")", "&", "1", ";", "}", "return", "ret", ";", "}"] } ``` #### javascript An example of 'train' looks as follows. ``` { "id": 0, "idx": "all-1", "nl_tokens": ["Cast", "query", "params", "according", "to", "type"], "pl_tokens": ["function", "castQueryParams", "(", "relId", ",", "data", ",", "{", "relationships", "}", ")", "{", "const", "relationship", "=", "relationships", "[", "relId", "]", "if", "(", "!", "relationship", ".", "query", ")", "{", "return", "{", "}", "}", "return", "Object", ".", "keys", "(", "relationship", ".", "query", ")", ".", "reduce", "(", "(", "params", ",", "<mask>", ")", "=>", "{", "const", "value", "=", "getField", "(", "data", ",", "relationship", ".", "query", "[", "key", "]", ")", "if", "(", "value", "===", "undefined", ")", "{", "throw", "new", "TypeError", "(", "'Missing value for query param'", ")", "}", "return", "{", "...", "params", ",", "[", "key", "]", ":", "value", "}", "}", ",", "{", "}", ")", "}"] } ``` #### php An example of 'train' looks as follows. ``` { "id": 0, "idx": "all-1", "nl_tokens": ["Get", "choices", "."], "pl_tokens": ["protected", "<mask>", "getChoices", "(", "FormFieldTranslation", "$", "translation", ")", "{", "$", "choices", "=", "preg_split", "(", "'/\\r\\n|\\r|\\n/'", ",", "$", "translation", "->", "getOption", "(", "'choices'", ")", ",", "-", "1", ",", "PREG_SPLIT_NO_EMPTY", ")", ";", "return", "array_combine", "(", "$", "choices", ",", "$", "choices", ")", ";", "}"] } ``` #### python An example of 'train' looks as follows. ``` { "id": 0, "idx": "all-1", "nl_tokens": ["Post", "a", "review"], "pl_tokens": ["def", "post_review", "(", "session", ",", "review", ")", ":", "# POST /api/projects/0.1/reviews/", "<mask>", "=", "make_post_request", "(", "session", ",", "'reviews'", ",", "json_data", "=", "review", ")", "json_data", "=", "response", ".", "json", "(", ")", "if", "response", ".", "status_code", "==", "200", ":", "return", "json_data", "[", "'status'", "]", "else", ":", "raise", "ReviewNotPostedException", "(", "message", "=", "json_data", "[", "'message'", "]", ",", "error_code", "=", "json_data", "[", "'error_code'", "]", ",", "request_id", "=", "json_data", "[", "'request_id'", "]", ")"] } ``` #### ruby An example of 'train' looks as follows. 
``` { "id": 0, "idx": "all-1", "nl_tokens": ["By", "default", "taskers", "don", "t", "see", "the", "flor", "variables", "in", "the", "execution", ".", "If", "include_vars", "or", "exclude_vars", "is", "present", "in", "the", "configuration", "of", "the", "tasker", "some", "or", "all", "of", "the", "variables", "are", "passed", "."], "pl_tokens": ["def", "gather_vars", "(", "executor", ",", "tconf", ",", "message", ")", "# try to return before a potentially costly call to executor.vars(nid)", "return", "nil", "if", "(", "tconf", ".", "keys", "&", "%w[", "include_vars", "exclude_vars", "]", ")", ".", "empty?", "# default behaviour, don't pass variables to taskers", "iv", "=", "expand_filter", "(", "tconf", "[", "'include_vars'", "]", ")", "return", "nil", "if", "iv", "==", "false", "ev", "=", "expand_filter", "(", "tconf", "[", "'exclude_vars'", "]", ")", "return", "{", "}", "if", "ev", "==", "true", "vars", "=", "executor", ".", "vars", "(", "message", "[", "'nid'", "]", ")", "return", "vars", "if", "iv", "==", "true", "vars", "=", "vars", ".", "select", "{", "|", "k", ",", "v", "|", "var_match", "(", "k", ",", "iv", ")", "}", "if", "<mask>", "vars", "=", "vars", ".", "reject", "{", "|", "k", ",", "v", "|", "var_match", "(", "k", ",", "ev", ")", "}", "if", "ev", "vars", "end"] } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### go, java, javascript, php, python, ruby |field name| type | description | |----------|----------------|------------------------------| |id |int32 | Index of the sample | |idx |string | Original index in the dataset| |nl_tokens |Sequence[string]| Natural language tokens | |pl_tokens |Sequence[string]| Programming language tokens | ### Data Splits | name |train| |----------|----:| |go |25282| |java |40492| |javascript|13837| |php |51930| |python |40137| |ruby | 4437| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Data from CodeSearchNet Challenge dataset. [More Information Needed] #### Who are the source language producers? Software Engineering developers. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators https://github.com/microsoft, https://github.com/madlag ### Licensing Information Computational Use of Data Agreement (C-UDA) License. 
### Citation Information

```
@article{CodeXGLUE,
  title={CodeXGLUE: An Open Challenge for Code Intelligence},
  journal={arXiv},
  year={2020},
}

@article{feng2020codebert,
  title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages},
  author={Feng, Zhangyin and Guo, Daya and Tang, Duyu and Duan, Nan and Feng, Xiaocheng and Gong, Ming and Shou, Linjun and Qin, Bing and Liu, Ting and Jiang, Daxin and others},
  journal={arXiv preprint arXiv:2002.08155},
  year={2020}
}

@article{husain2019codesearchnet,
  title={CodeSearchNet Challenge: Evaluating the State of Semantic Code Search},
  author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
  journal={arXiv preprint arXiv:1909.09436},
  year={2019}
}
```

### Contributions

Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
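A small sketch of how the cloze examples could be fed to an off-the-shelf masked language model is shown below. The checkpoint name is an assumption (any RoBERTa-style masked LM trained on code should work), and this is not the official CodeXGLUE evaluation script; very long examples may need to be truncated to the model's maximum input length.

```python
from datasets import load_dataset
from transformers import pipeline

# Load one language config; each example contains exactly one "<mask>" token in pl_tokens.
ds = load_dataset("code_x_glue_cc_cloze_testing_all", "python", split="train")

# Assumed checkpoint; substitute any code-aware masked language model available to you.
fill = pipeline("fill-mask", model="microsoft/codebert-base-mlm")

example = ds[0]
text = " ".join(example["nl_tokens"]) + " " + " ".join(example["pl_tokens"])
text = text.replace("<mask>", fill.tokenizer.mask_token)

# Print the model's top candidates for the masked token.
for candidate in fill(text, top_k=3):
    print(candidate["token_str"], candidate["score"])
```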
code_x_glue_cc_cloze_testing_all
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:slot-filling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:code", "license:c-uda", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["code"], "license": ["c-uda"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["slot-filling"], "pretty_name": "CodeXGlueCcClozeTestingAll", "config_names": ["go", "java", "javascript", "php", "python", "ruby"], "dataset_info": [{"config_name": "go", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 22409705, "num_examples": 25282}], "download_size": 7317578, "dataset_size": 22409705}, {"config_name": "java", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 40392865, "num_examples": 40492}], "download_size": 13540081, "dataset_size": 40392865}, {"config_name": "javascript", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 16090142, "num_examples": 13837}], "download_size": 5380631, "dataset_size": 16090142}, {"config_name": "php", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 51328868, "num_examples": 51930}], "download_size": 16553882, "dataset_size": 51328868}, {"config_name": "python", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 40631113, "num_examples": 40137}], "download_size": 15081309, "dataset_size": 40631113}, {"config_name": "ruby", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 3454884, "num_examples": 4437}], "download_size": 1301455, "dataset_size": 3454884}], "configs": [{"config_name": "go", "data_files": [{"split": "train", "path": "go/train-*"}]}, {"config_name": "java", "data_files": [{"split": "train", "path": "java/train-*"}]}, {"config_name": "javascript", "data_files": [{"split": "train", "path": "javascript/train-*"}]}, {"config_name": "php", "data_files": [{"split": "train", "path": "php/train-*"}]}, {"config_name": "python", "data_files": [{"split": "train", "path": "python/train-*"}]}, {"config_name": "ruby", "data_files": [{"split": "train", "path": "ruby/train-*"}]}]}
2024-01-24T13:52:44+00:00
[]
[ "code" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-slot-filling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-code #license-c-uda #region-us
Dataset Card for "code\_x\_glue\_cc\_cloze\_testing\_all" ========================================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL ### Dataset Summary CodeXGLUE ClozeTesting-all dataset, available at URL Cloze tests are widely adopted in Natural Languages Processing to evaluate the performance of the trained language models. The task is aimed to predict the answers for the blank with the context of the blank, which can be formulated as a multi-choice classification problem. Here we present the two cloze testing datasets in code domain with six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word. The only difference between ClozeTest-maxmin and ClozeTest-all is their selected words sets, where ClozeTest-maxmin only contains two words while ClozeTest-all contains 930 words. ### Supported Tasks and Leaderboards * 'slot-filling': The dataset can be used to train a model for predicting the missing token from a piece of code, similar to the Cloze test. ### Languages * Go programming language * Java programming language * Javascript programming language * PHP programming language * Python programming language * Ruby programming language Dataset Structure ----------------- ### Data Instances #### go An example of 'train' looks as follows. #### java An example of 'train' looks as follows. #### javascript An example of 'train' looks as follows. #### php An example of 'train' looks as follows. #### python An example of 'train' looks as follows. #### ruby An example of 'train' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### go, java, javascript, php, python, ruby field name: id, type: int32, description: Index of the sample field name: idx, type: string, description: Original index in the dataset field name: nl\_tokens, type: Sequence[string], description: Natural language tokens field name: pl\_tokens, type: Sequence[string], description: Programming language tokens ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Data from CodeSearchNet Challenge dataset. #### Who are the source language producers? Software Engineering developers. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE ClozeTesting-all dataset, available at URL\n\n\nCloze tests are widely adopted in Natural Languages Processing to evaluate the performance of the trained language models. The task is aimed to predict the answers for the blank with the context of the blank, which can be formulated as a multi-choice classification problem.\nHere we present the two cloze testing datasets in code domain with six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word.\nThe only difference between ClozeTest-maxmin and ClozeTest-all is their selected words sets, where ClozeTest-maxmin only contains two words while ClozeTest-all contains 930 words.", "### Supported Tasks and Leaderboards\n\n\n* 'slot-filling': The dataset can be used to train a model for predicting the missing token from a piece of code, similar to the Cloze test.", "### Languages\n\n\n* Go programming language\n* Java programming language\n* Javascript programming language\n* PHP programming language\n* Python programming language\n* Ruby programming language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### go\n\n\nAn example of 'train' looks as follows.", "#### java\n\n\nAn example of 'train' looks as follows.", "#### javascript\n\n\nAn example of 'train' looks as follows.", "#### php\n\n\nAn example of 'train' looks as follows.", "#### python\n\n\nAn example of 'train' looks as follows.", "#### ruby\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### go, java, javascript, php, python, ruby\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: idx, type: string, description: Original index in the dataset\nfield name: nl\\_tokens, type: Sequence[string], description: Natural language tokens\nfield name: pl\\_tokens, type: Sequence[string], description: Programming language tokens", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData from CodeSearchNet Challenge dataset.", "#### Who are the source language producers?\n\n\nSoftware Engineering developers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-slot-filling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-code #license-c-uda #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE ClozeTesting-all dataset, available at URL\n\n\nCloze tests are widely adopted in Natural Languages Processing to evaluate the performance of the trained language models. The task is aimed to predict the answers for the blank with the context of the blank, which can be formulated as a multi-choice classification problem.\nHere we present the two cloze testing datasets in code domain with six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word.\nThe only difference between ClozeTest-maxmin and ClozeTest-all is their selected words sets, where ClozeTest-maxmin only contains two words while ClozeTest-all contains 930 words.", "### Supported Tasks and Leaderboards\n\n\n* 'slot-filling': The dataset can be used to train a model for predicting the missing token from a piece of code, similar to the Cloze test.", "### Languages\n\n\n* Go programming language\n* Java programming language\n* Javascript programming language\n* PHP programming language\n* Python programming language\n* Ruby programming language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### go\n\n\nAn example of 'train' looks as follows.", "#### java\n\n\nAn example of 'train' looks as follows.", "#### javascript\n\n\nAn example of 'train' looks as follows.", "#### php\n\n\nAn example of 'train' looks as follows.", "#### python\n\n\nAn example of 'train' looks as follows.", "#### ruby\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### go, java, javascript, php, python, ruby\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: idx, type: string, description: Original index in the dataset\nfield name: nl\\_tokens, type: Sequence[string], description: Natural language tokens\nfield name: pl\\_tokens, type: Sequence[string], description: Programming language tokens", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData from CodeSearchNet Challenge dataset.", "#### Who are the source language producers?\n\n\nSoftware Engineering developers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 108, 187, 49, 41, 6, 15, 16, 15, 16, 16, 16, 32, 102, 11, 7, 4, 19, 15, 5, 5, 9, 18, 7, 8, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-slot-filling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-code #license-c-uda #region-us \n### Dataset Summary\n\n\nCodeXGLUE ClozeTesting-all dataset, available at URL\n\n\nCloze tests are widely adopted in Natural Languages Processing to evaluate the performance of the trained language models. The task is aimed to predict the answers for the blank with the context of the blank, which can be formulated as a multi-choice classification problem.\nHere we present the two cloze testing datasets in code domain with six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word.\nThe only difference between ClozeTest-maxmin and ClozeTest-all is their selected words sets, where ClozeTest-maxmin only contains two words while ClozeTest-all contains 930 words.### Supported Tasks and Leaderboards\n\n\n* 'slot-filling': The dataset can be used to train a model for predicting the missing token from a piece of code, similar to the Cloze test.### Languages\n\n\n* Go programming language\n* Java programming language\n* Javascript programming language\n* PHP programming language\n* Python programming language\n* Ruby programming language\n\n\nDataset Structure\n-----------------### Data Instances#### go\n\n\nAn example of 'train' looks as follows.#### java\n\n\nAn example of 'train' looks as follows.#### javascript\n\n\nAn example of 'train' looks as follows.#### php\n\n\nAn example of 'train' looks as follows.#### python\n\n\nAn example of 'train' looks as follows.#### ruby\n\n\nAn example of 'train' looks as follows." ]
30c54fb231878a7b020be79e7f810687f8891f05
# Dataset Card for "code_x_glue_cc_cloze_testing_maxmin" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/ClozeTesting-maxmin ### Dataset Summary CodeXGLUE ClozeTesting-maxmin dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/ClozeTesting-maxmin Cloze tests are widely adopted in Natural Languages Processing to evaluate the performance of the trained language models. The task is aimed to predict the answers for the blank with the context of the blank, which can be formulated as a multi-choice classification problem. Here we present the two cloze testing datasets in code domain with six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word. The only difference between ClozeTest-maxmin and ClozeTest-all is their selected words sets, where ClozeTest-maxmin only contains two words while ClozeTest-all contains 930 words. ### Supported Tasks and Leaderboards - `slot-filling`: The dataset can be used to train a model for predicting the missing token from a piece of code, similar to the Cloze test. ### Languages - Go **programming** language - Java **programming** language - Javascript **programming** language - PHP **programming** language - Python **programming** language - Ruby **programming** language ## Dataset Structure ### Data Instances #### go An example of 'train' looks as follows. ``` { "id": 0, "idx": "maxmin-1", "nl_tokens": ["SetMaxStructPoolSize", "sets", "the", "struct", "pools", "max", "size", ".", "this", "may", "be", "usefull", "for", "fine", "grained", "performance", "tuning", "towards", "your", "application", "however", "the", "default", "should", "be", "fine", "for", "nearly", "all", "cases", ".", "only", "increase", "if", "you", "have", "a", "deeply", "nested", "struct", "structure", ".", "NOTE", ":", "this", "method", "is", "not", "thread", "-", "safe", "NOTE", ":", "this", "is", "only", "here", "to", "keep", "compatibility", "with", "v5", "in", "v6", "the", "method", "will", "be", "removed"], "pl_tokens": ["func", "(", "v", "*", "Validate", ")", "SetMaxStructPoolSize", "(", "<mask>", "int", ")", "{", "structPool", "=", "&", "sync", ".", "Pool", "{", "New", ":", "newStructErrors", "}", "\n", "}"] } ``` #### java An example of 'train' looks as follows. 
``` { "id": 0, "idx": "maxmin-1", "nl_tokens": ["Test", "whether", "find", "can", "be", "found", "at", "position", "startPos", "in", "the", "string", "src", "."], "pl_tokens": ["public", "static", "boolean", "startsWith", "(", "char", "[", "]", "src", ",", "char", "[", "]", "find", ",", "int", "startAt", ")", "{", "int", "startPos", "=", "startAt", ";", "boolean", "result", "=", "true", ";", "// Check ranges", "if", "(", "src", ".", "length", "<", "startPos", "+", "find", ".", "length", ")", "{", "result", "=", "false", ";", "}", "else", "{", "final", "int", "<mask>", "=", "find", ".", "length", ";", "for", "(", "int", "a", "=", "0", ";", "a", "<", "max", "&&", "result", ";", "a", "++", ")", "{", "if", "(", "src", "[", "startPos", "]", "!=", "find", "[", "a", "]", ")", "{", "result", "=", "false", ";", "}", "startPos", "++", ";", "}", "}", "return", "result", ";", "}"] } ``` #### javascript An example of 'train' looks as follows. ``` { "id": 0, "idx": "maxmin-1", "nl_tokens": ["string", ".", "max", "Maximum", "length", "of", "the", "string"], "pl_tokens": ["function", "(", "string", ")", "{", "// string.check check sting type and size", "return", "(", "(", "typeof", "string", "===", "'string'", "||", "string", "instanceof", "String", ")", "&&", "string", ".", "length", ">=", "this", ".", "<mask>", "&&", "string", ".", "length", "<=", "this", ".", "max", "&&", "(", "!", "this", ".", "match", "||", "string", ".", "match", "(", "this", ".", "match", ")", ")", ")", ";", "}"] } ``` #### php An example of 'train' looks as follows. ``` { "id": 0, "idx": "maxmin-1", "nl_tokens": ["Read", "the", "next", "character", "from", "the", "supplied", "string", ".", "Return", "null", "when", "we", "have", "run", "out", "of", "characters", "."], "pl_tokens": ["public", "function", "readOne", "(", ")", "{", "if", "(", "$", "this", "->", "pos", "<=", "$", "this", "->", "<mask>", ")", "{", "$", "value", "=", "$", "this", "->", "string", "[", "$", "this", "->", "pos", "]", ";", "$", "this", "->", "pos", "+=", "1", ";", "}", "else", "{", "$", "value", "=", "null", ";", "}", "return", "$", "value", ";", "}"] } ``` #### python An example of 'train' looks as follows. 
``` { "id": 0, "idx": "maxmin-1", "nl_tokens": ["Returns", "intermediary", "colors", "for", "given", "list", "of", "colors", "."], "pl_tokens": ["def", "_interpolate", "(", "self", ",", "colors", ",", "n", "=", "100", ")", ":", "gradient", "=", "[", "]", "for", "i", "in", "_range", "(", "n", ")", ":", "l", "=", "len", "(", "colors", ")", "-", "1", "x", "=", "int", "(", "1.0", "*", "i", "/", "n", "*", "l", ")", "x", "=", "<mask>", "(", "x", "+", "0", ",", "l", ")", "y", "=", "min", "(", "x", "+", "1", ",", "l", ")", "base", "=", "1.0", "*", "n", "/", "l", "*", "x", "d", "=", "(", "i", "-", "base", ")", "/", "(", "1.0", "*", "n", "/", "l", ")", "r", "=", "colors", "[", "x", "]", ".", "r", "*", "(", "1", "-", "d", ")", "+", "colors", "[", "y", "]", ".", "r", "*", "d", "g", "=", "colors", "[", "x", "]", ".", "g", "*", "(", "1", "-", "d", ")", "+", "colors", "[", "y", "]", ".", "g", "*", "d", "b", "=", "colors", "[", "x", "]", ".", "b", "*", "(", "1", "-", "d", ")", "+", "colors", "[", "y", "]", ".", "b", "*", "d", "a", "=", "colors", "[", "x", "]", ".", "a", "*", "(", "1", "-", "d", ")", "+", "colors", "[", "y", "]", ".", "a", "*", "d", "gradient", ".", "append", "(", "color", "(", "r", ",", "g", ",", "b", ",", "a", ",", "mode", "=", "\"rgb\"", ")", ")", "gradient", ".", "append", "(", "colors", "[", "-", "1", "]", ")", "return", "gradient"] } ``` #### ruby An example of 'train' looks as follows. ``` { "id": 0, "idx": "maxmin-1", "nl_tokens": ["Delete", "all", "copies", "that", "are", "older", "than", "the", "max", "age", "provided", "in", "seconds", "."], "pl_tokens": ["def", "clean", "(", "<mask>", ":", "24", "*", "60", "*", "60", ")", "Futex", ".", "new", "(", "file", ",", "log", ":", "@log", ")", ".", "open", "do", "list", "=", "load", "list", ".", "reject!", "do", "|", "s", "|", "if", "s", "[", ":time", "]", ">=", "Time", ".", "now", "-", "max", "false", "else", "@log", ".", "debug", "(", "\"Copy ##{s[:name]}/#{s[:host]}:#{s[:port]} is too old, over #{Age.new(s[:time])}\"", ")", "true", "end", "end", "save", "(", "list", ")", "deleted", "=", "0", "files", ".", "each", "do", "|", "f", "|", "next", "unless", "list", ".", "find", "{", "|", "s", "|", "s", "[", ":name", "]", "==", "File", ".", "basename", "(", "f", ",", "Copies", "::", "EXT", ")", "}", ".", "nil?", "file", "=", "File", ".", "join", "(", "@dir", ",", "f", ")", "size", "=", "File", ".", "size", "(", "file", ")", "File", ".", "delete", "(", "file", ")", "@log", ".", "debug", "(", "\"Copy at #{f} deleted: #{Size.new(size)}\"", ")", "deleted", "+=", "1", "end", "list", ".", "select!", "do", "|", "s", "|", "cp", "=", "File", ".", "join", "(", "@dir", ",", "\"#{s[:name]}#{Copies::EXT}\"", ")", "wallet", "=", "Wallet", ".", "new", "(", "cp", ")", "begin", "wallet", ".", "refurbish", "raise", "\"Invalid protocol #{wallet.protocol} in #{cp}\"", "unless", "wallet", ".", "protocol", "==", "Zold", "::", "PROTOCOL", "true", "rescue", "StandardError", "=>", "e", "FileUtils", ".", "rm_rf", "(", "cp", ")", "@log", ".", "debug", "(", "\"Copy at #{cp} deleted: #{Backtrace.new(e)}\"", ")", "deleted", "+=", "1", "false", "end", "end", "save", "(", "list", ")", "deleted", "end", "end"] } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. 
#### go, java, javascript, php, python, ruby |field name| type | description | |----------|----------------|------------------------------| |id |int32 | Index of the sample | |idx |string | Original index in the dataset| |nl_tokens |Sequence[string]| Natural language tokens | |pl_tokens |Sequence[string]| Programming language tokens | ### Data Splits | name |train| |----------|----:| |go | 152| |java | 482| |javascript| 272| |php | 407| |python | 1264| |ruby | 38| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Data from CodeSearchNet Challenge dataset. [More Information Needed] #### Who are the source language producers? Software Engineering developers. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators https://github.com/microsoft, https://github.com/madlag ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Citation Information ``` @article{CodeXGLUE, title={CodeXGLUE: An Open Challenge for Code Intelligence}, journal={arXiv}, year={2020}, } @article{feng2020codebert, title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages}, author={Feng, Zhangyin and Guo, Daya and Tang, Duyu and Duan, Nan and Feng, Xiaocheng and Gong, Ming and Shou, Linjun and Qin, Bing and Liu, Ting and Jiang, Daxin and others}, journal={arXiv preprint arXiv:2002.08155}, year={2020} } @article{husain2019codesearchnet, title={CodeSearchNet Challenge: Evaluating the State of Semantic Code Search}, author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc}, journal={arXiv preprint arXiv:1909.09436}, year={2019} } ``` ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
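Below is a minimal usage sketch for this cloze task. It assumes the Hugging Face `datasets` library and the field names listed above; the two-word candidate set `{"max", "min"}` is an assumption inferred from the dataset name and the examples, not something the card states explicitly.

```python
from datasets import load_dataset

# Load one language config of ClozeTesting-maxmin (only a "train" split is listed).
ds = load_dataset("code_x_glue_cc_cloze_testing_maxmin", "go", split="train")

# Assumed candidate answers for the masked slot (inferred from the dataset name).
CANDIDATES = ["max", "min"]

example = ds[0]
docstring = " ".join(example["nl_tokens"])   # natural-language context (the docstring)
code_tokens = example["pl_tokens"]           # code tokens containing a single "<mask>"
mask_pos = code_tokens.index("<mask>")

# Frame the instance as a two-way choice: fill each candidate into the masked
# position; a scoring model would then pick the more plausible completion.
for word in CANDIDATES:
    filled = code_tokens[:mask_pos] + [word] + code_tokens[mask_pos + 1:]
    print(word, "->", " ".join(filled))
```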
code_x_glue_cc_cloze_testing_maxmin
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:slot-filling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:code", "license:c-uda", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["code"], "license": ["c-uda"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["slot-filling"], "pretty_name": "CodeXGlueCcClozeTestingMaxmin", "config_names": ["go", "java", "javascript", "php", "python", "ruby"], "dataset_info": [{"config_name": "go", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 204977, "num_examples": 152}], "download_size": 68965, "dataset_size": 204977}, {"config_name": "java", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 785734, "num_examples": 482}], "download_size": 250672, "dataset_size": 785734}, {"config_name": "javascript", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 594327, "num_examples": 272}], "download_size": 188271, "dataset_size": 594327}, {"config_name": "php", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 705457, "num_examples": 407}], "download_size": 211107, "dataset_size": 705457}, {"config_name": "python", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 2566333, "num_examples": 1264}], "download_size": 885488, "dataset_size": 2566333}, {"config_name": "ruby", "features": [{"name": "id", "dtype": "int32"}, {"name": "idx", "dtype": "string"}, {"name": "nl_tokens", "sequence": "string"}, {"name": "pl_tokens", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 48926, "num_examples": 38}], "download_size": 22528, "dataset_size": 48926}], "configs": [{"config_name": "go", "data_files": [{"split": "train", "path": "go/train-*"}]}, {"config_name": "java", "data_files": [{"split": "train", "path": "java/train-*"}]}, {"config_name": "javascript", "data_files": [{"split": "train", "path": "javascript/train-*"}]}, {"config_name": "php", "data_files": [{"split": "train", "path": "php/train-*"}]}, {"config_name": "python", "data_files": [{"split": "train", "path": "python/train-*"}]}, {"config_name": "ruby", "data_files": [{"split": "train", "path": "ruby/train-*"}]}]}
2024-01-24T13:26:40+00:00
[]
[ "code" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-slot-filling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-code #license-c-uda #region-us
Dataset Card for "code\_x\_glue\_cc\_cloze\_testing\_maxmin" ============================================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL ### Dataset Summary CodeXGLUE ClozeTesting-maxmin dataset, available at URL Cloze tests are widely adopted in Natural Languages Processing to evaluate the performance of the trained language models. The task is aimed to predict the answers for the blank with the context of the blank, which can be formulated as a multi-choice classification problem. Here we present the two cloze testing datasets in code domain with six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word. The only difference between ClozeTest-maxmin and ClozeTest-all is their selected words sets, where ClozeTest-maxmin only contains two words while ClozeTest-all contains 930 words. ### Supported Tasks and Leaderboards * 'slot-filling': The dataset can be used to train a model for predicting the missing token from a piece of code, similar to the Cloze test. ### Languages * Go programming language * Java programming language * Javascript programming language * PHP programming language * Python programming language * Ruby programming language Dataset Structure ----------------- ### Data Instances #### go An example of 'train' looks as follows. #### java An example of 'train' looks as follows. #### javascript An example of 'train' looks as follows. #### php An example of 'train' looks as follows. #### python An example of 'train' looks as follows. #### ruby An example of 'train' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### go, java, javascript, php, python, ruby field name: id, type: int32, description: Index of the sample field name: idx, type: string, description: Original index in the dataset field name: nl\_tokens, type: Sequence[string], description: Natural language tokens field name: pl\_tokens, type: Sequence[string], description: Programming language tokens ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Data from CodeSearchNet Challenge dataset. #### Who are the source language producers? Software Engineering developers. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE ClozeTesting-maxmin dataset, available at URL\n\n\nCloze tests are widely adopted in Natural Languages Processing to evaluate the performance of the trained language models. The task is aimed to predict the answers for the blank with the context of the blank, which can be formulated as a multi-choice classification problem.\nHere we present the two cloze testing datasets in code domain with six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word.\nThe only difference between ClozeTest-maxmin and ClozeTest-all is their selected words sets, where ClozeTest-maxmin only contains two words while ClozeTest-all contains 930 words.", "### Supported Tasks and Leaderboards\n\n\n* 'slot-filling': The dataset can be used to train a model for predicting the missing token from a piece of code, similar to the Cloze test.", "### Languages\n\n\n* Go programming language\n* Java programming language\n* Javascript programming language\n* PHP programming language\n* Python programming language\n* Ruby programming language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### go\n\n\nAn example of 'train' looks as follows.", "#### java\n\n\nAn example of 'train' looks as follows.", "#### javascript\n\n\nAn example of 'train' looks as follows.", "#### php\n\n\nAn example of 'train' looks as follows.", "#### python\n\n\nAn example of 'train' looks as follows.", "#### ruby\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### go, java, javascript, php, python, ruby\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: idx, type: string, description: Original index in the dataset\nfield name: nl\\_tokens, type: Sequence[string], description: Natural language tokens\nfield name: pl\\_tokens, type: Sequence[string], description: Programming language tokens", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData from CodeSearchNet Challenge dataset.", "#### Who are the source language producers?\n\n\nSoftware Engineering developers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-slot-filling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-code #license-c-uda #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE ClozeTesting-maxmin dataset, available at URL\n\n\nCloze tests are widely adopted in Natural Languages Processing to evaluate the performance of the trained language models. The task is aimed to predict the answers for the blank with the context of the blank, which can be formulated as a multi-choice classification problem.\nHere we present the two cloze testing datasets in code domain with six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word.\nThe only difference between ClozeTest-maxmin and ClozeTest-all is their selected words sets, where ClozeTest-maxmin only contains two words while ClozeTest-all contains 930 words.", "### Supported Tasks and Leaderboards\n\n\n* 'slot-filling': The dataset can be used to train a model for predicting the missing token from a piece of code, similar to the Cloze test.", "### Languages\n\n\n* Go programming language\n* Java programming language\n* Javascript programming language\n* PHP programming language\n* Python programming language\n* Ruby programming language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### go\n\n\nAn example of 'train' looks as follows.", "#### java\n\n\nAn example of 'train' looks as follows.", "#### javascript\n\n\nAn example of 'train' looks as follows.", "#### php\n\n\nAn example of 'train' looks as follows.", "#### python\n\n\nAn example of 'train' looks as follows.", "#### ruby\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### go, java, javascript, php, python, ruby\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: idx, type: string, description: Original index in the dataset\nfield name: nl\\_tokens, type: Sequence[string], description: Natural language tokens\nfield name: pl\\_tokens, type: Sequence[string], description: Programming language tokens", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData from CodeSearchNet Challenge dataset.", "#### Who are the source language producers?\n\n\nSoftware Engineering developers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 108, 188, 49, 41, 6, 15, 16, 15, 16, 16, 16, 32, 102, 11, 7, 4, 19, 15, 5, 5, 9, 18, 7, 8, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-slot-filling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-code #license-c-uda #region-us \n### Dataset Summary\n\n\nCodeXGLUE ClozeTesting-maxmin dataset, available at URL\n\n\nCloze tests are widely adopted in Natural Languages Processing to evaluate the performance of the trained language models. The task is aimed to predict the answers for the blank with the context of the blank, which can be formulated as a multi-choice classification problem.\nHere we present the two cloze testing datasets in code domain with six different programming languages: ClozeTest-maxmin and ClozeTest-all. Each instance in the dataset contains a masked code function, its docstring and the target word.\nThe only difference between ClozeTest-maxmin and ClozeTest-all is their selected words sets, where ClozeTest-maxmin only contains two words while ClozeTest-all contains 930 words.### Supported Tasks and Leaderboards\n\n\n* 'slot-filling': The dataset can be used to train a model for predicting the missing token from a piece of code, similar to the Cloze test.### Languages\n\n\n* Go programming language\n* Java programming language\n* Javascript programming language\n* PHP programming language\n* Python programming language\n* Ruby programming language\n\n\nDataset Structure\n-----------------### Data Instances#### go\n\n\nAn example of 'train' looks as follows.#### java\n\n\nAn example of 'train' looks as follows.#### javascript\n\n\nAn example of 'train' looks as follows.#### php\n\n\nAn example of 'train' looks as follows.#### python\n\n\nAn example of 'train' looks as follows.#### ruby\n\n\nAn example of 'train' looks as follows." ]
d480ae0bde7b9f18677131dce01a03d6d028e964
# Dataset Card for "code_x_glue_cc_code_completion_line" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line ### Dataset Summary CodeXGLUE CodeCompletion-line dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-line Complete the unfinished line given previous context. Models are evaluated by exact match and edit similarity. We propose line completion task to test model's ability to autocomplete a line. Majority code completion systems behave well in token level completion, but fail in completing an unfinished line like a method call with specific parameters, a function signature, a loop condition, a variable definition and so on. When a software develop finish one or more tokens of the current line, the line level completion model is expected to generate the entire line of syntactically correct code. Line level code completion task shares the train/dev dataset with token level completion. After training a model on CodeCompletion-token, you could directly use it to test on line-level completion. ### Supported Tasks and Leaderboards - `slot-filling`: The dataset can be used to train a model for completing entire code lines. ### Languages - Java **programming** language - Python **programming** language ## Dataset Structure ### Data Instances #### java An example of 'train' looks as follows. ``` { "gt": "", "id": 0, "input": "<s> package org . rubypeople . rdt . internal . ui . rubyeditor ; import java . util . Iterator ; import org . eclipse . core . resources . IMarker ; import org . eclipse . ui . texteditor . MarkerAnnotation ; import org . eclipse . ui . texteditor . MarkerUtilities ; import org . rubypeople . rdt . core . IRubyElement ; import org . rubypeople . rdt . core . IRubyModelMarker ; import org . rubypeople . rdt . core . IRubyScript ; import org . rubypeople . rdt . core . 
RubyCore ; public class RubyMarkerAnnotation extends MarkerAnnotation implements IRubyAnnotation { public static final String RUBY_MARKER_TYPE_PREFIX = \"\" ; public static final String ERROR_ANNOTATION_TYPE = \"\" ; public static final String WARNING_ANNOTATION_TYPE = \"\" ; public static final String INFO_ANNOTATION_TYPE = \"\" ; public static final String TASK_ANNOTATION_TYPE = \"\" ; private IRubyAnnotation fOverlay ; public RubyMarkerAnnotation ( IMarker marker ) { super ( marker ) ; } public String [ ] getArguments ( ) { return null ; } public int getId ( ) { IMarker marker = getMarker ( ) ; if ( marker == null || ! marker . exists ( ) ) return - 1 ; if ( isProblem ( ) ) return marker . getAttribute ( IRubyModelMarker . ID , - 1 ) ; return - 1 ; } public boolean isProblem ( ) { String type = getType ( ) ; return WARNING_ANNOTATION_TYPE . equals ( type ) || ERROR_ANNOTATION_TYPE . equals" } ``` #### python An example of 'train' looks as follows. ``` { "gt": "", "id": 0, "input": "<s> from __future__ import absolute_import <EOL> import weakref <EOL> import operator <EOL> from . compat import threading , itertools_filterfalse <EOL> from . import py2k <EOL> import types <EOL> EMPTY_SET = frozenset ( ) <EOL> class KeyedTuple ( tuple ) : <EOL> def __new__ ( cls , vals , labels = None ) : <EOL> t = tuple . __new__ ( cls , vals ) <EOL> t . _labels = [ ] <EOL> if labels : <EOL> t . __dict__ . update ( zip ( labels , vals ) ) <EOL> t . _labels = labels <EOL> return t <EOL> def keys ( self ) : <EOL> return [ l for l in self . _labels if l is not None ] <EOL> @ property <EOL> def _fields ( self ) : <EOL> return tuple ( self . keys ( ) ) <EOL> def _asdict ( self ) : <EOL> return dict ( ( key , self . __dict__ [ key ] ) for key in self . keys ( ) ) <EOL> class ImmutableContainer ( object ) : <EOL> def _immutable ( self , * arg , ** kw ) : <EOL> raise TypeError ( \"\" % self . __class__ . __name__ ) <EOL> __delitem__ = __setitem__ = __setattr__ = _immutable <EOL> class immutabledict ( ImmutableContainer , dict ) : <EOL> clear = pop = popitem = setdefault = update = ImmutableContainer . _immutable <EOL> def __new__ ( cls , * args ) : <EOL> new = dict . __new__ ( cls ) <EOL> dict . __init__ ( new , * args ) <EOL> return new <EOL> def __init__ ( self , * args ) : <EOL> pass <EOL> def __reduce__ ( self ) : <EOL> return immutabledict , ( dict ( self ) , ) <EOL> def union ( self , d ) : <EOL> if not self : <EOL> return immutabledict ( d ) <EOL> else : <EOL> d2 = immutabledict ( self ) <EOL> dict . update ( d2 , d ) <EOL> return d2 <EOL> def __repr__ ( self ) : <EOL> return \"\" % dict . __repr__ ( self ) <EOL> class Properties ( object ) : <EOL> def __init__ ( self , data ) : <EOL> self . __dict__ [ '_data' ] = data <EOL> def __len__ ( self ) : <EOL> return len ( self . _data ) <EOL> def __iter__ ( self ) : <EOL> return iter ( list ( self . _data . values ( ) ) ) <EOL> def __add__ ( self , other ) : <EOL> return list ( self ) + list ( other ) <EOL> def __setitem__ ( self , key , object ) : <EOL> self . _data [ key ] = object <EOL> def __getitem__ ( self , key ) : <EOL> return self . _data [ key ] <EOL> def __delitem__ ( self , key ) : <EOL> del self . _data [ key ] <EOL> def __setattr__ ( self , key , object ) : <EOL> self . _data [ key ] = object <EOL> def __getstate__ ( self ) : <EOL> return { '_data' : self . __dict__ [ '_data' ] } <EOL> def __setstate__ ( self , state ) : <EOL> self . __dict__ [ '_data' ] = state [ '_data' ] <EOL> def __getattr__ ( self , key ) : <EOL> try : <EOL> return self . 
_data [ key ] <EOL> except KeyError : <EOL> raise AttributeError ( key ) <EOL> def __contains__ ( self , key ) : <EOL> return key in self . _data <EOL> def as_immutable ( self ) : <EOL> return ImmutableProperties ( self . _data ) <EOL> def update ( self , value ) : <EOL> self . _data . update ( value ) <EOL> def get ( self , key , default = None ) : <EOL> if key in self : <EOL> return self [ key ] <EOL> else : <EOL> return default <EOL> def keys ( self ) : <EOL> return list ( self . _data ) <EOL> def values ( self ) : <EOL> return list ( self . _data . values ( ) ) <EOL> def items ( self ) : <EOL> return list ( self . _data . items ( ) ) <EOL> def has_key ( self , key ) : <EOL> return key in self . _data <EOL> def clear ( self ) : <EOL> self . _data . clear ( ) <EOL> class OrderedProperties ( Properties ) : <EOL> def __init__ ( self ) : <EOL> Properties . __init__ ( self , OrderedDict ( ) ) <EOL> class ImmutableProperties ( ImmutableContainer , Properties ) : <EOL> class OrderedDict ( dict ) : <EOL> def __init__ ( self , ____sequence = None , ** kwargs ) : <EOL> self . _list = [ ] <EOL> if ____sequence is None : <EOL> if kwargs : <EOL> self . update ( ** kwargs ) <EOL> else : <EOL> self . update ( ____sequence , ** kwargs ) <EOL> def clear ( self ) : <EOL> self . _list = [ ] <EOL> dict . clear ( self ) <EOL> def copy ( self ) : <EOL> return self . __copy__ ( ) <EOL> def __copy__ ( self ) : <EOL> return OrderedDict ( self ) <EOL> def sort ( self , * arg , ** kw ) : <EOL> self . _list . sort ( * arg , ** kw ) <EOL> def update ( self , ____sequence = None , ** kwargs ) : <EOL> if ____sequence is not None : <EOL> if hasattr ( ____sequence , 'keys' ) : <EOL> for key in ____sequence . keys ( ) : <EOL> self . __setitem__ ( key , ____sequence [ key ] ) <EOL> else : <EOL> for key , value in ____sequence : <EOL> self [ key ] = value <EOL> if kwargs : <EOL> self . update ( kwargs ) <EOL> def setdefault ( self , key , value ) : <EOL> if key not in self : <EOL> self . __setitem__ ( key , value ) <EOL> return value <EOL> else : <EOL> return self . __getitem__ ( key ) <EOL> def __iter__ ( self ) : <EOL> return iter ( self . _list ) <EOL> def keys ( self ) : <EOL> return list ( self ) <EOL> def values ( self ) : <EOL> return [ self [ key ] for key in self . _list ] <EOL> def items ( self ) : <EOL> return [ ( key , self [ key ] ) for key in self . _list ] <EOL> if py2k : <EOL> def itervalues ( self ) : <EOL> return iter ( self . values ( ) ) <EOL> def iterkeys ( self ) : <EOL> return iter ( self ) <EOL> def iteritems ( self ) : <EOL> return iter ( self . items ( ) ) <EOL> def __setitem__ ( self , key , object ) : <EOL> if key not in self : <EOL> try : <EOL> self . _list . append ( key ) <EOL> except AttributeError : <EOL> self . _list = [ key ] <EOL> dict . __setitem__ ( self , key , object ) <EOL> def __delitem__ ( self , key ) : <EOL> dict . __delitem__ ( self , key ) <EOL> self . _list . remove ( key ) <EOL> def pop ( self , key , * default ) : <EOL> present = key in self <EOL> value = dict . pop ( self , key , * default ) <EOL> if present : <EOL> self . _list . remove ( key ) <EOL> return value <EOL> def popitem ( self ) : <EOL> item = dict . popitem ( self ) <EOL> self . _list . remove ( item [ 0 ] ) <EOL> return item <EOL> class OrderedSet ( set ) : <EOL> def __init__ ( self , d = None ) : <EOL> set . __init__ ( self ) <EOL> self . _list = [ ] <EOL> if d is not None : <EOL>" } ``` ### Data Fields In the following each data field in go is explained for each config. 
The data fields are the same among all splits. #### java, python |field name| type | description | |----------|------|----------------------------| |id |int32 | Index of the sample | |input |string| Input code string | |gt |string| Code string to be predicted| ### Data Splits | name |train| |------|----:| |java | 3000| |python|10000| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators https://github.com/microsoft, https://github.com/madlag ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Citation Information ``` @article{raychev2016probabilistic, title={Probabilistic Model for Code with Decision Trees}, author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin}, journal={ACM SIGPLAN Notices}, pages={731--747}, year={2016}, publisher={ACM New York, NY, USA} } @inproceedings{allamanis2013mining, title={Mining Source Code Repositories at Massive Scale using Language Modeling}, author={Allamanis, Miltiadis and Sutton, Charles}, booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)}, pages={207--216}, year={2013}, organization={IEEE} } ``` ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
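A minimal scoring sketch for this line completion task is shown below. It assumes the Hugging Face `datasets` library and the `input`/`gt` fields described above; the `difflib` ratio is used as a simple stand-in for the edit-similarity metric mentioned in the summary, not the official CodeXGLUE implementation.

```python
from difflib import SequenceMatcher

from datasets import load_dataset

# Load the Java config of the line-level completion dataset (only a "train" split is listed).
ds = load_dataset("code_x_glue_cc_code_completion_line", "java", split="train")

example = ds[0]
context = example["input"]  # tokenized code context that ends mid-line
target = example["gt"]      # line continuation to be predicted (shown as empty in the samples above)

def score(prediction: str, reference: str) -> dict:
    """Exact match plus a difflib ratio as an assumed proxy for edit similarity."""
    return {
        "exact_match": prediction.strip() == reference.strip(),
        "edit_similarity": SequenceMatcher(None, prediction, reference).ratio(),
    }

# A model would generate the prediction from `context`; here a dummy string is scored.
print(score("return result ;", target))
```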
code_x_glue_cc_code_completion_line
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:slot-filling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:code", "license:c-uda", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["code"], "license": ["c-uda"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K", "n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["slot-filling"], "pretty_name": "CodeXGlueCcCodeCompletionLine", "config_names": ["go", "java", "javascript", "php", "python", "ruby"], "dataset_info": [{"config_name": "java", "features": [{"name": "id", "dtype": "int32"}, {"name": "input", "dtype": "string"}, {"name": "gt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5454775, "num_examples": 3000}], "download_size": 1696679, "dataset_size": 5454775}, {"config_name": "python", "features": [{"name": "id", "dtype": "int32"}, {"name": "input", "dtype": "string"}, {"name": "gt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24021554, "num_examples": 10000}], "download_size": 8140670, "dataset_size": 24021554}], "configs": [{"config_name": "java", "data_files": [{"split": "train", "path": "java/train-*"}]}, {"config_name": "python", "data_files": [{"split": "train", "path": "python/train-*"}]}]}
2024-01-24T14:22:56+00:00
[]
[ "code" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-slot-filling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-code #license-c-uda #region-us
Dataset Card for "code\_x\_glue\_cc\_code\_completion\_line" ============================================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL ### Dataset Summary CodeXGLUE CodeCompletion-line dataset, available at URL Complete the unfinished line given previous context. Models are evaluated by exact match and edit similarity. We propose line completion task to test model's ability to autocomplete a line. Majority code completion systems behave well in token level completion, but fail in completing an unfinished line like a method call with specific parameters, a function signature, a loop condition, a variable definition and so on. When a software develop finish one or more tokens of the current line, the line level completion model is expected to generate the entire line of syntactically correct code. Line level code completion task shares the train/dev dataset with token level completion. After training a model on CodeCompletion-token, you could directly use it to test on line-level completion. ### Supported Tasks and Leaderboards * 'slot-filling': The dataset can be used to train a model for completing entire code lines. ### Languages * Java programming language * Python programming language Dataset Structure ----------------- ### Data Instances #### java An example of 'train' looks as follows. #### python An example of 'train' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### java, python field name: id, type: int32, description: Index of the sample field name: input, type: string, description: Input code string field name: gt, type: string, description: Code string to be predicted ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE CodeCompletion-line dataset, available at URL\n\n\nComplete the unfinished line given previous context. Models are evaluated by exact match and edit similarity.\nWe propose line completion task to test model's ability to autocomplete a line. Majority code completion systems behave well in token level completion, but fail in completing an unfinished line like a method call with specific parameters, a function signature, a loop condition, a variable definition and so on. When a software develop finish one or more tokens of the current line, the line level completion model is expected to generate the entire line of syntactically correct code.\nLine level code completion task shares the train/dev dataset with token level completion. After training a model on CodeCompletion-token, you could directly use it to test on line-level completion.", "### Supported Tasks and Leaderboards\n\n\n* 'slot-filling': The dataset can be used to train a model for completing entire code lines.", "### Languages\n\n\n* Java programming language\n* Python programming language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### java\n\n\nAn example of 'train' looks as follows.", "#### python\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### java, python\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: input, type: string, description: Input code string\nfield name: gt, type: string, description: Code string to be predicted", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-slot-filling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-code #license-c-uda #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE CodeCompletion-line dataset, available at URL\n\n\nComplete the unfinished line given previous context. Models are evaluated by exact match and edit similarity.\nWe propose line completion task to test model's ability to autocomplete a line. Majority code completion systems behave well in token level completion, but fail in completing an unfinished line like a method call with specific parameters, a function signature, a loop condition, a variable definition and so on. When a software develop finish one or more tokens of the current line, the line level completion model is expected to generate the entire line of syntactically correct code.\nLine level code completion task shares the train/dev dataset with token level completion. After training a model on CodeCompletion-token, you could directly use it to test on line-level completion.", "### Supported Tasks and Leaderboards\n\n\n* 'slot-filling': The dataset can be used to train a model for completing entire code lines.", "### Languages\n\n\n* Java programming language\n* Python programming language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### java\n\n\nAn example of 'train' looks as follows.", "#### python\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### java, python\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: input, type: string, description: Input code string\nfield name: gt, type: string, description: Code string to be predicted", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 106, 198, 36, 21, 6, 16, 16, 32, 57, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-slot-filling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-code #license-c-uda #region-us \n### Dataset Summary\n\n\nCodeXGLUE CodeCompletion-line dataset, available at URL\n\n\nComplete the unfinished line given previous context. Models are evaluated by exact match and edit similarity.\nWe propose line completion task to test model's ability to autocomplete a line. Majority code completion systems behave well in token level completion, but fail in completing an unfinished line like a method call with specific parameters, a function signature, a loop condition, a variable definition and so on. When a software develop finish one or more tokens of the current line, the line level completion model is expected to generate the entire line of syntactically correct code.\nLine level code completion task shares the train/dev dataset with token level completion. After training a model on CodeCompletion-token, you could directly use it to test on line-level completion.### Supported Tasks and Leaderboards\n\n\n* 'slot-filling': The dataset can be used to train a model for completing entire code lines.### Languages\n\n\n* Java programming language\n* Python programming language\n\n\nDataset Structure\n-----------------### Data Instances#### java\n\n\nAn example of 'train' looks as follows.#### python\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.#### java, python\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: input, type: string, description: Input code string\nfield name: gt, type: string, description: Code string to be predicted### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale" ]
54bdbd7ee7e40c7f3c68ba5a5a20ab0b2d38d5b2
# Dataset Card for "code_x_glue_cc_code_completion_token" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token ### Dataset Summary CodeXGLUE CodeCompletion-token dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/CodeCompletion-token Predict next code token given context of previous tokens. Models are evaluated by token level accuracy. Code completion is a one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. Token level task is analogous to language modeling. Models should have be able to predict the next token in arbitary types. ### Supported Tasks and Leaderboards - `language-modeling`: The dataset can be used to train a model for completing single code tokens. ### Languages - Java **programming** language - Python **programming** language ## Dataset Structure ### Data Instances #### java An example of 'test' looks as follows. 
``` { "code": ["<s>", "package", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "demo", ";", "import", "java", ".", "io", ".", "BufferedReader", ";", "import", "java", ".", "io", ".", "ByteArrayInputStream", ";", "import", "java", ".", "io", ".", "IOException", ";", "import", "java", ".", "io", ".", "InputStreamReader", ";", "import", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "Clara", ";", "import", "org", ".", "vaadin", ".", "teemu", ".", "clara", ".", "inflater", ".", "LayoutInflaterException", ";", "import", "com", ".", "vaadin", ".", "Application", ";", "import", "com", ".", "vaadin", ".", "terminal", ".", "ThemeResource", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Button", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Button", ".", "ClickEvent", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Component", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Embedded", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "HorizontalLayout", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "HorizontalSplitPanel", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "TextArea", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "VerticalLayout", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Window", ";", "import", "com", ".", "vaadin", ".", "ui", ".", "Window", ".", "Notification", ";", "@", "SuppressWarnings", "(", "\"serial\"", ")", "public", "class", "DemoApplication", "extends", "Application", "{", "private", "DemoController", "controller", ";", "private", "TextArea", "xmlArea", ";", "private", "HorizontalSplitPanel", "split", "=", "new", "HorizontalSplitPanel", "(", ")", ";", "private", "Window", "mainWindow", ";", "@", "Override", "public", "void", "init", "(", ")", "{", "setTheme", "(", "\"clara\"", ")", ";", "setMainWindow", "(", "mainWindow", "=", "new", "Window", "(", ")", ")", ";", "controller", "=", "new", "DemoController", "(", "mainWindow", ")", ";", "mainWindow", ".", "setContent", "(", "split", ")", ";", "VerticalLayout", "editor", "=", "new", "VerticalLayout", "(", ")", ";", "editor", ".", "setSpacing", "(", "true", ")", ";", "editor", ".", "setMargin", "(", "false", ",", "false", ",", "false", ",", "true", ")", ";", "editor", ".", "setHeight", "(", "\"100%\"", ")", ";", "editor", ".", "addComponent", "(", "xmlArea", "=", "createXmlArea", "(", ")", ")", ";", "editor", ".", "setExpandRatio", "(", "xmlArea", ",", "1.0f", ")", ";", "editor", ".", "addComponent", "(", "createUpdateButton", "(", ")", ")", ";", "HorizontalLayout", "wrapper", "=", "new", "HorizontalLayout", "(", ")", ";", "wrapper", ".", "setMargin", "(", "true", ")", ";", "wrapper", ".", "setSizeFull", "(", ")", ";", "wrapper", ".", "addComponent", "(", "createLogo", "(", ")", ")", ";", "wrapper", ".", "addComponent", "(", "editor", ")", ";", "wrapper", ".", "setExpandRatio", "(", "editor", ",", "1.0f", ")", ";", "split", ".", "setFirstComponent", "(", "wrapper", ")", ";", "updateLayout", "(", ")", ";", "}", "private", "Component", "createLogo", "(", ")", "{", "Embedded", "logo", "=", "new", "Embedded", "(", "null", ",", "new", "ThemeResource", "(", "\"\"", ")", ")", ";", "logo", ".", "setHeight", "(", "\"90px\"", ")", ";", "logo", ".", "setWidth", "(", "\"90px\"", ")", ";", "return", "logo", ";", "}", "private", "TextArea", "createXmlArea", "(", ")", "{", "TextArea", "area", "=", "new", "TextArea", "(", ")", ";", "area", ".", "setStyleName", "(", "\"xml-area\"", ")", ";", "area", ".", "setSizeFull", "(", ")", ";", "area", ".", "setValue", "(", 
"readStartingPoint", "(", ")", ")", ";", "return", "area", ";", "}", "private", "Button", "createUpdateButton", "(", ")", "{", "return", "new", "Button", "(", "\"Update\"", ",", "new", "Button", ".", "ClickListener", "(", ")", "{", "public", "void", "buttonClick", "(", "ClickEvent", "event", ")", "{", "updateLayout", "(", ")", ";", "}", "}", ")", ";", "}", "private", "String", "readStartingPoint", "(", ")", "{", "BufferedReader", "reader", "=", "null", ";", "try", "{", "reader", "=", "new", "BufferedReader", "(", "new", "InputStreamReader", "(", "getClass", "(", ")", ".", "getClassLoader", "(", ")", ".", "getResourceAsStream", "(", "\"\"", ")", ")", ")", ";", "StringBuilder", "xml", "=", "new", "StringBuilder", "(", ")", ";", "String", "line", ";", "while", "(", "(", "line", "=", "reader", ".", "readLine", "(", ")", ")", "!=", "null", ")", "{", "xml", ".", "append", "(", "line", ")", ";", "xml", ".", "append", "(", "\"n\"", ")", ";", "}", "return", "xml", ".", "toString", "(", ")", ";", "}", "catch", "(", "IOException", "e", ")", "{", "e", ".", "printStackTrace", "(", ")", ";", "}", "finally", "{", "if", "(", "reader", "!=", "null", ")", "{", "try", "{", "reader", ".", "close", "(", ")", ";", "}", "catch", "(", "IOException", "e", ")", "{", "e", ".", "printStackTrace", "(", ")", ";", "}", "}", "}", "return", "null", ";", "}", "private", "void", "updateLayout", "(", ")", "{", "try", "{", "Component", "c", "=", "Clara", ".", "create", "(", "new", "ByteArrayInputStream", "(", "xmlArea", ".", "getValue", "(", ")", ".", "toString", "(", ")", ".", "getBytes", "(", ")", ")", ",", "controller", ")", ";", "split", ".", "replaceComponent", "(", "split", ".", "getSecondComponent", "(", ")", ",", "c", ")", ";", "}", "catch", "(", "LayoutInflaterException", "e", ")", "{", "mainWindow", ".", "showNotification", "(", "e", ".", "getMessage", "(", ")", ",", "Notification", ".", "TYPE_ERROR_MESSAGE", ")", ";", "}", "}", "}", "</s>"], "id": 0 } ``` #### python An example of 'train' looks as follows. 
``` { "code": ["<s>", "from", "bootstrap", "import", "Bootstrap", "<EOL>", "from", "fund", "import", "InstantPaymentNotificationHandler", "<EOL>", "from", "fund", "import", "ThankYouHandler", "<EOL>", "from", "view", "import", "*", "<EOL>", "mapping", "=", "[", "(", "<EOL>", "r'/'", ",", "<EOL>", "Index", "<EOL>", ")", ",", "(", "<EOL>", "r'/ipn'", ",", "<EOL>", "InstantPaymentNotificationHandler", "<EOL>", ")", ",", "(", "<EOL>", "r'/thank-you'", ",", "<EOL>", "ThankYouHandler", "<EOL>", ")", ",", "(", "<EOL>", "r'/about\\/?'", ",", "<EOL>", "About", "<EOL>", ")", ",", "(", "<EOL>", "r'/guide\\/?'", ",", "<EOL>", "Guide", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Download", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Standards", "<EOL>", ")", ",", "(", "<EOL>", "r'/community\\/?'", ",", "<EOL>", "Community", "<EOL>", ")", ",", "(", "<EOL>", "r'/news\\/?'", ",", "<EOL>", "News", "<EOL>", ")", ",", "(", "<EOL>", "r'/support\\/?'", ",", "<EOL>", "Support", "<EOL>", ")", ",", "(", "<EOL>", "r'/contact\\/?'", ",", "<EOL>", "Contact", "<EOL>", ")", ",", "(", "<EOL>", "r'/press\\/?'", ",", "<EOL>", "Press", "<EOL>", ")", ",", "(", "<EOL>", "r'/legal/terms'", ",", "<EOL>", "Terms", "<EOL>", ")", ",", "(", "<EOL>", "r'/library\\/?'", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Library", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Users", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "User", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Design", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectSuccess", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectError", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "RedirectAfterDelete", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Moderate", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Bootstrap", "<EOL>", ")", ",", "(", "<EOL>", "r'/activity'", ",", "<EOL>", "ActivityScreen", "<EOL>", ")", ",", "(", "<EOL>", "r'/txns'", ",", "<EOL>", "TxnList", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Base64Blob", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "Base64Blob", "<EOL>", ")", ",", "(", "<EOL>", "r''", ",", "<EOL>", "MessageStrings", "<EOL>", ")", ",", "(", "<EOL>", "r'/.*'", ",", "<EOL>", "NotFound", "<EOL>", ")", "<EOL>", "]", "</s>"], "id": 0, "path": "00/wikihouse/urls.py\n" } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. 
#### java |field name| type | description | |----------|----------------|--------------------| |id |int32 | Index of the sample| |code |Sequence[string]| Code Tokens | #### python |field name| type | description | |----------|----------------|-----------------------------| |id |int32 | Index of the sample | |path |string | Original path in the dataset| |code |Sequence[string]| Code Tokens | ### Data Splits #### java | |train|validation|test| |----|----:|---------:|---:| |java|12934| 7189|8268| #### python | |train |test | |------|-----:|----:| |python|100000|50000| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators https://github.com/microsoft, https://github.com/madlag ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Citation Information ``` @article{raychev2016probabilistic, title={Probabilistic Model for Code with Decision Trees}, author={Raychev, Veselin and Bielik, Pavol and Vechev, Martin}, journal={ACM SIGPLAN Notices}, pages={731--747}, year={2016}, publisher={ACM New York, NY, USA} } @inproceedings{allamanis2013mining, title={Mining Source Code Repositories at Massive Scale using Language Modeling}, author={Allamanis, Miltiadis and Sutton, Charles}, booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)}, pages={207--216}, year={2013}, organization={IEEE} } ``` The data for "java" configuration comes from: ``` @dataset{rafael_michael_karampatsis_2020_3628665, author = {Rafael - Michael Karampatsis and Hlib Babii and Romain Robbes and Charles Sutton and Andrea Janes}, title = {Preprocessed Java Code Corpus}, month = jan, year = 2020, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.3628665}, url = {https://doi.org/10.5281/zenodo.3628665} } ``` ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
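As a quick orientation beyond the card itself (and assuming the card corresponds to the `code_x_glue_cc_code_completion_token` dataset on the Hugging Face Hub with the `datasets` library available), a minimal sketch of loading the Java config and turning it into next-token prediction pairs could look as follows; the variable names are illustrative only.

```
from datasets import load_dataset

# Load the tokenized Java training split described in the card above.
ds = load_dataset("code_x_glue_cc_code_completion_token", "java", split="train")

example = ds[0]
tokens = example["code"]  # list of string tokens, wrapped in <s> ... </s>

# Token-level completion is next-token prediction: each prefix of the token
# sequence is paired with the token that follows it.
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
context, target = pairs[10]
print(context[-5:], "->", target)
```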
code_x_glue_cc_code_completion_token
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "license:c-uda", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["code"], "license": ["c-uda"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "CodeXGlueCcCodeCompletionToken", "dataset_info": [{"config_name": "java", "features": [{"name": "id", "dtype": "int32"}, {"name": "code", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 128312045, "num_examples": 12934}, {"name": "validation", "num_bytes": 30259166, "num_examples": 7189}, {"name": "test", "num_bytes": 43027948, "num_examples": 8268}], "download_size": 31320339, "dataset_size": 201599159}, {"config_name": "python", "features": [{"name": "id", "dtype": "int32"}, {"name": "path", "dtype": "string"}, {"name": "code", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 684319455, "num_examples": 100000}, {"name": "test", "num_bytes": 333978028, "num_examples": 50000}], "download_size": 210143525, "dataset_size": 1018297483}], "configs": [{"config_name": "java", "data_files": [{"split": "train", "path": "java/train-*"}, {"split": "validation", "path": "java/validation-*"}, {"split": "test", "path": "java/test-*"}]}, {"config_name": "python", "data_files": [{"split": "train", "path": "python/train-*"}, {"split": "test", "path": "python/test-*"}]}]}
2024-01-24T14:47:39+00:00
[]
[ "code" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #region-us
Dataset Card for "code\_x\_glue\_cc\_code\_completion\_token" ============================================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL ### Dataset Summary CodeXGLUE CodeCompletion-token dataset, available at URL Predict next code token given context of previous tokens. Models are evaluated by token level accuracy. Code completion is a one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. Token level task is analogous to language modeling. Models should have be able to predict the next token in arbitary types. ### Supported Tasks and Leaderboards * 'language-modeling': The dataset can be used to train a model for completing single code tokens. ### Languages * Java programming language * Python programming language Dataset Structure ----------------- ### Data Instances #### java An example of 'test' looks as follows. #### python An example of 'train' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### java field name: id, type: int32, description: Index of the sample field name: code, type: Sequence[string], description: Code Tokens #### python field name: id, type: int32, description: Index of the sample field name: path, type: string, description: Original path in the dataset field name: code, type: Sequence[string], description: Code Tokens ### Data Splits #### java #### python Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. The data for "java" configuration comes from: ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE CodeCompletion-token dataset, available at URL\n\n\nPredict next code token given context of previous tokens. Models are evaluated by token level accuracy.\nCode completion is a one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. Token level task is analogous to language modeling. Models should have be able to predict the next token in arbitary types.", "### Supported Tasks and Leaderboards\n\n\n* 'language-modeling': The dataset can be used to train a model for completing single code tokens.", "### Languages\n\n\n* Java programming language\n* Python programming language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### java\n\n\nAn example of 'test' looks as follows.", "#### python\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### java\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: code, type: Sequence[string], description: Code Tokens", "#### python\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: path, type: string, description: Original path in the dataset\nfield name: code, type: Sequence[string], description: Code Tokens", "### Data Splits", "#### java", "#### python\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.\n\n\nThe data for \"java\" configuration comes from:", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE CodeCompletion-token dataset, available at URL\n\n\nPredict next code token given context of previous tokens. Models are evaluated by token level accuracy.\nCode completion is a one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. Token level task is analogous to language modeling. Models should have be able to predict the next token in arbitary types.", "### Supported Tasks and Leaderboards\n\n\n* 'language-modeling': The dataset can be used to train a model for completing single code tokens.", "### Languages\n\n\n* Java programming language\n* Python programming language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### java\n\n\nAn example of 'test' looks as follows.", "#### python\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### java\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: code, type: Sequence[string], description: Code Tokens", "#### python\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: path, type: string, description: Original path in the dataset\nfield name: code, type: Sequence[string], description: Code Tokens", "### Data Splits", "#### java", "#### python\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.\n\n\nThe data for \"java\" configuration comes from:", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 108, 147, 36, 21, 6, 15, 16, 32, 40, 57, 5, 4, 10, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 8, 30, 25 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #region-us \n### Dataset Summary\n\n\nCodeXGLUE CodeCompletion-token dataset, available at URL\n\n\nPredict next code token given context of previous tokens. Models are evaluated by token level accuracy.\nCode completion is a one of the most widely used features in software development through IDEs. An effective code completion tool could improve software developers' productivity. We provide code completion evaluation tasks in two granularities -- token level and line level. Here we introduce token level code completion. Token level task is analogous to language modeling. Models should have be able to predict the next token in arbitary types.### Supported Tasks and Leaderboards\n\n\n* 'language-modeling': The dataset can be used to train a model for completing single code tokens.### Languages\n\n\n* Java programming language\n* Python programming language\n\n\nDataset Structure\n-----------------### Data Instances#### java\n\n\nAn example of 'test' looks as follows.#### python\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.#### java\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: code, type: Sequence[string], description: Code Tokens#### python\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: path, type: string, description: Original path in the dataset\nfield name: code, type: Sequence[string], description: Code Tokens### Data Splits#### java#### python\n\n\n\nDataset Creation\n----------------### Curation Rationale" ]
07ab797a018d0d5c448b56eb26b5e11aa5ad7659
# Dataset Card for "code_x_glue_cc_code_refinement" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-refinement - **Paper:** https://arxiv.org/abs/2102.04664 ### Dataset Summary CodeXGLUE code-refinement dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-refinement We use the dataset released by this paper(https://arxiv.org/pdf/1812.08693.pdf). The source side is a Java function with bugs and the target side is the refined one. All the function and variable names are normalized. Their dataset contains two subsets ( i.e.small and medium) based on the function length. ### Supported Tasks and Leaderboards - `text2text-generation-other-debugging`: The dataset can be used to train a model for automatically fixing buggy code. ### Languages - Java **programming** language ## Dataset Structure ### Data Instances #### medium An example of 'train' looks as follows. ``` { "buggy": "public static TYPE_1 init ( java.lang.String name , java.util.Date date ) { TYPE_1 VAR_1 = new TYPE_1 ( ) ; VAR_1 . METHOD_1 ( name ) ; java.util.Calendar VAR_2 = java.util.Calendar.getInstance ( ) ; VAR_2 . METHOD_2 ( date ) ; VAR_1 . METHOD_3 ( VAR_2 ) ; return VAR_1 ; }\n", "fixed": "public static TYPE_1 init ( java.lang.String name , java.util.Date date ) { TYPE_1 VAR_1 = new TYPE_1 ( ) ; VAR_1 . METHOD_1 ( name ) ; java.util.Calendar VAR_2 = null ; if ( date != null ) { VAR_2 = java.util.Calendar.getInstance ( ) ; VAR_2 . METHOD_2 ( date ) ; } VAR_1 . METHOD_3 ( VAR_2 ) ; return VAR_1 ; }\n", "id": 0 } ``` #### small An example of 'validation' looks as follows. ``` { "buggy": "public java.util.List < TYPE_1 > METHOD_1 ( ) { java.util.ArrayList < TYPE_1 > VAR_1 = new java.util.ArrayList < TYPE_1 > ( ) ; for ( TYPE_2 VAR_2 : VAR_3 ) { VAR_1 . METHOD_2 ( VAR_2 . METHOD_1 ( ) ) ; } return VAR_1 ; } \n", "fixed": "public java.util.List < TYPE_1 > METHOD_1 ( ) { return VAR_1 ; } \n", "id": 0 } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. 
#### medium, small

|field name| type | description |
|----------|------|--------------------------------|
|id |int32 | Index of the sample |
|buggy |string| The buggy version of the code |
|fixed |string| The correct version of the code|

### Data Splits

| name |train|validation|test|
|------|----:|---------:|---:|
|medium|52364| 6546|6545|
|small |46680| 5835|5835|

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

Every public GitHub event between March 2011 and October 2017 was downloaded from GitHub Archive, using the Google BigQuery APIs.

#### Who are the source language producers?

Software Engineering developers.

### Annotations

#### Annotation process

Automatically annotated by filtering commit messages containing the pattern: ("fix" or "solve") and ("bug" or "issue" or "problem" or "error"). A statistically significant number of samples (95% confidence level with a 5% confidence interval) was manually evaluated by two authors to check whether the filtered bug/fix pairs were correct. After all disagreements were settled, the authors concluded that 97.6% were true positives.

#### Who are the annotators?

Heuristics and the authors of the paper.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

https://github.com/microsoft, https://github.com/madlag

### Licensing Information

Computational Use of Data Agreement (C-UDA) License.

### Citation Information

```
@article{DBLP:journals/corr/abs-2102-04664,
  author    = {Shuai Lu and Daya Guo and Shuo Ren and Junjie Huang and Alexey Svyatkovskiy and Ambrosio Blanco and Colin B. Clement and Dawn Drain and Daxin Jiang and Duyu Tang and Ge Li and Lidong Zhou and Linjun Shou and Long Zhou and Michele Tufano and Ming Gong and Ming Zhou and Nan Duan and Neel Sundaresan and Shao Kun Deng and Shengyu Fu and Shujie Liu},
  title     = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding and Generation},
  journal   = {CoRR},
  volume    = {abs/2102.04664},
  year      = {2021}
}

@article{tufano2019empirical,
  title={An empirical study on learning bug-fixing patches in the wild via neural machine translation},
  author={Tufano, Michele and Watson, Cody and Bavota, Gabriele and Penta, Massimiliano Di and White, Martin and Poshyvanyk, Denys},
  journal={ACM Transactions on Software Engineering and Methodology (TOSEM)},
  volume={28},
  number={4},
  pages={1--29},
  year={2019},
  publisher={ACM New York, NY, USA}
}
```

### Contributions

Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
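As a usage illustration that is not part of the original card (and assuming the card corresponds to the `code_x_glue_cc_code_refinement` dataset on the Hugging Face Hub), a minimal sketch of loading the "small" subset and forming buggy-to-fixed pairs for a sequence-to-sequence model might look like this:

```
from datasets import load_dataset

# Load the "small" subset of abstracted Java functions described above.
ds = load_dataset("code_x_glue_cc_code_refinement", "small", split="train")

# Each record pairs a buggy function with its fixed version; a common setup
# treats "buggy" as the source sequence and "fixed" as the target sequence.
for example in ds.select(range(3)):
    print("BUGGY:", example["buggy"].strip()[:80])
    print("FIXED:", example["fixed"].strip()[:80])
    print("-" * 40)
```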
code_x_glue_cc_code_refinement
[ "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "license:c-uda", "debugging", "arxiv:2102.04664", "arxiv:1812.08693", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["code"], "license": ["c-uda"], "multilinguality": ["other-programming-languages"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "CodeXGlueCcCodeRefinement", "tags": ["debugging"], "dataset_info": [{"config_name": "medium", "features": [{"name": "id", "dtype": "int32"}, {"name": "buggy", "dtype": "string"}, {"name": "fixed", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32614786, "num_examples": 52364}, {"name": "validation", "num_bytes": 4086733, "num_examples": 6546}, {"name": "test", "num_bytes": 4063665, "num_examples": 6545}], "download_size": 14929559, "dataset_size": 40765184}, {"config_name": "small", "features": [{"name": "id", "dtype": "int32"}, {"name": "buggy", "dtype": "string"}, {"name": "fixed", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13006679, "num_examples": 46680}, {"name": "validation", "num_bytes": 1629242, "num_examples": 5835}, {"name": "test", "num_bytes": 1619700, "num_examples": 5835}], "download_size": 5894462, "dataset_size": 16255621}], "configs": [{"config_name": "medium", "data_files": [{"split": "train", "path": "medium/train-*"}, {"split": "validation", "path": "medium/validation-*"}, {"split": "test", "path": "medium/test-*"}]}, {"config_name": "small", "data_files": [{"split": "train", "path": "small/train-*"}, {"split": "validation", "path": "small/validation-*"}, {"split": "test", "path": "small/test-*"}]}]}
2024-01-24T14:53:13+00:00
[ "2102.04664", "1812.08693" ]
[ "code" ]
TAGS #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-found #multilinguality-other-programming-languages #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #debugging #arxiv-2102.04664 #arxiv-1812.08693 #region-us
Dataset Card for "code\_x\_glue\_cc\_code\_refinement" ====================================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Paper: URL ### Dataset Summary CodeXGLUE code-refinement dataset, available at URL We use the dataset released by this paper(URL The source side is a Java function with bugs and the target side is the refined one. All the function and variable names are normalized. Their dataset contains two subsets ( i.e.small and medium) based on the function length. ### Supported Tasks and Leaderboards * 'text2text-generation-other-debugging': The dataset can be used to train a model for automatically fixing buggy code. ### Languages * Java programming language Dataset Structure ----------------- ### Data Instances #### medium An example of 'train' looks as follows. #### small An example of 'validation' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### medium, small field name: id, type: int32, description: Index of the sample field name: buggy, type: string, description: The buggy version of the code field name: fixed, type: string, description: The correct version of the code ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Downloaded from GitHub Archive every public GitHub event between March 2011 and October 2017 and used the Google BigQuery APIs. #### Who are the source language producers? Software Engineering developers. ### Annotations #### Annotation process Automatically annotated by filtering commit messages containing the pattern: ("fix" or "solve") and ("bug" or "issue" or "problem" or "error"). A statistically significant amount of samples (95% confidence level with 5% confidence interval) were manually evaluated by two authors to check if the filtered bug/fix pairs were correct. After all disagreements were settled, authors conclude that 97.6% were true positives. #### Who are the annotators? Heuristics and the authors of the paper. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE code-refinement dataset, available at URL\n\n\nWe use the dataset released by this paper(URL The source side is a Java function with bugs and the target side is the refined one. All the function and variable names are normalized. Their dataset contains two subsets ( i.e.small and medium) based on the function length.", "### Supported Tasks and Leaderboards\n\n\n* 'text2text-generation-other-debugging': The dataset can be used to train a model for automatically fixing buggy code.", "### Languages\n\n\n* Java programming language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### medium\n\n\nAn example of 'train' looks as follows.", "#### small\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### medium, small\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: buggy, type: string, description: The buggy version of the code\nfield name: fixed, type: string, description: The correct version of the code", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nDownloaded from GitHub Archive every public GitHub event between March 2011 and October 2017 and used the Google BigQuery APIs.", "#### Who are the source language producers?\n\n\nSoftware Engineering developers.", "### Annotations", "#### Annotation process\n\n\nAutomatically annotated by filtering commit messages containing the pattern: (\"fix\" or \"solve\") and (\"bug\" or \"issue\" or \"problem\" or \"error\"). A statistically significant amount of samples (95% confidence level with 5% confidence interval) were manually evaluated by two authors to check if the filtered bug/fix pairs were correct. After all disagreements were settled, authors conclude that 97.6% were true positives.", "#### Who are the annotators?\n\n\nHeuristics and the authors of the paper.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-found #multilinguality-other-programming-languages #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #debugging #arxiv-2102.04664 #arxiv-1812.08693 #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE code-refinement dataset, available at URL\n\n\nWe use the dataset released by this paper(URL The source side is a Java function with bugs and the target side is the refined one. All the function and variable names are normalized. Their dataset contains two subsets ( i.e.small and medium) based on the function length.", "### Supported Tasks and Leaderboards\n\n\n* 'text2text-generation-other-debugging': The dataset can be used to train a model for automatically fixing buggy code.", "### Languages\n\n\n* Java programming language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### medium\n\n\nAn example of 'train' looks as follows.", "#### small\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### medium, small\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: buggy, type: string, description: The buggy version of the code\nfield name: fixed, type: string, description: The correct version of the code", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nDownloaded from GitHub Archive every public GitHub event between March 2011 and October 2017 and used the Google BigQuery APIs.", "#### Who are the source language producers?\n\n\nSoftware Engineering developers.", "### Annotations", "#### Annotation process\n\n\nAutomatically annotated by filtering commit messages containing the pattern: (\"fix\" or \"solve\") and (\"bug\" or \"issue\" or \"problem\" or \"error\"). A statistically significant amount of samples (95% confidence level with 5% confidence interval) were manually evaluated by two authors to check if the filtered bug/fix pairs were correct. After all disagreements were settled, authors conclude that 97.6% were true positives.", "#### Who are the annotators?\n\n\nHeuristics and the authors of the paper.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 104, 89, 44, 16, 6, 15, 16, 32, 58, 11, 7, 4, 39, 15, 5, 104, 20, 18, 7, 8, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-found #multilinguality-other-programming-languages #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #debugging #arxiv-2102.04664 #arxiv-1812.08693 #region-us \n### Dataset Summary\n\n\nCodeXGLUE code-refinement dataset, available at URL\n\n\nWe use the dataset released by this paper(URL The source side is a Java function with bugs and the target side is the refined one. All the function and variable names are normalized. Their dataset contains two subsets ( i.e.small and medium) based on the function length.### Supported Tasks and Leaderboards\n\n\n* 'text2text-generation-other-debugging': The dataset can be used to train a model for automatically fixing buggy code.### Languages\n\n\n* Java programming language\n\n\nDataset Structure\n-----------------### Data Instances#### medium\n\n\nAn example of 'train' looks as follows.#### small\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.#### medium, small\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: buggy, type: string, description: The buggy version of the code\nfield name: fixed, type: string, description: The correct version of the code### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization\n\n\nDownloaded from GitHub Archive every public GitHub event between March 2011 and October 2017 and used the Google BigQuery APIs.#### Who are the source language producers?\n\n\nSoftware Engineering developers.### Annotations" ]
d5478a4e472b5aae1c160f1b540ae2eebb79b640
# Dataset Card for "code_x_glue_cc_code_to_code_trans" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans - **Paper:** https://arxiv.org/abs/2102.04664 ### Dataset Summary CodeXGLUE code-to-code-trans dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans The dataset is collected from several public repos, including Lucene(http://lucene.apache.org/), POI(http://poi.apache.org/), JGit(https://github.com/eclipse/jgit/) and Antlr(https://github.com/antlr/). We collect both the Java and C# versions of the codes and find the parallel functions. After removing duplicates and functions with the empty body, we split the whole dataset into training, validation and test sets. ### Supported Tasks and Leaderboards - `machine-translation`: The dataset can be used to train a model for translating code in Java to C# and vice versa. ### Languages - Java **programming** language - C# **programming** language ## Dataset Structure ### Data Instances An example of 'validation' looks as follows. ``` { "cs": "public DVRecord(RecordInputStream in1){_option_flags = in1.ReadInt();_promptTitle = ReadUnicodeString(in1);_errorTitle = ReadUnicodeString(in1);_promptText = ReadUnicodeString(in1);_errorText = ReadUnicodeString(in1);int field_size_first_formula = in1.ReadUShort();_not_used_1 = in1.ReadShort();_formula1 = NPOI.SS.Formula.Formula.Read(field_size_first_formula, in1);int field_size_sec_formula = in1.ReadUShort();_not_used_2 = in1.ReadShort();_formula2 = NPOI.SS.Formula.Formula.Read(field_size_sec_formula, in1);_regions = new CellRangeAddressList(in1);}\n", "id": 0, "java": "public DVRecord(RecordInputStream in) {_option_flags = in.readInt();_promptTitle = readUnicodeString(in);_errorTitle = readUnicodeString(in);_promptText = readUnicodeString(in);_errorText = readUnicodeString(in);int field_size_first_formula = in.readUShort();_not_used_1 = in.readShort();_formula1 = Formula.read(field_size_first_formula, in);int field_size_sec_formula = in.readUShort();_not_used_2 = in.readShort();_formula2 = Formula.read(field_size_sec_formula, in);_regions = new CellRangeAddressList(in);}\n" } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. 
#### default |field name| type | description | |----------|------|-----------------------------| |id |int32 | Index of the sample | |java |string| The java version of the code| |cs |string| The C# version of the code | ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|10300| 500|1000| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators https://github.com/microsoft, https://github.com/madlag ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Citation Information ``` @article{DBLP:journals/corr/abs-2102-04664, author = {Shuai Lu and Daya Guo and Shuo Ren and Junjie Huang and Alexey Svyatkovskiy and Ambrosio Blanco and Colin B. Clement and Dawn Drain and Daxin Jiang and Duyu Tang and Ge Li and Lidong Zhou and Linjun Shou and Long Zhou and Michele Tufano and Ming Gong and Ming Zhou and Nan Duan and Neel Sundaresan and Shao Kun Deng and Shengyu Fu and Shujie Liu}, title = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding and Generation}, journal = {CoRR}, volume = {abs/2102.04664}, year = {2021} } ``` ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
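As an illustration beyond the card itself (and assuming the card corresponds to the `code_x_glue_cc_code_to_code_trans` dataset on the Hugging Face Hub), the parallel Java/C# pairs can be inspected with a short sketch like the following; it simply prints one aligned pair from the single "default" config.

```
from datasets import load_dataset

# Load the parallel Java / C# functions described above (single "default" config).
ds = load_dataset("code_x_glue_cc_code_to_code_trans", split="validation")

# For Java -> C# translation, "java" is the source and "cs" the reference.
example = ds[0]
print("JAVA:", example["java"][:80])
print("C#  :", example["cs"][:80])
```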
code_x_glue_cc_code_to_code_trans
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "license:c-uda", "code-to-code", "arxiv:2102.04664", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["code"], "license": ["c-uda"], "multilinguality": ["other-programming-languages"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CodeXGlueCcCodeToCodeTrans", "tags": ["code-to-code"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "java", "dtype": "string"}, {"name": "cs", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4372641, "num_examples": 10300}, {"name": "validation", "num_bytes": 226407, "num_examples": 500}, {"name": "test", "num_bytes": 418587, "num_examples": 1000}], "download_size": 2064764, "dataset_size": 5017635}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-24T14:54:48+00:00
[ "2102.04664" ]
[ "code" ]
TAGS #task_categories-translation #annotations_creators-expert-generated #language_creators-found #multilinguality-other-programming-languages #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #code-to-code #arxiv-2102.04664 #region-us
Dataset Card for "code\_x\_glue\_cc\_code\_to\_code\_trans" =========================================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Paper: URL ### Dataset Summary CodeXGLUE code-to-code-trans dataset, available at URL The dataset is collected from several public repos, including Lucene(URL POI(URL JGit(URL and Antlr(URL We collect both the Java and C# versions of the codes and find the parallel functions. After removing duplicates and functions with the empty body, we split the whole dataset into training, validation and test sets. ### Supported Tasks and Leaderboards * 'machine-translation': The dataset can be used to train a model for translating code in Java to C# and vice versa. ### Languages * Java programming language * C# programming language Dataset Structure ----------------- ### Data Instances An example of 'validation' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### default field name: id, type: int32, description: Index of the sample field name: java, type: string, description: The java version of the code field name: cs, type: string, description: The C# version of the code ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE code-to-code-trans dataset, available at URL\n\n\nThe dataset is collected from several public repos, including Lucene(URL POI(URL JGit(URL and Antlr(URL\n\n\nWe collect both the Java and C# versions of the codes and find the parallel functions. After removing duplicates and functions with the empty body, we split the whole dataset into training, validation and test sets.", "### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for translating code in Java to C# and vice versa.", "### Languages\n\n\n* Java programming language\n* C# programming language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: java, type: string, description: The java version of the code\nfield name: cs, type: string, description: The C# version of the code", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-expert-generated #language_creators-found #multilinguality-other-programming-languages #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #code-to-code #arxiv-2102.04664 #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE code-to-code-trans dataset, available at URL\n\n\nThe dataset is collected from several public repos, including Lucene(URL POI(URL JGit(URL and Antlr(URL\n\n\nWe collect both the Java and C# versions of the codes and find the parallel functions. After removing duplicates and functions with the empty body, we split the whole dataset into training, validation and test sets.", "### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for translating code in Java to C# and vice versa.", "### Languages\n\n\n* Java programming language\n* C# programming language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: java, type: string, description: The java version of the code\nfield name: cs, type: string, description: The C# version of the code", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 93, 104, 42, 22, 19, 32, 57, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-expert-generated #language_creators-found #multilinguality-other-programming-languages #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #code-to-code #arxiv-2102.04664 #region-us \n### Dataset Summary\n\n\nCodeXGLUE code-to-code-trans dataset, available at URL\n\n\nThe dataset is collected from several public repos, including Lucene(URL POI(URL JGit(URL and Antlr(URL\n\n\nWe collect both the Java and C# versions of the codes and find the parallel functions. After removing duplicates and functions with the empty body, we split the whole dataset into training, validation and test sets.### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for translating code in Java to C# and vice versa.### Languages\n\n\n* Java programming language\n* C# programming language\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: java, type: string, description: The java version of the code\nfield name: cs, type: string, description: The C# version of the code### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nURL URL### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License." ]
69bd48c03223c2104342acd9a807caf61ac3efb8
# Dataset Card for "code_x_glue_cc_defect_detection" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection ### Dataset Summary CodeXGLUE Defect-detection dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection Given a source code, the task is to identify whether it is an insecure code that may attack software systems, such as resource leaks, use-after-free vulnerabilities and DoS attack. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code. The dataset we use comes from the paper Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. We combine all projects and split 80%/10%/10% for training/dev/test. ### Supported Tasks and Leaderboards - `multi-class-classification`: The dataset can be used to train a model for detecting if code has a defect in it. ### Languages - C **programming** language ## Dataset Structure ### Data Instances An example of 'validation' looks as follows. ``` { "commit_id": "aa1530dec499f7525d2ccaa0e3a876dc8089ed1e", "func": "static void filter_mirror_setup(NetFilterState *nf, Error **errp)\n{\n MirrorState *s = FILTER_MIRROR(nf);\n Chardev *chr;\n chr = qemu_chr_find(s->outdev);\n if (chr == NULL) {\n error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,\n \"Device '%s' not found\", s->outdev);\n qemu_chr_fe_init(&s->chr_out, chr, errp);", "id": 8, "project": "qemu", "target": true } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### default |field name| type | description | |----------|------|------------------------------------------| |id |int32 | Index of the sample | |func |string| The source code | |target |bool | 0 or 1 (vulnerability or not) | |project |string| Original project that contains this code | |commit_id |string| Commit identifier in the original project| ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|21854| 2732|2732| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

https://github.com/microsoft, https://github.com/madlag

### Licensing Information

Computational Use of Data Agreement (C-UDA) License.

### Citation Information

```
@inproceedings{zhou2019devign,
  title={Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks},
  author={Zhou, Yaqin and Liu, Shangqing and Siow, Jingkai and Du, Xiaoning and Liu, Yang},
  booktitle={Advances in Neural Information Processing Systems},
  pages={10197--10207},
  year={2019}
}
```

### Contributions

Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
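Below is a minimal, illustrative sketch of loading this dataset with the Hugging Face `datasets` library. It is not part of the original card; the dataset id and field names are taken from the card above, and the snippet assumes the dataset is available on the Hub under that id.

```python
from datasets import load_dataset

# Load all splits of the single "default" config.
ds = load_dataset("code_x_glue_cc_defect_detection")

# Inspect one validation example: `func` holds the C source code,
# `target` is the binary label (True/1 = insecure, False/0 = secure).
example = ds["validation"][0]
print(example["project"], example["commit_id"])
print("insecure" if example["target"] else "secure")
print(example["func"][:200])
```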
code_x_glue_cc_defect_detection
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:found", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "license:c-uda", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["code"], "license": ["c-uda"], "multilinguality": ["other-programming-languages"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "CodeXGlueCcDefectDetection", "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "func", "dtype": "string"}, {"name": "target", "dtype": "bool"}, {"name": "project", "dtype": "string"}, {"name": "commit_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 45723451, "num_examples": 21854}, {"name": "validation", "num_bytes": 5582533, "num_examples": 2732}, {"name": "test", "num_bytes": 5646740, "num_examples": 2732}], "download_size": 22289955, "dataset_size": 56952724}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-24T14:56:27+00:00
[]
[ "code" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #region-us
Dataset Card for "code\_x\_glue\_cc\_defect\_detection" ======================================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL ### Dataset Summary CodeXGLUE Defect-detection dataset, available at URL Given a source code, the task is to identify whether it is an insecure code that may attack software systems, such as resource leaks, use-after-free vulnerabilities and DoS attack. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code. The dataset we use comes from the paper Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. We combine all projects and split 80%/10%/10% for training/dev/test. ### Supported Tasks and Leaderboards * 'multi-class-classification': The dataset can be used to train a model for detecting if code has a defect in it. ### Languages * C programming language Dataset Structure ----------------- ### Data Instances An example of 'validation' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### default field name: id, type: int32, description: Index of the sample field name: func, type: string, description: The source code field name: target, type: bool, description: 0 or 1 (vulnerability or not) field name: project, type: string, description: Original project that contains this code field name: commit\_id, type: string, description: Commit identifier in the original project ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE Defect-detection dataset, available at URL\n\n\nGiven a source code, the task is to identify whether it is an insecure code that may attack software systems, such as resource leaks, use-after-free vulnerabilities and DoS attack. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code.\nThe dataset we use comes from the paper Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. We combine all projects and split 80%/10%/10% for training/dev/test.", "### Supported Tasks and Leaderboards\n\n\n* 'multi-class-classification': The dataset can be used to train a model for detecting if code has a defect in it.", "### Languages\n\n\n* C programming language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: func, type: string, description: The source code\nfield name: target, type: bool, description: 0 or 1 (vulnerability or not)\nfield name: project, type: string, description: Original project that contains this code\nfield name: commit\\_id, type: string, description: Commit identifier in the original project", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE Defect-detection dataset, available at URL\n\n\nGiven a source code, the task is to identify whether it is an insecure code that may attack software systems, such as resource leaks, use-after-free vulnerabilities and DoS attack. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code.\nThe dataset we use comes from the paper Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. We combine all projects and split 80%/10%/10% for training/dev/test.", "### Supported Tasks and Leaderboards\n\n\n* 'multi-class-classification': The dataset can be used to train a model for detecting if code has a defect in it.", "### Languages\n\n\n* C programming language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: func, type: string, description: The source code\nfield name: target, type: bool, description: 0 or 1 (vulnerability or not)\nfield name: project, type: string, description: Original project that contains this code\nfield name: commit\\_id, type: string, description: Commit identifier in the original project", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 90, 146, 41, 16, 19, 32, 99, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-10K<n<100K #source_datasets-original #language-code #license-c-uda #region-us \n### Dataset Summary\n\n\nCodeXGLUE Defect-detection dataset, available at URL\n\n\nGiven a source code, the task is to identify whether it is an insecure code that may attack software systems, such as resource leaks, use-after-free vulnerabilities and DoS attack. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code.\nThe dataset we use comes from the paper Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. We combine all projects and split 80%/10%/10% for training/dev/test.### Supported Tasks and Leaderboards\n\n\n* 'multi-class-classification': The dataset can be used to train a model for detecting if code has a defect in it.### Languages\n\n\n* C programming language\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: func, type: string, description: The source code\nfield name: target, type: bool, description: 0 or 1 (vulnerability or not)\nfield name: project, type: string, description: Original project that contains this code\nfield name: commit\\_id, type: string, description: Commit identifier in the original project### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?" ]
2678080e78a4b20ad79d1f0a514f04d815912564
# Dataset Card for "code_x_glue_ct_code_to_text" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text ### Dataset Summary CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text The dataset we use comes from CodeSearchNet and we filter the dataset as the following: - Remove examples that codes cannot be parsed into an abstract syntax tree. - Remove examples that #tokens of documents is < 3 or >256 - Remove examples that documents contain special tokens (e.g. <img ...> or https:...) - Remove examples that documents are not English. ### Supported Tasks and Leaderboards - `machine-translation`: The dataset can be used to train a model for automatically generating **English** docstrings for code. ### Languages - Go **programming** language - Java **programming** language - Javascript **programming** language - PHP **programming** language - Python **programming** language - Ruby **programming** language - English **natural** language ## Dataset Structure ### Data Instances #### go An example of 'test' looks as follows. 
``` { "code": "func NewSTM(c *v3.Client, apply func(STM) error, so ...stmOption) (*v3.TxnResponse, error) {\n\topts := &stmOptions{ctx: c.Ctx()}\n\tfor _, f := range so {\n\t\tf(opts)\n\t}\n\tif len(opts.prefetch) != 0 {\n\t\tf := apply\n\t\tapply = func(s STM) error {\n\t\t\ts.Get(opts.prefetch...)\n\t\t\treturn f(s)\n\t\t}\n\t}\n\treturn runSTM(mkSTM(c, opts), apply)\n}", "code_tokens": ["func", "NewSTM", "(", "c", "*", "v3", ".", "Client", ",", "apply", "func", "(", "STM", ")", "error", ",", "so", "...", "stmOption", ")", "(", "*", "v3", ".", "TxnResponse", ",", "error", ")", "{", "opts", ":=", "&", "stmOptions", "{", "ctx", ":", "c", ".", "Ctx", "(", ")", "}", "\n", "for", "_", ",", "f", ":=", "range", "so", "{", "f", "(", "opts", ")", "\n", "}", "\n", "if", "len", "(", "opts", ".", "prefetch", ")", "!=", "0", "{", "f", ":=", "apply", "\n", "apply", "=", "func", "(", "s", "STM", ")", "error", "{", "s", ".", "Get", "(", "opts", ".", "prefetch", "...", ")", "\n", "return", "f", "(", "s", ")", "\n", "}", "\n", "}", "\n", "return", "runSTM", "(", "mkSTM", "(", "c", ",", "opts", ")", ",", "apply", ")", "\n", "}"], "docstring": "// NewSTM initiates a new STM instance, using serializable snapshot isolation by default.", "docstring_tokens": ["NewSTM", "initiates", "a", "new", "STM", "instance", "using", "serializable", "snapshot", "isolation", "by", "default", "."], "func_name": "NewSTM", "id": 0, "language": "go", "original_string": "func NewSTM(c *v3.Client, apply func(STM) error, so ...stmOption) (*v3.TxnResponse, error) {\n\topts := &stmOptions{ctx: c.Ctx()}\n\tfor _, f := range so {\n\t\tf(opts)\n\t}\n\tif len(opts.prefetch) != 0 {\n\t\tf := apply\n\t\tapply = func(s STM) error {\n\t\t\ts.Get(opts.prefetch...)\n\t\t\treturn f(s)\n\t\t}\n\t}\n\treturn runSTM(mkSTM(c, opts), apply)\n}", "path": "clientv3/concurrency/stm.go", "repo": "etcd-io/etcd", "sha": "616592d9ba993e3fe9798eef581316016df98906", "url": "https://github.com/etcd-io/etcd/blob/616592d9ba993e3fe9798eef581316016df98906/clientv3/concurrency/stm.go#L89-L102" } ``` #### java An example of 'test' looks as follows. ``` { "code": "protected final void fastPathOrderedEmit(U value, boolean delayError, Disposable disposable) {\n final Observer<? 
super V> observer = downstream;\n final SimplePlainQueue<U> q = queue;\n\n if (wip.get() == 0 && wip.compareAndSet(0, 1)) {\n if (q.isEmpty()) {\n accept(observer, value);\n if (leave(-1) == 0) {\n return;\n }\n } else {\n q.offer(value);\n }\n } else {\n q.offer(value);\n if (!enter()) {\n return;\n }\n }\n QueueDrainHelper.drainLoop(q, observer, delayError, disposable, this);\n }", "code_tokens": ["protected", "final", "void", "fastPathOrderedEmit", "(", "U", "value", ",", "boolean", "delayError", ",", "Disposable", "disposable", ")", "{", "final", "Observer", "<", "?", "super", "V", ">", "observer", "=", "downstream", ";", "final", "SimplePlainQueue", "<", "U", ">", "q", "=", "queue", ";", "if", "(", "wip", ".", "get", "(", ")", "==", "0", "&&", "wip", ".", "compareAndSet", "(", "0", ",", "1", ")", ")", "{", "if", "(", "q", ".", "isEmpty", "(", ")", ")", "{", "accept", "(", "observer", ",", "value", ")", ";", "if", "(", "leave", "(", "-", "1", ")", "==", "0", ")", "{", "return", ";", "}", "}", "else", "{", "q", ".", "offer", "(", "value", ")", ";", "}", "}", "else", "{", "q", ".", "offer", "(", "value", ")", ";", "if", "(", "!", "enter", "(", ")", ")", "{", "return", ";", "}", "}", "QueueDrainHelper", ".", "drainLoop", "(", "q", ",", "observer", ",", "delayError", ",", "disposable", ",", "this", ")", ";", "}"], "docstring": "Makes sure the fast-path emits in order.\n@param value the value to emit or queue up\n@param delayError if true, errors are delayed until the source has terminated\n@param disposable the resource to dispose if the drain terminates", "docstring_tokens": ["Makes", "sure", "the", "fast", "-", "path", "emits", "in", "order", "."], "func_name": "QueueDrainObserver.fastPathOrderedEmit", "id": 0, "language": "java", "original_string": "protected final void fastPathOrderedEmit(U value, boolean delayError, Disposable disposable) {\n final Observer<? super V> observer = downstream;\n final SimplePlainQueue<U> q = queue;\n\n if (wip.get() == 0 && wip.compareAndSet(0, 1)) {\n if (q.isEmpty()) {\n accept(observer, value);\n if (leave(-1) == 0) {\n return;\n }\n } else {\n q.offer(value);\n }\n } else {\n q.offer(value);\n if (!enter()) {\n return;\n }\n }\n QueueDrainHelper.drainLoop(q, observer, delayError, disposable, this);\n }", "path": "src/main/java/io/reactivex/internal/observers/QueueDrainObserver.java", "repo": "ReactiveX/RxJava", "sha": "ac84182aa2bd866b53e01c8e3fe99683b882c60e", "url": "https://github.com/ReactiveX/RxJava/blob/ac84182aa2bd866b53e01c8e3fe99683b882c60e/src/main/java/io/reactivex/internal/observers/QueueDrainObserver.java#L88-L108" } ``` #### javascript An example of 'test' looks as follows. 
``` { "code": "function createInstance(defaultConfig) {\n var context = new Axios(defaultConfig);\n var instance = bind(Axios.prototype.request, context);\n\n // Copy axios.prototype to instance\n utils.extend(instance, Axios.prototype, context);\n\n // Copy context to instance\n utils.extend(instance, context);\n\n return instance;\n}", "code_tokens": ["function", "createInstance", "(", "defaultConfig", ")", "{", "var", "context", "=", "new", "Axios", "(", "defaultConfig", ")", ";", "var", "instance", "=", "bind", "(", "Axios", ".", "prototype", ".", "request", ",", "context", ")", ";", "// Copy axios.prototype to instance", "utils", ".", "extend", "(", "instance", ",", "Axios", ".", "prototype", ",", "context", ")", ";", "// Copy context to instance", "utils", ".", "extend", "(", "instance", ",", "context", ")", ";", "return", "instance", ";", "}"], "docstring": "Create an instance of Axios\n\n@param {Object} defaultConfig The default config for the instance\n@return {Axios} A new instance of Axios", "docstring_tokens": ["Create", "an", "instance", "of", "Axios"], "func_name": "createInstance", "id": 0, "language": "javascript", "original_string": "function createInstance(defaultConfig) {\n var context = new Axios(defaultConfig);\n var instance = bind(Axios.prototype.request, context);\n\n // Copy axios.prototype to instance\n utils.extend(instance, Axios.prototype, context);\n\n // Copy context to instance\n utils.extend(instance, context);\n\n return instance;\n}", "path": "lib/axios.js", "repo": "axios/axios", "sha": "92d231387fe2092f8736bc1746d4caa766b675f5", "url": "https://github.com/axios/axios/blob/92d231387fe2092f8736bc1746d4caa766b675f5/lib/axios.js#L15-L26" } ``` #### php An example of 'train' looks as follows. ``` { "code": "public static function build($serviceAddress, $restConfigPath, array $config = [])\n {\n $config += [\n 'httpHandler' => null,\n ];\n list($baseUri, $port) = self::normalizeServiceAddress($serviceAddress);\n $requestBuilder = new RequestBuilder(\"$baseUri:$port\", $restConfigPath);\n $httpHandler = $config['httpHandler'] ?: self::buildHttpHandlerAsync();\n return new RestTransport($requestBuilder, $httpHandler);\n }", "code_tokens": ["public", "static", "function", "build", "(", "$", "serviceAddress", ",", "$", "restConfigPath", ",", "array", "$", "config", "=", "[", "]", ")", "{", "$", "config", "+=", "[", "'httpHandler'", "=>", "null", ",", "]", ";", "list", "(", "$", "baseUri", ",", "$", "port", ")", "=", "self", "::", "normalizeServiceAddress", "(", "$", "serviceAddress", ")", ";", "$", "requestBuilder", "=", "new", "RequestBuilder", "(", "\"$baseUri:$port\"", ",", "$", "restConfigPath", ")", ";", "$", "httpHandler", "=", "$", "config", "[", "'httpHandler'", "]", "?", ":", "self", "::", "buildHttpHandlerAsync", "(", ")", ";", "return", "new", "RestTransport", "(", "$", "requestBuilder", ",", "$", "httpHandler", ")", ";", "}"], "docstring": "Builds a RestTransport.\n\n@param string $serviceAddress\nThe address of the API remote host, for example \"example.googleapis.com\".\n@param string $restConfigPath\nPath to rest config file.\n@param array $config {\nConfig options used to construct the gRPC transport.\n\n@type callable $httpHandler A handler used to deliver PSR-7 requests.\n}\n@return RestTransport\n@throws ValidationException", "docstring_tokens": ["Builds", "a", "RestTransport", "."], "func_name": "RestTransport.build", "id": 0, "language": "php", "original_string": "public static function build($serviceAddress, $restConfigPath, array $config = 
[])\n {\n $config += [\n 'httpHandler' => null,\n ];\n list($baseUri, $port) = self::normalizeServiceAddress($serviceAddress);\n $requestBuilder = new RequestBuilder(\"$baseUri:$port\", $restConfigPath);\n $httpHandler = $config['httpHandler'] ?: self::buildHttpHandlerAsync();\n return new RestTransport($requestBuilder, $httpHandler);\n }", "path": "src/Transport/RestTransport.php", "repo": "googleapis/gax-php", "sha": "48387fb818c6882296710a2302a0aa973b99afb2", "url": "https://github.com/googleapis/gax-php/blob/48387fb818c6882296710a2302a0aa973b99afb2/src/Transport/RestTransport.php#L85-L94" } ``` #### python An example of 'validation' looks as follows. ``` { "code": "def save_act(self, path=None):\n \"\"\"Save model to a pickle located at `path`\"\"\"\n if path is None:\n path = os.path.join(logger.get_dir(), \"model.pkl\")\n\n with tempfile.TemporaryDirectory() as td:\n save_variables(os.path.join(td, \"model\"))\n arc_name = os.path.join(td, \"packed.zip\")\n with zipfile.ZipFile(arc_name, 'w') as zipf:\n for root, dirs, files in os.walk(td):\n for fname in files:\n file_path = os.path.join(root, fname)\n if file_path != arc_name:\n zipf.write(file_path, os.path.relpath(file_path, td))\n with open(arc_name, \"rb\") as f:\n model_data = f.read()\n with open(path, \"wb\") as f:\n cloudpickle.dump((model_data, self._act_params), f)", "code_tokens": ["def", "save_act", "(", "self", ",", "path", "=", "None", ")", ":", "if", "path", "is", "None", ":", "path", "=", "os", ".", "path", ".", "join", "(", "logger", ".", "get_dir", "(", ")", ",", "\"model.pkl\"", ")", "with", "tempfile", ".", "TemporaryDirectory", "(", ")", "as", "td", ":", "save_variables", "(", "os", ".", "path", ".", "join", "(", "td", ",", "\"model\"", ")", ")", "arc_name", "=", "os", ".", "path", ".", "join", "(", "td", ",", "\"packed.zip\"", ")", "with", "zipfile", ".", "ZipFile", "(", "arc_name", ",", "'w'", ")", "as", "zipf", ":", "for", "root", ",", "dirs", ",", "files", "in", "os", ".", "walk", "(", "td", ")", ":", "for", "fname", "in", "files", ":", "file_path", "=", "os", ".", "path", ".", "join", "(", "root", ",", "fname", ")", "if", "file_path", "!=", "arc_name", ":", "zipf", ".", "write", "(", "file_path", ",", "os", ".", "path", ".", "relpath", "(", "file_path", ",", "td", ")", ")", "with", "open", "(", "arc_name", ",", "\"rb\"", ")", "as", "f", ":", "model_data", "=", "f", ".", "read", "(", ")", "with", "open", "(", "path", ",", "\"wb\"", ")", "as", "f", ":", "cloudpickle", ".", "dump", "(", "(", "model_data", ",", "self", ".", "_act_params", ")", ",", "f", ")"], "docstring": "Save model to a pickle located at `path`", "docstring_tokens": ["Save", "model", "to", "a", "pickle", "located", "at", "path"], "func_name": "ActWrapper.save_act", "id": 0, "language": "python", "original_string": "def save_act(self, path=None):\n \"\"\"Save model to a pickle located at `path`\"\"\"\n if path is None:\n path = os.path.join(logger.get_dir(), \"model.pkl\")\n\n with tempfile.TemporaryDirectory() as td:\n save_variables(os.path.join(td, \"model\"))\n arc_name = os.path.join(td, \"packed.zip\")\n with zipfile.ZipFile(arc_name, 'w') as zipf:\n for root, dirs, files in os.walk(td):\n for fname in files:\n file_path = os.path.join(root, fname)\n if file_path != arc_name:\n zipf.write(file_path, os.path.relpath(file_path, td))\n with open(arc_name, \"rb\") as f:\n model_data = f.read()\n with open(path, \"wb\") as f:\n cloudpickle.dump((model_data, self._act_params), f)", "path": "baselines/deepq/deepq.py", "repo": 
"openai/baselines", "sha": "3301089b48c42b87b396e246ea3f56fa4bfc9678", "url": "https://github.com/openai/baselines/blob/3301089b48c42b87b396e246ea3f56fa4bfc9678/baselines/deepq/deepq.py#L55-L72" } ``` #### ruby An example of 'train' looks as follows. ``` { "code": "def render_body(context, options)\n if options.key?(:partial)\n [render_partial(context, options)]\n else\n StreamingTemplateRenderer.new(@lookup_context).render(context, options)\n end\n end", "code_tokens": ["def", "render_body", "(", "context", ",", "options", ")", "if", "options", ".", "key?", "(", ":partial", ")", "[", "render_partial", "(", "context", ",", "options", ")", "]", "else", "StreamingTemplateRenderer", ".", "new", "(", "@lookup_context", ")", ".", "render", "(", "context", ",", "options", ")", "end", "end"], "docstring": "Render but returns a valid Rack body. If fibers are defined, we return\n a streaming body that renders the template piece by piece.\n\n Note that partials are not supported to be rendered with streaming,\n so in such cases, we just wrap them in an array.", "docstring_tokens": ["Render", "but", "returns", "a", "valid", "Rack", "body", ".", "If", "fibers", "are", "defined", "we", "return", "a", "streaming", "body", "that", "renders", "the", "template", "piece", "by", "piece", "."], "func_name": "ActionView.Renderer.render_body", "id": 0, "language": "ruby", "original_string": "def render_body(context, options)\n if options.key?(:partial)\n [render_partial(context, options)]\n else\n StreamingTemplateRenderer.new(@lookup_context).render(context, options)\n end\n end", "path": "actionview/lib/action_view/renderer/renderer.rb", "repo": "rails/rails", "sha": "85a8bc644be69908f05740a5886ec19cd3679df5", "url": "https://github.com/rails/rails/blob/85a8bc644be69908f05740a5886ec19cd3679df5/actionview/lib/action_view/renderer/renderer.rb#L38-L44" } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### go, java, javascript, php, python, ruby | field name | type | description | |----------------|----------------|-----------------------------------------------------------------------------------| |id |int32 | Index of the sample | |repo |string | repo: the owner/repo | |path |string | path: the full path to the original file | |func_name |string | func_name: the function or method name | |original_string |string | original_string: the raw string before tokenization or parsing | |language |string | language: the programming language name | |code |string | code/function: the part of the original_string that is code | |code_tokens |Sequence[string]| code_tokens/function_tokens: tokenized version of code | |docstring |string | docstring: the top-level comment or docstring, if it exists in the original string| |docstring_tokens|Sequence[string]| docstring_tokens: tokenized version of docstring | |sha |string | sha of the file | |url |string | url of the file | ### Data Splits | name |train |validation|test | |----------|-----:|---------:|----:| |go |167288| 7325| 8122| |java |164923| 5183|10955| |javascript| 58025| 3885| 3291| |php |241241| 12982|14014| |python |251820| 13914|14918| |ruby | 24927| 1400| 1261| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Data from CodeSearchNet Challenge dataset. [More Information Needed] #### Who are the source language producers? Software Engineering developers. 
### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators https://github.com/microsoft, https://github.com/madlag ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Citation Information ``` @article{husain2019codesearchnet, title={Codesearchnet challenge: Evaluating the state of semantic code search}, author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc}, journal={arXiv preprint arXiv:1909.09436}, year={2019} } ``` ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
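As a usage illustration (not part of the original card), the sketch below loads one language config with the Hugging Face `datasets` library and pairs each function with its English docstring; it assumes the dataset is available on the Hub under the id shown on this card.

```python
from datasets import load_dataset

# Each programming language is a separate config; Ruby is the smallest.
ds = load_dataset("code_x_glue_ct_code_to_text", "ruby", split="validation")

# Build (code, docstring) pairs, e.g. as supervision for a code-summarization model.
pairs = [(ex["code"], ex["docstring"]) for ex in ds]
print(len(pairs))   # 1400 examples in the Ruby validation split
print(pairs[0][1])  # the English docstring of the first function
```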
code_x_glue_ct_code_to_text
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:code", "language:en", "license:c-uda", "code-to-text", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["code", "en"], "license": ["c-uda"], "multilinguality": ["other-programming-languages"], "size_categories": ["100K<n<1M", "10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CodeXGlueCtCodeToText", "config_names": ["go", "java", "javascript", "php", "python", "ruby"], "tags": ["code-to-text"], "dataset_info": [{"config_name": "go", "features": [{"name": "id", "dtype": "int32"}, {"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "sha", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 342243143, "num_examples": 167288}, {"name": "validation", "num_bytes": 13721860, "num_examples": 7325}, {"name": "test", "num_bytes": 16328406, "num_examples": 8122}], "download_size": 121341698, "dataset_size": 372293409}, {"config_name": "java", "features": [{"name": "id", "dtype": "int32"}, {"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "sha", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 452553835, "num_examples": 164923}, {"name": "validation", "num_bytes": 13366344, "num_examples": 5183}, {"name": "test", "num_bytes": 29080753, "num_examples": 10955}], "download_size": 154701399, "dataset_size": 495000932}, {"config_name": "javascript", "features": [{"name": "id", "dtype": "int32"}, {"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "sha", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 160860431, "num_examples": 58025}, {"name": "validation", "num_bytes": 10337344, "num_examples": 3885}, {"name": "test", "num_bytes": 10190713, "num_examples": 3291}], "download_size": 65788314, "dataset_size": 181388488}, {"config_name": "php", "features": [{"name": "id", "dtype": "int32"}, {"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "sha", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 614654499, "num_examples": 241241}, {"name": "validation", "num_bytes": 33283045, "num_examples": 12982}, {"name": "test", "num_bytes": 
35374993, "num_examples": 14014}], "download_size": 219692158, "dataset_size": 683312537}, {"config_name": "python", "features": [{"name": "id", "dtype": "int32"}, {"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "sha", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 813663148, "num_examples": 251820}, {"name": "validation", "num_bytes": 46888564, "num_examples": 13914}, {"name": "test", "num_bytes": 50659688, "num_examples": 14918}], "download_size": 325551862, "dataset_size": 911211400}, {"config_name": "ruby", "features": [{"name": "id", "dtype": "int32"}, {"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "sha", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51956439, "num_examples": 24927}, {"name": "validation", "num_bytes": 2821037, "num_examples": 1400}, {"name": "test", "num_bytes": 2671551, "num_examples": 1261}], "download_size": 21921316, "dataset_size": 57449027}], "configs": [{"config_name": "go", "data_files": [{"split": "train", "path": "go/train-*"}, {"split": "validation", "path": "go/validation-*"}, {"split": "test", "path": "go/test-*"}]}, {"config_name": "java", "data_files": [{"split": "train", "path": "java/train-*"}, {"split": "validation", "path": "java/validation-*"}, {"split": "test", "path": "java/test-*"}]}, {"config_name": "javascript", "data_files": [{"split": "train", "path": "javascript/train-*"}, {"split": "validation", "path": "javascript/validation-*"}, {"split": "test", "path": "javascript/test-*"}]}, {"config_name": "php", "data_files": [{"split": "train", "path": "php/train-*"}, {"split": "validation", "path": "php/validation-*"}, {"split": "test", "path": "php/test-*"}]}, {"config_name": "python", "data_files": [{"split": "train", "path": "python/train-*"}, {"split": "validation", "path": "python/validation-*"}, {"split": "test", "path": "python/test-*"}]}, {"config_name": "ruby", "data_files": [{"split": "train", "path": "ruby/train-*"}, {"split": "validation", "path": "ruby/validation-*"}, {"split": "test", "path": "ruby/test-*"}]}]}
2024-01-24T15:09:09+00:00
[]
[ "code", "en" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-code #language-English #license-c-uda #code-to-text #region-us
Dataset Card for "code\_x\_glue\_ct\_code\_to\_text" ==================================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL ### Dataset Summary CodeXGLUE code-to-text dataset, available at URL The dataset we use comes from CodeSearchNet and we filter the dataset as the following: * Remove examples that codes cannot be parsed into an abstract syntax tree. * Remove examples that #tokens of documents is < 3 or >256 * Remove examples that documents contain special tokens (e.g. <img ...> or https:...) * Remove examples that documents are not English. ### Supported Tasks and Leaderboards * 'machine-translation': The dataset can be used to train a model for automatically generating English docstrings for code. ### Languages * Go programming language * Java programming language * Javascript programming language * PHP programming language * Python programming language * Ruby programming language * English natural language Dataset Structure ----------------- ### Data Instances #### go An example of 'test' looks as follows. #### java An example of 'test' looks as follows. #### javascript An example of 'test' looks as follows. #### php An example of 'train' looks as follows. #### python An example of 'validation' looks as follows. #### ruby An example of 'train' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### go, java, javascript, php, python, ruby field name: id, type: int32, description: Index of the sample field name: repo, type: string, description: repo: the owner/repo field name: path, type: string, description: path: the full path to the original file field name: func\_name, type: string, description: func\_name: the function or method name field name: original\_string, type: string, description: original\_string: the raw string before tokenization or parsing field name: language, type: string, description: language: the programming language name field name: code, type: string, description: code/function: the part of the original\_string that is code field name: code\_tokens, type: Sequence[string], description: code\_tokens/function\_tokens: tokenized version of code field name: docstring, type: string, description: docstring: the top-level comment or docstring, if it exists in the original string field name: docstring\_tokens, type: Sequence[string], description: docstring\_tokens: tokenized version of docstring field name: sha, type: string, description: sha of the file field name: url, type: string, description: url of the file ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Data from CodeSearchNet Challenge dataset. #### Who are the source language producers? Software Engineering developers. ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE code-to-text dataset, available at URL\n\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n\n\n* Remove examples that codes cannot be parsed into an abstract syntax tree.\n* Remove examples that #tokens of documents is < 3 or >256\n* Remove examples that documents contain special tokens (e.g. <img ...> or https:...)\n* Remove examples that documents are not English.", "### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for automatically generating English docstrings for code.", "### Languages\n\n\n* Go programming language\n* Java programming language\n* Javascript programming language\n* PHP programming language\n* Python programming language\n* Ruby programming language\n* English natural language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### go\n\n\nAn example of 'test' looks as follows.", "#### java\n\n\nAn example of 'test' looks as follows.", "#### javascript\n\n\nAn example of 'test' looks as follows.", "#### php\n\n\nAn example of 'train' looks as follows.", "#### python\n\n\nAn example of 'validation' looks as follows.", "#### ruby\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### go, java, javascript, php, python, ruby\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: repo, type: string, description: repo: the owner/repo\nfield name: path, type: string, description: path: the full path to the original file\nfield name: func\\_name, type: string, description: func\\_name: the function or method name\nfield name: original\\_string, type: string, description: original\\_string: the raw string before tokenization or parsing\nfield name: language, type: string, description: language: the programming language name\nfield name: code, type: string, description: code/function: the part of the original\\_string that is code\nfield name: code\\_tokens, type: Sequence[string], description: code\\_tokens/function\\_tokens: tokenized version of code\nfield name: docstring, type: string, description: docstring: the top-level comment or docstring, if it exists in the original string\nfield name: docstring\\_tokens, type: Sequence[string], description: docstring\\_tokens: tokenized version of docstring\nfield name: sha, type: string, description: sha of the file\nfield name: url, type: string, description: url of the file", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData from CodeSearchNet Challenge dataset.", "#### Who are the source language producers?\n\n\nSoftware Engineering developers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-code #language-English #license-c-uda #code-to-text #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE code-to-text dataset, available at URL\n\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n\n\n* Remove examples that codes cannot be parsed into an abstract syntax tree.\n* Remove examples that #tokens of documents is < 3 or >256\n* Remove examples that documents contain special tokens (e.g. <img ...> or https:...)\n* Remove examples that documents are not English.", "### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for automatically generating English docstrings for code.", "### Languages\n\n\n* Go programming language\n* Java programming language\n* Javascript programming language\n* PHP programming language\n* Python programming language\n* Ruby programming language\n* English natural language\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### go\n\n\nAn example of 'test' looks as follows.", "#### java\n\n\nAn example of 'test' looks as follows.", "#### javascript\n\n\nAn example of 'test' looks as follows.", "#### php\n\n\nAn example of 'train' looks as follows.", "#### python\n\n\nAn example of 'validation' looks as follows.", "#### ruby\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### go, java, javascript, php, python, ruby\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: repo, type: string, description: repo: the owner/repo\nfield name: path, type: string, description: path: the full path to the original file\nfield name: func\\_name, type: string, description: func\\_name: the function or method name\nfield name: original\\_string, type: string, description: original\\_string: the raw string before tokenization or parsing\nfield name: language, type: string, description: language: the programming language name\nfield name: code, type: string, description: code/function: the part of the original\\_string that is code\nfield name: code\\_tokens, type: Sequence[string], description: code\\_tokens/function\\_tokens: tokenized version of code\nfield name: docstring, type: string, description: docstring: the top-level comment or docstring, if it exists in the original string\nfield name: docstring\\_tokens, type: Sequence[string], description: docstring\\_tokens: tokenized version of docstring\nfield name: sha, type: string, description: sha of the file\nfield name: url, type: string, description: url of the file", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData from CodeSearchNet Challenge dataset.", "#### Who are the source language producers?\n\n\nSoftware Engineering developers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### 
Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 98, 109, 39, 45, 6, 14, 15, 14, 16, 17, 16, 32, 308, 11, 7, 4, 19, 15, 5, 5, 9, 18, 7, 8, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-code #language-English #license-c-uda #code-to-text #region-us \n### Dataset Summary\n\n\nCodeXGLUE code-to-text dataset, available at URL\n\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n\n\n* Remove examples that codes cannot be parsed into an abstract syntax tree.\n* Remove examples that #tokens of documents is < 3 or >256\n* Remove examples that documents contain special tokens (e.g. <img ...> or https:...)\n* Remove examples that documents are not English.### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for automatically generating English docstrings for code.### Languages\n\n\n* Go programming language\n* Java programming language\n* Javascript programming language\n* PHP programming language\n* Python programming language\n* Ruby programming language\n* English natural language\n\n\nDataset Structure\n-----------------### Data Instances#### go\n\n\nAn example of 'test' looks as follows.#### java\n\n\nAn example of 'test' looks as follows.#### javascript\n\n\nAn example of 'test' looks as follows.#### php\n\n\nAn example of 'train' looks as follows.#### python\n\n\nAn example of 'validation' looks as follows.#### ruby\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits." ]
9d83d86fe015b4d2fc99c2de30f0f897f9df4909
# Dataset Card for "code_x_glue_tc_nl_code_search_adv" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/NL-code-search-Adv - **Paper:** https://arxiv.org/abs/2102.04664 ### Dataset Summary CodeXGLUE NL-code-search-Adv dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/NL-code-search-Adv The dataset we use comes from CodeSearchNet and we filter the dataset as the following: - Remove examples that codes cannot be parsed into an abstract syntax tree. - Remove examples that #tokens of documents is < 3 or >256 - Remove examples that documents contain special tokens (e.g. <img ...> or https:...) - Remove examples that documents are not English. ### Supported Tasks and Leaderboards - `document-retrieval`: The dataset can be used to train a model for retrieving top-k codes from a given **English** natural language query. ### Languages - Python **programming** language - English **natural** language ## Dataset Structure ### Data Instances An example of 'validation' looks as follows. 
``` { "argument_list": "", "code": "def Func(arg_0, arg_1='.', arg_2=True, arg_3=False, **arg_4):\n \"\"\"Downloads Dailymotion videos by URL.\n \"\"\"\n\n arg_5 = get_content(rebuilt_url(arg_0))\n arg_6 = json.loads(match1(arg_5, r'qualities\":({.+?}),\"'))\n arg_7 = match1(arg_5, r'\"video_title\"\\s*:\\s*\"([^\"]+)\"') or \\\n match1(arg_5, r'\"title\"\\s*:\\s*\"([^\"]+)\"')\n arg_7 = unicodize(arg_7)\n\n for arg_8 in ['1080','720','480','380','240','144','auto']:\n try:\n arg_9 = arg_6[arg_8][1][\"url\"]\n if arg_9:\n break\n except KeyError:\n pass\n\n arg_10, arg_11, arg_12 = url_info(arg_9)\n\n print_info(site_info, arg_7, arg_10, arg_12)\n if not arg_3:\n download_urls([arg_9], arg_7, arg_11, arg_12, arg_1=arg_1, arg_2=arg_2)", "code_tokens": ["def", "Func", "(", "arg_0", ",", "arg_1", "=", "'.'", ",", "arg_2", "=", "True", ",", "arg_3", "=", "False", ",", "**", "arg_4", ")", ":", "arg_5", "=", "get_content", "(", "rebuilt_url", "(", "arg_0", ")", ")", "arg_6", "=", "json", ".", "loads", "(", "match1", "(", "arg_5", ",", "r'qualities\":({.+?}),\"'", ")", ")", "arg_7", "=", "match1", "(", "arg_5", ",", "r'\"video_title\"\\s*:\\s*\"([^\"]+)\"'", ")", "or", "match1", "(", "arg_5", ",", "r'\"title\"\\s*:\\s*\"([^\"]+)\"'", ")", "arg_7", "=", "unicodize", "(", "arg_7", ")", "for", "arg_8", "in", "[", "'1080'", ",", "'720'", ",", "'480'", ",", "'380'", ",", "'240'", ",", "'144'", ",", "'auto'", "]", ":", "try", ":", "arg_9", "=", "arg_6", "[", "arg_8", "]", "[", "1", "]", "[", "\"url\"", "]", "if", "arg_9", ":", "break", "except", "KeyError", ":", "pass", "arg_10", ",", "arg_11", ",", "arg_12", "=", "url_info", "(", "arg_9", ")", "print_info", "(", "site_info", ",", "arg_7", ",", "arg_10", ",", "arg_12", ")", "if", "not", "arg_3", ":", "download_urls", "(", "[", "arg_9", "]", ",", "arg_7", ",", "arg_11", ",", "arg_12", ",", "arg_1", "=", "arg_1", ",", "arg_2", "=", "arg_2", ")"], "docstring": "Downloads Dailymotion videos by URL.", "docstring_summary": "Downloads Dailymotion videos by URL.", "docstring_tokens": ["Downloads", "Dailymotion", "videos", "by", "URL", "."], "func_name": "", "id": 0, "identifier": "dailymotion_download", "language": "python", "nwo": "soimort/you-get", "original_string": "", "parameters": "(url, output_dir='.', merge=True, info_only=False, **kwargs)", "path": "src/you_get/extractors/dailymotion.py", "repo": "", "return_statement": "", "score": 0.9997601509094238, "sha": "b746ac01c9f39de94cac2d56f665285b0523b974", "url": "https://github.com/soimort/you-get/blob/b746ac01c9f39de94cac2d56f665285b0523b974/src/you_get/extractors/dailymotion.py#L13-L35" } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. 
#### default

| field name | type | description |
|------------------|-----------------|--------------------------------------------------------------------------------------|
|id |int32 | Index of the sample |
|repo |string | repo: the owner/repo |
|path |string | path: the full path to the original file |
|func_name |string | func_name: the function or method name |
|original_string |string | original_string: the raw string before tokenization or parsing |
|language |string | language: the programming language |
|code |string | code/function: the part of the original_string that is code |
|code_tokens |Sequence[string] | code_tokens/function_tokens: tokenized version of code |
|docstring |string | docstring: the top-level comment or docstring, if it exists in the original string |
|docstring_tokens |Sequence[string] | docstring_tokens: tokenized version of docstring |
|sha |string | sha of the file |
|url |string | url of the file |
|docstring_summary |string | Summary of the docstring |
|parameters |string | parameters of the function |
|return_statement |string | return statement |
|argument_list |string | list of arguments of the function |
|identifier |string | identifier: the function identifier |
|nwo |string | nwo: the name-with-owner ("owner/repo") of the source repository |
|score |float32 | score for this search |

### Data Splits

| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|251820| 9604|19210|

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

Data from the CodeSearchNet Challenge dataset.

#### Who are the source language producers?

Software developers.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

https://github.com/microsoft, https://github.com/madlag

### Licensing Information

Computational Use of Data Agreement (C-UDA) License.

### Citation Information

```
@article{DBLP:journals/corr/abs-2102-04664,
  author    = {Shuai Lu and Daya Guo and Shuo Ren and Junjie Huang and Alexey Svyatkovskiy and Ambrosio Blanco and Colin B. Clement and Dawn Drain and Daxin Jiang and Duyu Tang and Ge Li and Lidong Zhou and Linjun Shou and Long Zhou and Michele Tufano and Ming Gong and Ming Zhou and Nan Duan and Neel Sundaresan and Shao Kun Deng and Shengyu Fu and Shujie Liu},
  title     = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding and Generation},
  journal   = {CoRR},
  volume    = {abs/2102.04664},
  year      = {2021}
}

@article{husain2019codesearchnet,
  title={CodeSearchNet challenge: Evaluating the state of semantic code search},
  author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
  journal={arXiv preprint arXiv:1909.09436},
  year={2019}
}
```

### Contributions

Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
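As a quick orientation, the splits and fields documented above can be loaded directly with the Hugging Face `datasets` library. The snippet below is only a sketch: it assumes `datasets` is installed, uses the dataset identifier and field names from this card, and ranks candidates with a toy keyword-overlap score rather than the learned retrieval models evaluated in the CodeXGLUE benchmark.

```python
from datasets import load_dataset

# Dataset identifier, split names, and field names are taken from this card.
ds = load_dataset("code_x_glue_tc_nl_code_search_adv", split="validation")

query_tokens = set("download video from a url".split())

def lexical_overlap(example):
    # Toy relevance score: how many query words appear in the docstring tokens.
    doc_tokens = {tok.lower() for tok in example["docstring_tokens"]}
    return len(query_tokens & doc_tokens)

# Rank a small slice of the split by the toy score and show the best hit.
candidates = ds.select(range(1000))
best = max(candidates, key=lexical_overlap)
print(best["docstring_summary"])
print(best["url"])
```

This lexical ranking is only meant to show how the `docstring_tokens`, `docstring_summary`, and `url` fields relate to the document-retrieval task; a real system would score the full corpus with a trained encoder.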
code_x_glue_tc_nl_code_search_adv
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:found", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:100K<n<1M", "source_datasets:original", "language:code", "language:en", "license:c-uda", "arxiv:2102.04664", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["code", "en"], "license": ["c-uda"], "multilinguality": ["other-programming-languages"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "CodeXGlueTcNlCodeSearchAdv", "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "repo", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "original_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "code_tokens", "sequence": "string"}, {"name": "docstring", "dtype": "string"}, {"name": "docstring_tokens", "sequence": "string"}, {"name": "sha", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "docstring_summary", "dtype": "string"}, {"name": "parameters", "dtype": "string"}, {"name": "return_statement", "dtype": "string"}, {"name": "argument_list", "dtype": "string"}, {"name": "identifier", "dtype": "string"}, {"name": "nwo", "dtype": "string"}, {"name": "score", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 820714108, "num_examples": 251820}, {"name": "validation", "num_bytes": 23468758, "num_examples": 9604}, {"name": "test", "num_bytes": 47433608, "num_examples": 19210}], "download_size": 316235421, "dataset_size": 891616474}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-24T15:15:07+00:00
[ "2102.04664" ]
[ "code", "en" ]
TAGS #task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-100K<n<1M #source_datasets-original #language-code #language-English #license-c-uda #arxiv-2102.04664 #region-us
Dataset Card for "code\_x\_glue\_tc\_nl\_code\_search\_adv" =========================================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Paper: URL ### Dataset Summary CodeXGLUE NL-code-search-Adv dataset, available at URL The dataset we use comes from CodeSearchNet and we filter the dataset as the following: * Remove examples that codes cannot be parsed into an abstract syntax tree. * Remove examples that #tokens of documents is < 3 or >256 * Remove examples that documents contain special tokens (e.g. <img ...> or https:...) * Remove examples that documents are not English. ### Supported Tasks and Leaderboards * 'document-retrieval': The dataset can be used to train a model for retrieving top-k codes from a given English natural language query. ### Languages * Python programming language * English natural language Dataset Structure ----------------- ### Data Instances An example of 'validation' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### default field name: id, type: int32, description: Index of the sample field name: repo, type: string, description: repo: the owner/repo field name: path, type: string, description: path: the full path to the original file field name: func\_name, type: string, description: func\_name: the function or method name field name: original\_string, type: string, description: original\_string: the raw string before tokenization or parsing field name: language, type: string, description: language: the programming language field name: code, type: string, description: code/function: the part of the original\_string that is code field name: code\_tokens, type: Sequence[string], description: code\_tokens/function\_tokens: tokenized version of code field name: docstring, type: string, description: docstring: the top-level comment or docstring, if it exists in the original string field name: docstring\_tokens, type: Sequence[string], description: docstring\_tokens: tokenized version of docstring field name: sha, type: string, description: sha of the file field name: url, type: string, description: url of the file field name: docstring\_summary, type: string, description: Summary of the docstring field name: parameters, type: string, description: parameters of the function field name: return\_statement, type: string, description: return statement field name: argument\_list, type: string, description: list of arguments of the function field name: identifier, type: string, description: identifier field name: nwo, type: string, description: nwo field name: score, type: datasets.Value("float"], description: score for this search ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Data from CodeSearchNet Challenge dataset. #### Who are the source language producers? Software Engineering developers. 
### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE NL-code-search-Adv dataset, available at URL\n\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n\n\n* Remove examples that codes cannot be parsed into an abstract syntax tree.\n* Remove examples that #tokens of documents is < 3 or >256\n* Remove examples that documents contain special tokens (e.g. <img ...> or https:...)\n* Remove examples that documents are not English.", "### Supported Tasks and Leaderboards\n\n\n* 'document-retrieval': The dataset can be used to train a model for retrieving top-k codes from a given English natural language query.", "### Languages\n\n\n* Python programming language\n* English natural language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: repo, type: string, description: repo: the owner/repo\nfield name: path, type: string, description: path: the full path to the original file\nfield name: func\\_name, type: string, description: func\\_name: the function or method name\nfield name: original\\_string, type: string, description: original\\_string: the raw string before tokenization or parsing\nfield name: language, type: string, description: language: the programming language\nfield name: code, type: string, description: code/function: the part of the original\\_string that is code\nfield name: code\\_tokens, type: Sequence[string], description: code\\_tokens/function\\_tokens: tokenized version of code\nfield name: docstring, type: string, description: docstring: the top-level comment or docstring, if it exists in the original string\nfield name: docstring\\_tokens, type: Sequence[string], description: docstring\\_tokens: tokenized version of docstring\nfield name: sha, type: string, description: sha of the file\nfield name: url, type: string, description: url of the file\nfield name: docstring\\_summary, type: string, description: Summary of the docstring\nfield name: parameters, type: string, description: parameters of the function\nfield name: return\\_statement, type: string, description: return statement\nfield name: argument\\_list, type: string, description: list of arguments of the function\nfield name: identifier, type: string, description: identifier\nfield name: nwo, type: string, description: nwo\nfield name: score, type: datasets.Value(\"float\"], description: score for this search", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData from CodeSearchNet Challenge dataset.", "#### Who are the source language producers?\n\n\nSoftware Engineering developers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-100K<n<1M #source_datasets-original #language-code #language-English #license-c-uda #arxiv-2102.04664 #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE NL-code-search-Adv dataset, available at URL\n\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n\n\n* Remove examples that codes cannot be parsed into an abstract syntax tree.\n* Remove examples that #tokens of documents is < 3 or >256\n* Remove examples that documents contain special tokens (e.g. <img ...> or https:...)\n* Remove examples that documents are not English.", "### Supported Tasks and Leaderboards\n\n\n* 'document-retrieval': The dataset can be used to train a model for retrieving top-k codes from a given English natural language query.", "### Languages\n\n\n* Python programming language\n* English natural language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: repo, type: string, description: repo: the owner/repo\nfield name: path, type: string, description: path: the full path to the original file\nfield name: func\\_name, type: string, description: func\\_name: the function or method name\nfield name: original\\_string, type: string, description: original\\_string: the raw string before tokenization or parsing\nfield name: language, type: string, description: language: the programming language\nfield name: code, type: string, description: code/function: the part of the original\\_string that is code\nfield name: code\\_tokens, type: Sequence[string], description: code\\_tokens/function\\_tokens: tokenized version of code\nfield name: docstring, type: string, description: docstring: the top-level comment or docstring, if it exists in the original string\nfield name: docstring\\_tokens, type: Sequence[string], description: docstring\\_tokens: tokenized version of docstring\nfield name: sha, type: string, description: sha of the file\nfield name: url, type: string, description: url of the file\nfield name: docstring\\_summary, type: string, description: Summary of the docstring\nfield name: parameters, type: string, description: parameters of the function\nfield name: return\\_statement, type: string, description: return statement\nfield name: argument\\_list, type: string, description: list of arguments of the function\nfield name: identifier, type: string, description: identifier\nfield name: nwo, type: string, description: nwo\nfield name: score, type: datasets.Value(\"float\"], description: score for this search", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nData from CodeSearchNet Challenge dataset.", "#### Who are the source language producers?\n\n\nSoftware Engineering developers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional 
Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 102, 113, 47, 20, 19, 32, 426, 11, 7, 4, 19, 15, 5, 5, 9, 18, 7, 8, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-100K<n<1M #source_datasets-original #language-code #language-English #license-c-uda #arxiv-2102.04664 #region-us \n### Dataset Summary\n\n\nCodeXGLUE NL-code-search-Adv dataset, available at URL\n\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n\n\n* Remove examples that codes cannot be parsed into an abstract syntax tree.\n* Remove examples that #tokens of documents is < 3 or >256\n* Remove examples that documents contain special tokens (e.g. <img ...> or https:...)\n* Remove examples that documents are not English.### Supported Tasks and Leaderboards\n\n\n* 'document-retrieval': The dataset can be used to train a model for retrieving top-k codes from a given English natural language query.### Languages\n\n\n* Python programming language\n* English natural language\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits." ]
c9f4dd832329726491a591b4b00a6f02da498c76
# Dataset Card for "code_x_glue_tc_text_to_code" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code ### Dataset Summary CodeXGLUE text-to-code dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code The dataset we use is crawled and filtered from Microsoft Documentation, whose document located at https://github.com/MicrosoftDocs/. ### Supported Tasks and Leaderboards - `machine-translation`: The dataset can be used to train a model for generating Java code from an **English** natural language description. ### Languages - Java **programming** language ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` { "code": "boolean function ( ) { return isParsed ; }", "id": 0, "nl": "check if details are parsed . concode_field_sep Container parent concode_elem_sep boolean isParsed concode_elem_sep long offset concode_elem_sep long contentStartPosition concode_elem_sep ByteBuffer deadBytes concode_elem_sep boolean isRead concode_elem_sep long memMapSize concode_elem_sep Logger LOG concode_elem_sep byte[] userType concode_elem_sep String type concode_elem_sep ByteBuffer content concode_elem_sep FileChannel fileChannel concode_field_sep Container getParent concode_elem_sep byte[] getUserType concode_elem_sep void readContent concode_elem_sep long getOffset concode_elem_sep long getContentSize concode_elem_sep void getContent concode_elem_sep void setDeadBytes concode_elem_sep void parse concode_elem_sep void getHeader concode_elem_sep long getSize concode_elem_sep void parseDetails concode_elem_sep String getType concode_elem_sep void _parseDetails concode_elem_sep String getPath concode_elem_sep boolean verify concode_elem_sep void setParent concode_elem_sep void getBox concode_elem_sep boolean isSmallBox" } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. 
#### default |field name| type | description | |----------|------|---------------------------------------------| |id |int32 | Index of the sample | |nl |string| The natural language description of the task| |code |string| The programming source code for the task | ### Data Splits | name |train |validation|test| |-------|-----:|---------:|---:| |default|100000| 2000|2000| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators https://github.com/microsoft, https://github.com/madlag ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Citation Information ``` @article{iyer2018mapping, title={Mapping language to code in programmatic context}, author={Iyer, Srinivasan and Konstas, Ioannis and Cheung, Alvin and Zettlemoyer, Luke}, journal={arXiv preprint arXiv:1808.09588}, year={2018} } ``` ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
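As a quick illustration of the fields above, the sketch below loads the train split with the Hugging Face `datasets` library (assumed to be installed) and separates the natural language request from the class context that is packed into `nl` with the `concode_*` marker tokens, as in the instance shown earlier.

```python
from datasets import load_dataset

ds = load_dataset("code_x_glue_tc_text_to_code", split="train")

sample = ds[0]
# `nl` packs the request together with the surrounding class context; the text
# before the first `concode_field_sep` marker is the natural language description.
description = sample["nl"].split("concode_field_sep")[0].strip()

print("NL  :", description)
print("Code:", sample["code"])
```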
code_x_glue_tc_text_to_code
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:other-programming-languages", "size_categories:100K<n<1M", "source_datasets:original", "language:code", "language:en", "license:c-uda", "text-to-code", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["code", "en"], "license": ["c-uda"], "multilinguality": ["other-programming-languages"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CodeXGlueTcTextToCode", "tags": ["text-to-code"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "nl", "dtype": "string"}, {"name": "code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 96225531, "num_examples": 100000}, {"name": "validation", "num_bytes": 1749743, "num_examples": 2000}, {"name": "test", "num_bytes": 1609298, "num_examples": 2000}], "download_size": 34258354, "dataset_size": 99584572}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-24T15:16:39+00:00
[]
[ "code", "en" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-100K<n<1M #source_datasets-original #language-code #language-English #license-c-uda #text-to-code #region-us
Dataset Card for "code\_x\_glue\_tc\_text\_to\_code" ==================================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL ### Dataset Summary CodeXGLUE text-to-code dataset, available at URL The dataset we use is crawled and filtered from Microsoft Documentation, whose document located at URL ### Supported Tasks and Leaderboards * 'machine-translation': The dataset can be used to train a model for generating Java code from an English natural language description. ### Languages * Java programming language Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### default field name: id, type: int32, description: Index of the sample field name: nl, type: string, description: The natural language description of the task field name: code, type: string, description: The programming source code for the task ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE text-to-code dataset, available at URL\n\n\nThe dataset we use is crawled and filtered from Microsoft Documentation, whose document located at URL", "### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for generating Java code from an English natural language description.", "### Languages\n\n\n* Java programming language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: nl, type: string, description: The natural language description of the task\nfield name: code, type: string, description: The programming source code for the task", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-100K<n<1M #source_datasets-original #language-code #language-English #license-c-uda #text-to-code #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE text-to-code dataset, available at URL\n\n\nThe dataset we use is crawled and filtered from Microsoft Documentation, whose document located at URL", "### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for generating Java code from an English natural language description.", "### Languages\n\n\n* Java programming language\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: nl, type: string, description: The natural language description of the task\nfield name: code, type: string, description: The programming source code for the task", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 86, 42, 40, 16, 18, 32, 58, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-other-programming-languages #size_categories-100K<n<1M #source_datasets-original #language-code #language-English #license-c-uda #text-to-code #region-us \n### Dataset Summary\n\n\nCodeXGLUE text-to-code dataset, available at URL\n\n\nThe dataset we use is crawled and filtered from Microsoft Documentation, whose document located at URL### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for generating Java code from an English natural language description.### Languages\n\n\n* Java programming language\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.#### default\n\n\nfield name: id, type: int32, description: Index of the sample\nfield name: nl, type: string, description: The natural language description of the task\nfield name: code, type: string, description: The programming source code for the task### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nURL URL### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
ae1a41ee2433091f40cf7e803e26170e3587836f
# Dataset Card for "code_x_glue_tt_text_to_text" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Text-Text/text-to-text - **Paper:** https://arxiv.org/abs/2102.04664 ### Dataset Summary CodeXGLUE text-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Text/text-to-text The dataset we use is crawled and filtered from Microsoft Documentation, whose document located at https://github.com/MicrosoftDocs/. ### Supported Tasks and Leaderboards - `machine-translation`: The dataset can be used to train a model for translating Technical documentation between languages. ### Languages da_en, lv_en, no_en, zh_en ## Dataset Structure ### Data Instances #### da_en An example of 'test' looks as follows. ``` { "id": 0, "source": "4 . K\u00f8r modellen , og udgiv den som en webtjeneste .\n", "target": "4 . Run the model , and publish it as a web service .\n" } ``` #### lv_en An example of 'train' looks as follows. ``` { "id": 0, "source": "title : Pakalpojumu objektu izveide\n", "target": "title : Create service objects\n" } ``` #### no_en An example of 'validation' looks as follows. ``` { "id": 0, "source": "2 . \u00c5pne servicevaren du vil definere komponenter fra en stykkliste for .\n", "target": "2 . Open the service item for which you want to set up components from a BOM .\n" } ``` #### zh_en An example of 'validation' looks as follows. ``` { "id": 0, "source": "& # 124 ; MCDUserNotificationReadStateFilterAny & # 124 ; 0 & # 124 ; \u5305\u62ec \u901a\u77e5 , \u800c \u4e0d \u8003\u8651 \u8bfb\u53d6 \u72b6\u6001 \u3002 & # 124 ;\n", "target": "&#124; MCDUserNotificationReadStateFilterAny &#124; 0 &#124; Include notifications regardless of read state . &#124;\n" } ``` ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. 
#### da_en, lv_en, no_en, zh_en |field name| type | description | |----------|------|----------------------------------------| |id |int32 | The index of the sample | |source |string| The source language version of the text| |target |string| The target language version of the text| ### Data Splits |name |train|validation|test| |-----|----:|---------:|---:| |da_en|42701| 1000|1000| |lv_en|18749| 1000|1000| |no_en|44322| 1000|1000| |zh_en|50154| 1000|1000| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators https://github.com/microsoft, https://github.com/madlag ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Citation Information ``` @article{DBLP:journals/corr/abs-2102-04664, author = {Shuai Lu and Daya Guo and Shuo Ren and Junjie Huang and Alexey Svyatkovskiy and Ambrosio Blanco and Colin B. Clement and Dawn Drain and Daxin Jiang and Duyu Tang and Ge Li and Lidong Zhou and Linjun Shou and Long Zhou and Michele Tufano and Ming Gong and Ming Zhou and Nan Duan and Neel Sundaresan and Shao Kun Deng and Shengyu Fu and Shujie Liu}, title = {CodeXGLUE: {A} Machine Learning Benchmark Dataset for Code Understanding and Generation}, journal = {CoRR}, volume = {abs/2102.04664}, year = {2021} } ``` ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
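Each language pair above is exposed as its own configuration, so a translation model for one direction can be trained from a single config. The sketch below assumes the Hugging Face `datasets` library is installed and uses the config, split, and field names listed in this card.

```python
from datasets import load_dataset

# Available configurations: "da_en", "lv_en", "no_en", "zh_en".
ds = load_dataset("code_x_glue_tt_text_to_text", "da_en", split="test")

# Print a few parallel sentence pairs (source language -> English).
for example in ds.select(range(3)):
    print("source:", example["source"].strip())
    print("target:", example["target"].strip())
    print("-" * 40)
```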
code_x_glue_tt_text_to_text
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:da", "language:en", "language:lv", "language:nb", "language:zh", "license:c-uda", "code-documentation-translation", "arxiv:2102.04664", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["da", "en", "lv", "nb", "zh"], "license": ["c-uda"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CodeXGlueTtTextToText", "tags": ["code-documentation-translation"], "dataset_info": [{"config_name": "da_en", "features": [{"name": "id", "dtype": "int32"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8163175, "num_examples": 42701}, {"name": "validation", "num_bytes": 190332, "num_examples": 1000}, {"name": "test", "num_bytes": 190772, "num_examples": 1000}], "download_size": 4322666, "dataset_size": 8544279}, {"config_name": "lv_en", "features": [{"name": "id", "dtype": "int32"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3644111, "num_examples": 18749}, {"name": "validation", "num_bytes": 192511, "num_examples": 1000}, {"name": "test", "num_bytes": 190867, "num_examples": 1000}], "download_size": 1997959, "dataset_size": 4027489}, {"config_name": "no_en", "features": [{"name": "id", "dtype": "int32"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8761755, "num_examples": 44322}, {"name": "validation", "num_bytes": 203815, "num_examples": 1000}, {"name": "test", "num_bytes": 197127, "num_examples": 1000}], "download_size": 4661188, "dataset_size": 9162697}, {"config_name": "zh_en", "features": [{"name": "id", "dtype": "int32"}, {"name": "source", "dtype": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9592148, "num_examples": 50154}, {"name": "validation", "num_bytes": 192147, "num_examples": 1000}, {"name": "test", "num_bytes": 195237, "num_examples": 1000}], "download_size": 4733144, "dataset_size": 9979532}], "configs": [{"config_name": "da_en", "data_files": [{"split": "train", "path": "da_en/train-*"}, {"split": "validation", "path": "da_en/validation-*"}, {"split": "test", "path": "da_en/test-*"}]}, {"config_name": "lv_en", "data_files": [{"split": "train", "path": "lv_en/train-*"}, {"split": "validation", "path": "lv_en/validation-*"}, {"split": "test", "path": "lv_en/test-*"}]}, {"config_name": "no_en", "data_files": [{"split": "train", "path": "no_en/train-*"}, {"split": "validation", "path": "no_en/validation-*"}, {"split": "test", "path": "no_en/test-*"}]}, {"config_name": "zh_en", "data_files": [{"split": "train", "path": "zh_en/train-*"}, {"split": "validation", "path": "zh_en/validation-*"}, {"split": "test", "path": "zh_en/test-*"}]}]}
2024-01-24T15:18:44+00:00
[ "2102.04664" ]
[ "da", "en", "lv", "nb", "zh" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Danish #language-English #language-Latvian #language-Norwegian Bokmål #language-Chinese #license-c-uda #code-documentation-translation #arxiv-2102.04664 #region-us
Dataset Card for "code\_x\_glue\_tt\_text\_to\_text" ==================================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Paper: URL ### Dataset Summary CodeXGLUE text-to-text dataset, available at URL The dataset we use is crawled and filtered from Microsoft Documentation, whose document located at URL ### Supported Tasks and Leaderboards * 'machine-translation': The dataset can be used to train a model for translating Technical documentation between languages. ### Languages da\_en, lv\_en, no\_en, zh\_en Dataset Structure ----------------- ### Data Instances #### da\_en An example of 'test' looks as follows. #### lv\_en An example of 'train' looks as follows. #### no\_en An example of 'validation' looks as follows. #### zh\_en An example of 'validation' looks as follows. ### Data Fields In the following each data field in go is explained for each config. The data fields are the same among all splits. #### da\_en, lv\_en, no\_en, zh\_en field name: id, type: int32, description: The index of the sample field name: source, type: string, description: The source language version of the text field name: target, type: string, description: The target language version of the text ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators URL URL ### Licensing Information Computational Use of Data Agreement (C-UDA) License. ### Contributions Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
[ "### Dataset Summary\n\n\nCodeXGLUE text-to-text dataset, available at URL\n\n\nThe dataset we use is crawled and filtered from Microsoft Documentation, whose document located at URL", "### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for translating Technical documentation between languages.", "### Languages\n\n\nda\\_en, lv\\_en, no\\_en, zh\\_en\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### da\\_en\n\n\nAn example of 'test' looks as follows.", "#### lv\\_en\n\n\nAn example of 'train' looks as follows.", "#### no\\_en\n\n\nAn example of 'validation' looks as follows.", "#### zh\\_en\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### da\\_en, lv\\_en, no\\_en, zh\\_en\n\n\nfield name: id, type: int32, description: The index of the sample\nfield name: source, type: string, description: The source language version of the text\nfield name: target, type: string, description: The target language version of the text", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Danish #language-English #language-Latvian #language-Norwegian Bokmål #language-Chinese #license-c-uda #code-documentation-translation #arxiv-2102.04664 #region-us \n", "### Dataset Summary\n\n\nCodeXGLUE text-to-text dataset, available at URL\n\n\nThe dataset we use is crawled and filtered from Microsoft Documentation, whose document located at URL", "### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for translating Technical documentation between languages.", "### Languages\n\n\nda\\_en, lv\\_en, no\\_en, zh\\_en\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### da\\_en\n\n\nAn example of 'test' looks as follows.", "#### lv\\_en\n\n\nAn example of 'train' looks as follows.", "#### no\\_en\n\n\nAn example of 'validation' looks as follows.", "#### zh\\_en\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.", "#### da\\_en, lv\\_en, no\\_en, zh\\_en\n\n\nfield name: id, type: int32, description: The index of the sample\nfield name: source, type: string, description: The source language version of the text\nfield name: target, type: string, description: The target language version of the text", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL URL", "### Licensing Information\n\n\nComputational Use of Data Agreement (C-UDA) License.", "### Contributions\n\n\nThanks to @madlag (and partly also @ncoop57) for adding this dataset." ]
[ 112, 42, 39, 32, 6, 17, 19, 19, 20, 32, 77, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 8, 20, 25 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Danish #language-English #language-Latvian #language-Norwegian Bokmål #language-Chinese #license-c-uda #code-documentation-translation #arxiv-2102.04664 #region-us \n### Dataset Summary\n\n\nCodeXGLUE text-to-text dataset, available at URL\n\n\nThe dataset we use is crawled and filtered from Microsoft Documentation, whose document located at URL### Supported Tasks and Leaderboards\n\n\n* 'machine-translation': The dataset can be used to train a model for translating Technical documentation between languages.### Languages\n\n\nda\\_en, lv\\_en, no\\_en, zh\\_en\n\n\nDataset Structure\n-----------------### Data Instances#### da\\_en\n\n\nAn example of 'test' looks as follows.#### lv\\_en\n\n\nAn example of 'train' looks as follows.#### no\\_en\n\n\nAn example of 'validation' looks as follows.#### zh\\_en\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nIn the following each data field in go is explained for each config. The data fields are the same among all splits.#### da\\_en, lv\\_en, no\\_en, zh\\_en\n\n\nfield name: id, type: int32, description: The index of the sample\nfield name: source, type: string, description: The source language version of the text\nfield name: target, type: string, description: The target language version of the text### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset" ]
1da4a3dc28080f6613cf9001a6a380fd30ccfbdc
# Dataset Card for "com_qa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://qa.mpi-inf.mpg.de/comqa/](http://qa.mpi-inf.mpg.de/comqa/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** https://doi.org/10.18653/v1/N19-1027 - **Paper:** https://arxiv.org/abs/1809.09528 - **Point of Contact:** [Rishiraj Saha Roy](https://people.mpi-inf.mpg.de/~rsaharo/) - **Size of downloaded dataset files:** 1.67 MB - **Size of the generated dataset:** 1.10 MB - **Total amount of disk used:** 2.78 MB ### Dataset Summary ComQA is a dataset of 11,214 questions, which were collected from WikiAnswers, a community question answering website. By collecting questions from such a site we ensure that the information needs are ones of interest to actual users. Moreover, questions posed there are often cannot be answered by commercial search engines or QA technology, making them more interesting for driving future research compared to those collected from an engine's query log. The dataset contains questions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives, superlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and unanswerable questions (e.g., Who was the first human being on Mars?). Through a large crowdsourcing effort, questions in ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated with its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Wherever the answers are temporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 1.67 MB - **Size of the generated dataset:** 1.10 MB - **Total amount of disk used:** 2.78 MB An example of 'validation' looks as follows. 
```
{
    "answers": ["https://en.wikipedia.org/wiki/north_sea"],
    "cluster_id": "cluster-922",
    "questions": ["what sea separates the scandinavia peninsula from britain?", "which sea separates britain from scandinavia?"]
}
```

### Data Fields

The data fields are the same among all splits.

#### default

- `cluster_id`: a `string` feature.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.

### Data Splits

| name  |train|validation|test|
|-------|----:|---------:|---:|
|default| 3966|       966|2243|

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@inproceedings{abujabal-etal-2019-comqa,
    title = {{ComQA}: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters},
    author = {Abujabal, Abdalghani and Saha Roy, Rishiraj and Yahya, Mohamed and Weikum, Gerhard},
    booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
    month = {jun},
    year = {2019},
    address = {Minneapolis, Minnesota},
    publisher = {Association for Computational Linguistics},
    url = {https://www.aclweb.org/anthology/N19-1027},
    doi = {10.18653/v1/N19-1027},
    pages = {307--317},
}
```

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
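To make the paraphrase-cluster structure concrete, the sketch below loads the train split with the Hugging Face `datasets` library (assumed to be installed) and prints one cluster with its shared answers; the field names follow the list above.

```python
from datasets import load_dataset

ds = load_dataset("com_qa", split="train")

cluster = ds[0]
# Each record is one paraphrase cluster: several question wordings that share
# the same answer set (Wikipedia URLs, or normalized dates/quantities).
print("cluster:", cluster["cluster_id"])
for question in cluster["questions"]:
    print("  Q:", question)
for answer in cluster["answers"]:
    print("  A:", answer)
```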
com_qa
[ "task_categories:question-answering", "language:en", "license:unknown", "arxiv:1809.09528", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": "unknown", "task_categories": ["question-answering"], "paperswithcode_id": "comqa", "pretty_name": "ComQA", "dataset_info": {"features": [{"name": "cluster_id", "dtype": "string"}, {"name": "questions", "sequence": "string"}, {"name": "answers", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 692932, "num_examples": 3966}, {"name": "test", "num_bytes": 271554, "num_examples": 2243}, {"name": "validation", "num_bytes": 131129, "num_examples": 966}], "download_size": 474169, "dataset_size": 1095615}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-02-07T17:22:44+00:00
[ "1809.09528" ]
[ "en" ]
TAGS #task_categories-question-answering #language-English #license-unknown #arxiv-1809.09528 #region-us
Dataset Card for "com\_qa" ========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: URL * Paper: URL * Point of Contact: Rishiraj Saha Roy * Size of downloaded dataset files: 1.67 MB * Size of the generated dataset: 1.10 MB * Total amount of disk used: 2.78 MB ### Dataset Summary ComQA is a dataset of 11,214 questions, which were collected from WikiAnswers, a community question answering website. By collecting questions from such a site we ensure that the information needs are ones of interest to actual users. Moreover, questions posed there are often cannot be answered by commercial search engines or QA technology, making them more interesting for driving future research compared to those collected from an engine's query log. The dataset contains questions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives, superlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and unanswerable questions (e.g., Who was the first human being on Mars?). Through a large crowdsourcing effort, questions in ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated with its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Wherever the answers are temporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 1.67 MB * Size of the generated dataset: 1.10 MB * Total amount of disk used: 2.78 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'cluster\_id': a 'string' feature. * 'questions': a 'list' of 'string' features. * 'answers': a 'list' of 'string' features. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @lewtun, @thomwolf, @mariamabarham, @patrickvonplaten, @albertvillanova for adding this dataset.
[ "### Dataset Summary\n\n\nComQA is a dataset of 11,214 questions, which were collected from WikiAnswers, a community question answering website.\nBy collecting questions from such a site we ensure that the information needs are ones of interest to actual users.\nMoreover, questions posed there are often cannot be answered by commercial search engines or QA technology, making them\nmore interesting for driving future research compared to those collected from an engine's query log. The dataset contains\nquestions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives,\nsuperlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and\nunanswerable questions (e.g., Who was the first human being on Mars?). Through a large crowdsourcing effort, questions\nin ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated\nwith its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Wherever the answers are\ntemporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 1.67 MB\n* Size of the generated dataset: 1.10 MB\n* Total amount of disk used: 2.78 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'cluster\\_id': a 'string' feature.\n* 'questions': a 'list' of 'string' features.\n* 'answers': a 'list' of 'string' features.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @lewtun, @thomwolf, @mariamabarham, @patrickvonplaten, @albertvillanova for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #language-English #license-unknown #arxiv-1809.09528 #region-us \n", "### Dataset Summary\n\n\nComQA is a dataset of 11,214 questions, which were collected from WikiAnswers, a community question answering website.\nBy collecting questions from such a site we ensure that the information needs are ones of interest to actual users.\nMoreover, questions posed there are often cannot be answered by commercial search engines or QA technology, making them\nmore interesting for driving future research compared to those collected from an engine's query log. The dataset contains\nquestions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives,\nsuperlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and\nunanswerable questions (e.g., Who was the first human being on Mars?). Through a large crowdsourcing effort, questions\nin ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated\nwith its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Wherever the answers are\ntemporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 1.67 MB\n* Size of the generated dataset: 1.10 MB\n* Total amount of disk used: 2.78 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'cluster\\_id': a 'string' feature.\n* 'questions': a 'list' of 'string' features.\n* 'answers': a 'list' of 'string' features.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @lewtun, @thomwolf, @mariamabarham, @patrickvonplaten, @albertvillanova for adding this dataset." ]
[ 38, 270, 10, 11, 6, 50, 17, 50, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 40 ]
[ "passage: TAGS\n#task_categories-question-answering #language-English #license-unknown #arxiv-1809.09528 #region-us \n### Dataset Summary\n\n\nComQA is a dataset of 11,214 questions, which were collected from WikiAnswers, a community question answering website.\nBy collecting questions from such a site we ensure that the information needs are ones of interest to actual users.\nMoreover, questions posed there are often cannot be answered by commercial search engines or QA technology, making them\nmore interesting for driving future research compared to those collected from an engine's query log. The dataset contains\nquestions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives,\nsuperlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and\nunanswerable questions (e.g., Who was the first human being on Mars?). Through a large crowdsourcing effort, questions\nin ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated\nwith its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Wherever the answers are\ntemporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 1.67 MB\n* Size of the generated dataset: 1.10 MB\n* Total amount of disk used: 2.78 MB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'cluster\\_id': a 'string' feature.\n* 'questions': a 'list' of 'string' features.\n* 'answers': a 'list' of 'string' features.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process" ]
792db818e8199aff8f77e183caf7d03de451b426
# Dataset Card for "common_gen" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://inklab.usc.edu/CommonGen/index.html](https://inklab.usc.edu/CommonGen/index.html) - **Repository:** https://github.com/INK-USC/CommonGen - **Paper:** [CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning](https://arxiv.org/abs/1911.03705) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.85 MB - **Size of the generated dataset:** 7.21 MB - **Total amount of disk used:** 9.06 MB ### Dataset Summary CommonGen is a constrained text generation task, associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning. Given a set of common concepts; the task is to generate a coherent sentence describing an everyday scenario using these concepts. CommonGen is challenging because it inherently requires 1) relational reasoning using background commonsense knowledge, and 2) compositional generalization ability to work on unseen concept combinations. Our dataset, constructed through a combination of crowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and 50k sentences in total. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 1.85 MB - **Size of the generated dataset:** 7.21 MB - **Total amount of disk used:** 9.06 MB An example of 'train' looks as follows. ``` { "concept_set_idx": 0, "concepts": ["ski", "mountain", "skier"], "target": "Three skiers are skiing on a snowy mountain." } ``` ### Data Fields The data fields are the same among all splits. #### default - `concept_set_idx`: a `int32` feature. - `concepts`: a `list` of `string` features. - `target`: a `string` feature. 
### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|67389| 4018|1497| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The dataset is licensed under [MIT License](https://github.com/INK-USC/CommonGen/blob/master/LICENSE). ### Citation Information ```bib @inproceedings{lin-etal-2020-commongen, title = "{C}ommon{G}en: A Constrained Text Generation Challenge for Generative Commonsense Reasoning", author = "Lin, Bill Yuchen and Zhou, Wangchunshu and Shen, Ming and Zhou, Pei and Bhagavatula, Chandra and Choi, Yejin and Ren, Xiang", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.165", doi = "10.18653/v1/2020.findings-emnlp.165", pages = "1823--1840" } ``` ### Contributions Thanks to [@JetRunner](https://github.com/JetRunner), [@yuchenlin](https://github.com/yuchenlin), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
allenai/common_gen
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:found", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "concepts-to-text", "arxiv:1911.03705", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found", "crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "commongen", "pretty_name": "CommonGen", "tags": ["concepts-to-text"], "dataset_info": {"features": [{"name": "concept_set_idx", "dtype": "int32"}, {"name": "concepts", "sequence": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6724166, "num_examples": 67389}, {"name": "validation", "num_bytes": 408740, "num_examples": 4018}, {"name": "test", "num_bytes": 77518, "num_examples": 1497}], "download_size": 3434865, "dataset_size": 7210424}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-04T07:34:57+00:00
[ "1911.03705" ]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #concepts-to-text #arxiv-1911.03705 #region-us
Dataset Card for "common\_gen" ============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning * Point of Contact: * Size of downloaded dataset files: 1.85 MB * Size of the generated dataset: 7.21 MB * Total amount of disk used: 9.06 MB ### Dataset Summary CommonGen is a constrained text generation task, associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning. Given a set of common concepts; the task is to generate a coherent sentence describing an everyday scenario using these concepts. CommonGen is challenging because it inherently requires 1) relational reasoning using background commonsense knowledge, and 2) compositional generalization ability to work on unseen concept combinations. Our dataset, constructed through a combination of crowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and 50k sentences in total. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 1.85 MB * Size of the generated dataset: 7.21 MB * Total amount of disk used: 9.06 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'concept\_set\_idx': a 'int32' feature. * 'concepts': a 'list' of 'string' features. * 'target': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The dataset is licensed under MIT License. ### Contributions Thanks to @JetRunner, @yuchenlin, @thomwolf, @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nCommonGen is a constrained text generation task, associated with a benchmark dataset,\nto explicitly test machines for the ability of generative commonsense reasoning. Given\na set of common concepts; the task is to generate a coherent sentence describing an\neveryday scenario using these concepts.\n\n\nCommonGen is challenging because it inherently requires 1) relational reasoning using\nbackground commonsense knowledge, and 2) compositional generalization ability to work\non unseen concept combinations. Our dataset, constructed through a combination of\ncrowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and\n50k sentences in total.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 1.85 MB\n* Size of the generated dataset: 7.21 MB\n* Total amount of disk used: 9.06 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'concept\\_set\\_idx': a 'int32' feature.\n* 'concepts': a 'list' of 'string' features.\n* 'target': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe dataset is licensed under MIT License.", "### Contributions\n\n\nThanks to @JetRunner, @yuchenlin, @thomwolf, @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #concepts-to-text #arxiv-1911.03705 #region-us \n", "### Dataset Summary\n\n\nCommonGen is a constrained text generation task, associated with a benchmark dataset,\nto explicitly test machines for the ability of generative commonsense reasoning. Given\na set of common concepts; the task is to generate a coherent sentence describing an\neveryday scenario using these concepts.\n\n\nCommonGen is challenging because it inherently requires 1) relational reasoning using\nbackground commonsense knowledge, and 2) compositional generalization ability to work\non unseen concept combinations. Our dataset, constructed through a combination of\ncrowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and\n50k sentences in total.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 1.85 MB\n* Size of the generated dataset: 7.21 MB\n* Total amount of disk used: 9.06 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'concept\\_set\\_idx': a 'int32' feature.\n* 'concepts': a 'list' of 'string' features.\n* 'target': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe dataset is licensed under MIT License.", "### Contributions\n\n\nThanks to @JetRunner, @yuchenlin, @thomwolf, @lhoestq for adding this dataset." ]
[ 105, 141, 10, 11, 6, 49, 17, 52, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 16, 33 ]
[ "passage: TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #concepts-to-text #arxiv-1911.03705 #region-us \n### Dataset Summary\n\n\nCommonGen is a constrained text generation task, associated with a benchmark dataset,\nto explicitly test machines for the ability of generative commonsense reasoning. Given\na set of common concepts; the task is to generate a coherent sentence describing an\neveryday scenario using these concepts.\n\n\nCommonGen is challenging because it inherently requires 1) relational reasoning using\nbackground commonsense knowledge, and 2) compositional generalization ability to work\non unseen concept combinations. Our dataset, constructed through a combination of\ncrowd-sourcing from AMT and existing caption corpora, consists of 30k concept-sets and\n50k sentences in total.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 1.85 MB\n* Size of the generated dataset: 7.21 MB\n* Total amount of disk used: 9.06 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'concept\\_set\\_idx': a 'int32' feature.\n* 'concepts': a 'list' of 'string' features.\n* 'target': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators" ]
16ea653dd7d6a92f8fd80839466b1c6be1df300a
# Dataset Card for common_language ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://zenodo.org/record/5036977 - **Repository:** https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonLanguage - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems. ### Supported Tasks and Leaderboards The baselines for language-id are available in the SpeechBrain toolkit (see recipes/CommonLanguage): https://github.com/speechbrain/speechbrain ### Languages List of included languages: ``` Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh ``` ## Dataset Structure ### Data Instances A typical data point comprises the `path` to the audio file, and its label `language`. Additional fields include `age`, `client_id`, `gender` and `sentence`. 
```python { 'client_id': 'itln_trn_sp_175', 'path': '/path/common_voice_kpd/Italian/train/itln_trn_sp_175/common_voice_it_18279446.wav', 'audio': {'path': '/path/common_voice_kpd/Italian/train/itln_trn_sp_175/common_voice_it_18279446.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000}, 'sentence': 'Con gli studenti è leggermente simile.', 'age': 'not_defined', 'gender': 'not_defined', 'language': 22 } ``` ### Data Fields `client_id` (`string`): An id for which client (voice) made the recording `path` (`string`): The path to the audio file - `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. `language` (`ClassLabel`): The language of the recording (see the `Languages` section above) `sentence` (`string`): The sentence the user was prompted to speak `age` (`string`): The age of the speaker. `gender` (`string`): The gender of the speaker ### Data Splits The dataset is already balanced and split into train, dev (validation) and test sets. | Name | Train | Dev | Test | |:---------------------------------:|:------:|:------:|:-----:| | **# of utterances** | 177552 | 47104 | 47704 | | **# unique speakers** | 11189 | 1297 | 1322 | | **Total duration, hr** | 30.04 | 7.53 | 7.53 | | **Min duration, sec** | 0.86 | 0.98 | 0.89 | | **Mean duration, sec** | 4.87 | 4.61 | 4.55 | | **Max duration, sec** | 21.72 | 105.67 | 29.83 | | **Duration per language, min** | ~40 | ~10 | ~10 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ## Considerations for Using the Data ### Social Impact of Dataset The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. 
### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations The Mongolian and Ukrainian languages are spelled as "Mangolian" and "Ukranian" in this version of the dataset. ## Additional Information ### Dataset Curators [Ganesh Sinisetty; Pavlo Ruban; Oleksandr Dymov; Mirco Ravanelli](https://zenodo.org/record/5036977#.YdTZ5hPMJ70) ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ``` @dataset{ganesh_sinisetty_2021_5036977, author = {Ganesh Sinisetty and Pavlo Ruban and Oleksandr Dymov and Mirco Ravanelli}, title = {CommonLanguage}, month = jun, year = 2021, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5036977}, url = {https://doi.org/10.5281/zenodo.5036977} } ``` ### Contributions Thanks to [@anton-l](https://github.com/anton-l) for adding this dataset.
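To illustrate the audio-access guidance from the Data Fields section above, here is a small sketch. It assumes the Hugging Face `datasets` library and the `full` configuration named in this card's metadata; downloading the audio archives takes several gigabytes.

```python
from datasets import load_dataset

# "full" is the configuration name given in this card's metadata.
train = load_dataset("common_language", "full", split="train")

# Index the row first, then read the "audio" column, so only this clip is decoded.
row = train[0]
audio = row["audio"]  # dict with "path", "array" and "sampling_rate"
label = train.features["language"].int2str(row["language"])
print(label, audio["sampling_rate"], len(audio["array"]))
```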
common_language
[ "task_categories:audio-classification", "task_ids:speaker-identification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:extended|common_voice", "language:ar", "language:br", "language:ca", "language:cnh", "language:cs", "language:cv", "language:cy", "language:de", "language:dv", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fr", "language:fy", "language:ia", "language:id", "language:it", "language:ja", "language:ka", "language:kab", "language:ky", "language:lv", "language:mn", "language:mt", "language:nl", "language:pl", "language:pt", "language:rm", "language:ro", "language:ru", "language:rw", "language:sah", "language:sl", "language:sv", "language:ta", "language:tr", "language:tt", "language:uk", "language:zh", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ar", "br", "ca", "cnh", "cs", "cv", "cy", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fr", "fy", "ia", "id", "it", "ja", "ka", "kab", "ky", "lv", "mn", "mt", "nl", "pl", "pt", "rm", "ro", "ru", "rw", "sah", "sl", "sv", "ta", "tr", "tt", "uk", "zh"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|common_voice"], "task_categories": ["audio-classification"], "task_ids": ["speaker-identification"], "pretty_name": "Common Language", "language_bcp47": ["fy-NL", "rm-sursilv", "sv-SE", "zh-CN", "zh-HK", "zh-TW"], "dataset_info": {"features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "language", "dtype": {"class_label": {"names": {"0": "Arabic", "1": "Basque", "2": "Breton", "3": "Catalan", "4": "Chinese_China", "5": "Chinese_Hongkong", "6": "Chinese_Taiwan", "7": "Chuvash", "8": "Czech", "9": "Dhivehi", "10": "Dutch", "11": "English", "12": "Esperanto", "13": "Estonian", "14": "French", "15": "Frisian", "16": "Georgian", "17": "German", "18": "Greek", "19": "Hakha_Chin", "20": "Indonesian", "21": "Interlingua", "22": "Italian", "23": "Japanese", "24": "Kabyle", "25": "Kinyarwanda", "26": "Kyrgyz", "27": "Latvian", "28": "Maltese", "29": "Mangolian", "30": "Persian", "31": "Polish", "32": "Portuguese", "33": "Romanian", "34": "Romansh_Sursilvan", "35": "Russian", "36": "Sakha", "37": "Slovenian", "38": "Spanish", "39": "Swedish", "40": "Tamil", "41": "Tatar", "42": "Turkish", "43": "Ukranian", "44": "Welsh"}}}}], "config_name": "full", "splits": [{"name": "train", "num_bytes": 7116761, "num_examples": 22194}, {"name": "validation", "num_bytes": 1855233, "num_examples": 5888}, {"name": "test", "num_bytes": 1877970, "num_examples": 5963}], "download_size": 3761951178, "dataset_size": 10849964}}
2023-06-12T12:29:01+00:00
[]
[ "ar", "br", "ca", "cnh", "cs", "cv", "cy", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fr", "fy", "ia", "id", "it", "ja", "ka", "kab", "ky", "lv", "mn", "mt", "nl", "pl", "pt", "rm", "ro", "ru", "rw", "sah", "sl", "sv", "ta", "tr", "tt", "uk", "zh" ]
TAGS #task_categories-audio-classification #task_ids-speaker-identification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-extended|common_voice #language-Arabic #language-Breton #language-Catalan #language-Hakha Chin #language-Czech #language-Chuvash #language-Welsh #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-French #language-Western Frisian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Italian #language-Japanese #language-Georgian #language-Kabyle #language-Kirghiz #language-Latvian #language-Mongolian #language-Maltese #language-Dutch #language-Polish #language-Portuguese #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Yakut #language-Slovenian #language-Swedish #language-Tamil #language-Turkish #language-Tatar #language-Ukrainian #language-Chinese #license-cc-by-4.0 #region-us
Dataset Card for common\_language ================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems. ### Supported Tasks and Leaderboards The baselines for language-id are available in the SpeechBrain toolkit (see recipes/CommonLanguage): URL ### Languages List of included languages: Dataset Structure ----------------- ### Data Instances A typical data point comprises the 'path' to the audio file, and its label 'language'. Additional fields include 'age', 'client\_id', 'gender' and 'sentence'. ### Data Fields 'client\_id' ('string'): An id for which client (voice) made the recording 'path' ('string'): The path to the audio file * 'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. 'language' ('ClassLabel'): The language of the recording (see the 'Languages' section above) 'sentence' ('string'): The sentence the user was prompted to speak 'age' ('string'): The age of the speaker. 'gender' ('string'): The gender of the speaker ### Data Splits The dataset is already balanced and split into train, dev (validation) and test sets. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ### Discussion of Biases ### Other Known Limitations The Mongolian and Ukrainian languages are spelled as "Mangolian" and "Ukranian" in this version of the dataset. 
Additional Information ---------------------- ### Dataset Curators Ganesh Sinisetty; Pavlo Ruban; Oleksandr Dymov; Mirco Ravanelli ### Licensing Information Creative Commons Attribution 4.0 International ### Contributions Thanks to @anton-l for adding this dataset.
[ "### Dataset Summary\n\n\nThis dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems.", "### Supported Tasks and Leaderboards\n\n\nThe baselines for language-id are available in the SpeechBrain toolkit (see recipes/CommonLanguage):\nURL", "### Languages\n\n\nList of included languages:\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises the 'path' to the audio file, and its label 'language'. Additional fields include 'age', 'client\\_id', 'gender' and 'sentence'.", "### Data Fields\n\n\n'client\\_id' ('string'): An id for which client (voice) made the recording\n\n\n'path' ('string'): The path to the audio file\n\n\n* 'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n\n'language' ('ClassLabel'): The language of the recording (see the 'Languages' section above)\n\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n\n'age' ('string'): The age of the speaker.\n\n\n'gender' ('string'): The gender of the speaker", "### Data Splits\n\n\nThe dataset is already balanced and split into train, dev (validation) and test sets.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "### Discussion of Biases", "### Other Known Limitations\n\n\nThe Mongolian and Ukrainian languages are spelled as \"Mangolian\" and \"Ukranian\" in this version of the dataset.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nGanesh Sinisetty; Pavlo Ruban; Oleksandr Dymov; Mirco Ravanelli", "### Licensing Information\n\n\nCreative Commons Attribution 4.0 International", "### Contributions\n\n\nThanks to @anton-l for adding this dataset." ]
[ "TAGS\n#task_categories-audio-classification #task_ids-speaker-identification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-extended|common_voice #language-Arabic #language-Breton #language-Catalan #language-Hakha Chin #language-Czech #language-Chuvash #language-Welsh #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-French #language-Western Frisian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Italian #language-Japanese #language-Georgian #language-Kabyle #language-Kirghiz #language-Latvian #language-Mongolian #language-Maltese #language-Dutch #language-Polish #language-Portuguese #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Yakut #language-Slovenian #language-Swedish #language-Tamil #language-Turkish #language-Tatar #language-Ukrainian #language-Chinese #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nThis dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems.", "### Supported Tasks and Leaderboards\n\n\nThe baselines for language-id are available in the SpeechBrain toolkit (see recipes/CommonLanguage):\nURL", "### Languages\n\n\nList of included languages:\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises the 'path' to the audio file, and its label 'language'. Additional fields include 'age', 'client\\_id', 'gender' and 'sentence'.", "### Data Fields\n\n\n'client\\_id' ('string'): An id for which client (voice) made the recording\n\n\n'path' ('string'): The path to the audio file\n\n\n* 'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n\n'language' ('ClassLabel'): The language of the recording (see the 'Languages' section above)\n\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n\n'age' ('string'): The age of the speaker.\n\n\n'gender' ('string'): The gender of the speaker", "### Data Splits\n\n\nThe dataset is already balanced and split into train, dev (validation) and test sets.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. 
You agree to not attempt to determine the identity of speakers in the Common Voice dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "### Discussion of Biases", "### Other Known Limitations\n\n\nThe Mongolian and Ukrainian languages are spelled as \"Mangolian\" and \"Ukranian\" in this version of the dataset.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nGanesh Sinisetty; Pavlo Ruban; Oleksandr Dymov; Mirco Ravanelli", "### Licensing Information\n\n\nCreative Commons Attribution 4.0 International", "### Contributions\n\n\nThanks to @anton-l for adding this dataset." ]
[ 348, 74, 38, 17, 55, 294, 34, 7, 4, 10, 10, 5, 5, 9, 52, 41, 8, 45, 27, 11, 18 ]
[ "passage: TAGS\n#task_categories-audio-classification #task_ids-speaker-identification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-extended|common_voice #language-Arabic #language-Breton #language-Catalan #language-Hakha Chin #language-Czech #language-Chuvash #language-Welsh #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-French #language-Western Frisian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Italian #language-Japanese #language-Georgian #language-Kabyle #language-Kirghiz #language-Latvian #language-Mongolian #language-Maltese #language-Dutch #language-Polish #language-Portuguese #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Yakut #language-Slovenian #language-Swedish #language-Tamil #language-Turkish #language-Tatar #language-Ukrainian #language-Chinese #license-cc-by-4.0 #region-us \n### Dataset Summary\n\n\nThis dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems.### Supported Tasks and Leaderboards\n\n\nThe baselines for language-id are available in the SpeechBrain toolkit (see recipes/CommonLanguage):\nURL### Languages\n\n\nList of included languages:\n\n\nDataset Structure\n-----------------", "passage: ### Data Instances\n\n\nA typical data point comprises the 'path' to the audio file, and its label 'language'. Additional fields include 'age', 'client\\_id', 'gender' and 'sentence'.### Data Fields\n\n\n'client\\_id' ('string'): An id for which client (voice) made the recording\n\n\n'path' ('string'): The path to the audio file\n\n\n* 'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n\n'language' ('ClassLabel'): The language of the recording (see the 'Languages' section above)\n\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n\n'age' ('string'): The age of the speaker.\n\n\n'gender' ('string'): The gender of the speaker### Data Splits\n\n\nThe dataset is already balanced and split into train, dev (validation) and test sets.\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. 
You agree to not attempt to determine the identity of speakers in the Common Voice dataset.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.### Discussion of Biases" ]
1536952d0944a866207ab093a34618207a5c4b3d
# Dataset Card for common_voice <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Deprecated:</b> Dataset "common_voice" is deprecated and will soon be deleted. Use datasets under <a href="https://huggingface.co/mozilla-foundation">mozilla-foundation</a> organisation instead. For example, you can load <a href="https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0">Common Voice 13</a> dataset via <code>load_dataset("mozilla-foundation/common_voice_13_0", "en")</code></p> </div> ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://commonvoice.mozilla.org/en/datasets - **Repository:** https://github.com/common-voice/common-voice - **Paper:** https://commonvoice.mozilla.org/en/datasets - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help improve the accuracy of speech recognition engines. The dataset currently consists of 7,335 validated hours in 60 languages, but we're always adding more voices and languages. Take a look at our Languages page to request a language or start contributing. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages English ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, called path, and its sentence. Additional fields include accent, age, client_id, up_votes, down_votes, gender, locale and segment.
```
{'accent': 'netherlands',
 'age': 'fourties',
 'client_id': 'bbbcb732e0f422150c30ff3654bbab572e2a617da107bca22ff8b89ab2e4f124d03b6a92c48322862f60bd0179ae07baf0f9b4f9c4e11d581e0cec70f703ba54',
 'down_votes': 0,
 'gender': 'male',
 'locale': 'nl',
 'path': 'nl/clips/common_voice_nl_23522441.mp3',
 'segment': "''",
 'sentence': 'Ik vind dat een dubieuze procedure.',
 'up_votes': 2,
 'audio': {'path': 'nl/clips/common_voice_nl_23522441.mp3',
           'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
           'sampling_rate': 48000}}
```

### Data Fields

`client_id`: An id for which client (voice) made the recording

`path`: The path to the audio file

`audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.

`sentence`: The sentence the user was prompted to speak

`up_votes`: How many upvotes the audio file has received from reviewers

`down_votes`: How many downvotes the audio file has received from reviewers

`age`: The age of the speaker

`gender`: The gender of the speaker

`accent`: Accent of the speaker

`locale`: The locale of the speaker

`segment`: Usually an empty field

### Data Splits

The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.

The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality. The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that the data is of low quality. The reported data is data that has been reported, for different reasons. The other data is data that has not yet been reviewed. The dev, test and train splits are all data that has been reviewed, deemed of high quality and split into dev, test and train.

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)

### Citation Information

```
@inproceedings{commonvoice:2020,
  author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
  title = {Common Voice: A Massively-Multilingual Speech Corpus},
  booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
  pages = {4211--4215},
  year = 2020
}
```

### Contributions

Thanks to [@BirgerMoell](https://github.com/BirgerMoell) for adding this dataset.
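Complementing the `audio` field note above, a minimal sketch of resampling the audio column on the fly with `datasets.Audio`; the 16 kHz target rate is an assumed, model-dependent choice:

```python
from datasets import load_dataset, Audio

cv = load_dataset("common_voice", "nl", split="train")

# Clips are stored at 48 kHz; casting the column makes each clip decode
# and resample to 16 kHz lazily, only when its row is accessed.
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))
print(cv[0]["audio"]["sampling_rate"])  # 16000
```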
common_voice
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:extended|common_voice", "language:ab", "language:ar", "language:as", "language:br", "language:ca", "language:cnh", "language:cs", "language:cv", "language:cy", "language:de", "language:dv", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fr", "language:fy", "language:ga", "language:hi", "language:hsb", "language:hu", "language:ia", "language:id", "language:it", "language:ja", "language:ka", "language:kab", "language:ky", "language:lg", "language:lt", "language:lv", "language:mn", "language:mt", "language:nl", "language:or", "language:pa", "language:pl", "language:pt", "language:rm", "language:ro", "language:ru", "language:rw", "language:sah", "language:sl", "language:sv", "language:ta", "language:th", "language:tr", "language:tt", "language:uk", "language:vi", "language:vot", "language:zh", "license:cc0-1.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ab", "ar", "as", "br", "ca", "cnh", "cs", "cv", "cy", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "hi", "hsb", "hu", "ia", "id", "it", "ja", "ka", "kab", "ky", "lg", "lt", "lv", "mn", "mt", "nl", "or", "pa", "pl", "pt", "rm", "ro", "ru", "rw", "sah", "sl", "sv", "ta", "th", "tr", "tt", "uk", "vi", "vot", "zh"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M", "10K<n<100K", "1K<n<10K", "n<1K"], "source_datasets": ["extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "paperswithcode_id": "common-voice", "pretty_name": "Common Voice", "config_names": ["ab", "ar", "as", "br", "ca", "cnh", "cs", "cv", "cy", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy-NL", "ga-IE", "hi", "hsb", "hu", "ia", "id", "it", "ja", "ka", "kab", "ky", "lg", "lt", "lv", "mn", "mt", "nl", "or", "pa-IN", "pl", "pt", "rm-sursilv", "rm-vallader", "ro", "ru", "rw", "sah", "sl", "sv-SE", "ta", "th", "tr", "tt", "uk", "vi", "vot", "zh-CN", "zh-HK", "zh-TW"], "language_bcp47": ["fy-NL", "ga-IE", "pa-IN", "rm-sursilv", "rm-vallader", "sv-SE", "zh-CN", "zh-HK", "zh-TW"], "viewer": false, "dataset_info": [{"config_name": "ab", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1295622, "num_examples": 22}, {"name": "test", "num_bytes": 411844, "num_examples": 9}, {"name": "validation"}, {"name": "other", "num_bytes": 40023390, "num_examples": 752}, {"name": "validated", "num_bytes": 1707426, "num_examples": 31}, {"name": "invalidated", "num_bytes": 361626, "num_examples": 8}], "download_size": 41038412, "dataset_size": 43799908}, {"config_name": "ar", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 359335168, "num_examples": 14227}, {"name": "test", "num_bytes": 237546641, "num_examples": 7622}, {"name": "validation", "num_bytes": 209606861, "num_examples": 7517}, {"name": "other", "num_bytes": 515822404, "num_examples": 18283}, {"name": "validated", "num_bytes": 1182522872, "num_examples": 43291}, {"name": "invalidated", "num_bytes": 194805036, "num_examples": 6333}], "download_size": 1756264615, "dataset_size": 2699638982}, {"config_name": "as", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", 
"dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11442279, "num_examples": 270}, {"name": "test", "num_bytes": 5071343, "num_examples": 110}, {"name": "validation", "num_bytes": 5480156, "num_examples": 124}, {"name": "other"}, {"name": "validated", "num_bytes": 21993698, "num_examples": 504}, {"name": "invalidated", "num_bytes": 886145, "num_examples": 31}], "download_size": 22226465, "dataset_size": 44873621}, {"config_name": "br", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62238289, "num_examples": 2780}, {"name": "test", "num_bytes": 54461339, "num_examples": 2087}, {"name": "validation", "num_bytes": 46995570, "num_examples": 1997}, {"name": "other", "num_bytes": 269858143, "num_examples": 10912}, {"name": "validated", "num_bytes": 203503622, "num_examples": 8560}, {"name": "invalidated", "num_bytes": 20861017, "num_examples": 623}], "download_size": 465276982, "dataset_size": 657917980}, {"config_name": "ca", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12966939466, "num_examples": 285584}, {"name": "test", "num_bytes": 745761890, "num_examples": 15724}, {"name": "validation", "num_bytes": 716442038, "num_examples": 15724}, {"name": "other", "num_bytes": 2693542910, "num_examples": 64446}, {"name": "validated", "num_bytes": 18115833966, "num_examples": 416701}, {"name": "invalidated", "num_bytes": 850402888, "num_examples": 18846}], "download_size": 20743110341, "dataset_size": 36088923158}, {"config_name": "cnh", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18866674, "num_examples": 807}, {"name": "test", "num_bytes": 24675321, "num_examples": 752}, {"name": "validation", "num_bytes": 22162315, "num_examples": 756}, {"name": "other", "num_bytes": 84878963, "num_examples": 2934}, {"name": "validated", "num_bytes": 69330148, "num_examples": 2432}, {"name": "invalidated", "num_bytes": 13642724, "num_examples": 433}], "download_size": 161331331, "dataset_size": 233556145}, {"config_name": "cs", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", 
"dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 215205282, "num_examples": 5655}, {"name": "test", "num_bytes": 148499476, "num_examples": 4144}, {"name": "validation", "num_bytes": 148312130, "num_examples": 4118}, {"name": "other", "num_bytes": 282225475, "num_examples": 7475}, {"name": "validated", "num_bytes": 1019817024, "num_examples": 30431}, {"name": "invalidated", "num_bytes": 24717823, "num_examples": 685}], "download_size": 1271909933, "dataset_size": 1838777210}, {"config_name": "cv", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31649510, "num_examples": 931}, {"name": "test", "num_bytes": 32513061, "num_examples": 788}, {"name": "validation", "num_bytes": 28429779, "num_examples": 818}, {"name": "other", "num_bytes": 288294623, "num_examples": 6927}, {"name": "validated", "num_bytes": 126717875, "num_examples": 3496}, {"name": "invalidated", "num_bytes": 57923138, "num_examples": 1282}], "download_size": 439329081, "dataset_size": 565527986}, {"config_name": "cy", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 271642649, "num_examples": 6839}, {"name": "test", "num_bytes": 206865596, "num_examples": 4820}, {"name": "validation", "num_bytes": 201813388, "num_examples": 4776}, {"name": "other", "num_bytes": 688469886, "num_examples": 17919}, {"name": "validated", "num_bytes": 2763112391, "num_examples": 72984}, {"name": "invalidated", "num_bytes": 146874576, "num_examples": 3648}], "download_size": 3434474658, "dataset_size": 4278778486}, {"config_name": "de", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11463160619, "num_examples": 246525}, {"name": "test", "num_bytes": 744617681, "num_examples": 15588}, {"name": "validation", "num_bytes": 729559862, "num_examples": 15588}, {"name": "other", "num_bytes": 
464513461, "num_examples": 10095}, {"name": "validated", "num_bytes": 22402489041, "num_examples": 565186}, {"name": "invalidated", "num_bytes": 1440604803, "num_examples": 32789}], "download_size": 23283812097, "dataset_size": 37244945467}, {"config_name": "dv", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 118576140, "num_examples": 2680}, {"name": "test", "num_bytes": 94281409, "num_examples": 2202}, {"name": "validation", "num_bytes": 94117088, "num_examples": 2077}, {"name": "other"}, {"name": "validated", "num_bytes": 528571107, "num_examples": 11866}, {"name": "invalidated", "num_bytes": 37694847, "num_examples": 840}], "download_size": 540488041, "dataset_size": 873240591}, {"config_name": "el", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80759076, "num_examples": 2316}, {"name": "test", "num_bytes": 53820491, "num_examples": 1522}, {"name": "validation", "num_bytes": 44818565, "num_examples": 1401}, {"name": "other", "num_bytes": 186861175, "num_examples": 5659}, {"name": "validated", "num_bytes": 204446790, "num_examples": 5996}, {"name": "invalidated", "num_bytes": 6023769, "num_examples": 185}], "download_size": 381570611, "dataset_size": 576729866}, {"config_name": "en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26088826658, "num_examples": 564337}, {"name": "test", "num_bytes": 758718688, "num_examples": 16164}, {"name": "validation", "num_bytes": 795638801, "num_examples": 16164}, {"name": "other", "num_bytes": 5796244022, "num_examples": 169895}, {"name": "validated", "num_bytes": 48425872575, "num_examples": 1224864}, {"name": "invalidated", "num_bytes": 9122973965, "num_examples": 189562}], "download_size": 60613063630, "dataset_size": 90988274709}, {"config_name": "eo", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", 
"dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 993655930, "num_examples": 19587}, {"name": "test", "num_bytes": 420153812, "num_examples": 8969}, {"name": "validation", "num_bytes": 391427586, "num_examples": 8987}, {"name": "other", "num_bytes": 142476819, "num_examples": 2946}, {"name": "validated", "num_bytes": 2603249289, "num_examples": 58094}, {"name": "invalidated", "num_bytes": 238105462, "num_examples": 4736}], "download_size": 2883560869, "dataset_size": 4789068898}, {"config_name": "es", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6918333205, "num_examples": 161813}, {"name": "test", "num_bytes": 754049291, "num_examples": 15089}, {"name": "validation", "num_bytes": 735558084, "num_examples": 15089}, {"name": "other", "num_bytes": 5528972205, "num_examples": 144791}, {"name": "validated", "num_bytes": 9623788388, "num_examples": 236314}, {"name": "invalidated", "num_bytes": 1664876264, "num_examples": 40640}], "download_size": 16188844718, "dataset_size": 25225577437}, {"config_name": "et", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 161124199, "num_examples": 2966}, {"name": "test", "num_bytes": 133183135, "num_examples": 2509}, {"name": "validation", "num_bytes": 137604813, "num_examples": 2507}, {"name": "other", "num_bytes": 30339130, "num_examples": 569}, {"name": "validated", "num_bytes": 573417188, "num_examples": 10683}, {"name": "invalidated", "num_bytes": 193019544, "num_examples": 3557}], "download_size": 767174465, "dataset_size": 1228688009}, {"config_name": "eu", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 317322801, "num_examples": 7505}, {"name": "test", "num_bytes": 238866501, "num_examples": 5172}, {"name": "validation", "num_bytes": 228150083, "num_examples": 5172}, {"name": "other", "num_bytes": 988079897, "num_examples": 23570}, {"name": "validated", "num_bytes": 2621488299, "num_examples": 63009}, {"name": "invalidated", "num_bytes": 208553909, "num_examples": 5387}], "download_size": 3664586106, "dataset_size": 4602461490}, {"config_name": "fa", "features": [{"name": "client_id", "dtype": "string"}, {"name": 
"path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 239255087, "num_examples": 7593}, {"name": "test", "num_bytes": 217939210, "num_examples": 5213}, {"name": "validation", "num_bytes": 196558067, "num_examples": 5213}, {"name": "other", "num_bytes": 737017546, "num_examples": 22510}, {"name": "validated", "num_bytes": 8120181903, "num_examples": 251659}, {"name": "invalidated", "num_bytes": 499570226, "num_examples": 11698}], "download_size": 8884585819, "dataset_size": 10010522039}, {"config_name": "fi", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16017393, "num_examples": 460}, {"name": "test", "num_bytes": 16117529, "num_examples": 428}, {"name": "validation", "num_bytes": 15471757, "num_examples": 415}, {"name": "other", "num_bytes": 5836400, "num_examples": 149}, {"name": "validated", "num_bytes": 47669391, "num_examples": 1305}, {"name": "invalidated", "num_bytes": 2228215, "num_examples": 59}], "download_size": 49882909, "dataset_size": 103340685}, {"config_name": "fr", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12439892070, "num_examples": 298982}, {"name": "test", "num_bytes": 733943163, "num_examples": 15763}, {"name": "validation", "num_bytes": 703801114, "num_examples": 15763}, {"name": "other", "num_bytes": 117998889, "num_examples": 3222}, {"name": "validated", "num_bytes": 17921836252, "num_examples": 461004}, {"name": "invalidated", "num_bytes": 1794149368, "num_examples": 40351}], "download_size": 19130141984, "dataset_size": 33711620856}, {"config_name": "fy-NL", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 159116360, "num_examples": 3927}, {"name": "test", "num_bytes": 126913262, "num_examples": 3020}, {"name": "validation", "num_bytes": 112288554, "num_examples": 2790}, {"name": "other", 
"num_bytes": 893887467, "num_examples": 21569}, {"name": "validated", "num_bytes": 429651922, "num_examples": 10495}, {"name": "invalidated", "num_bytes": 38985422, "num_examples": 1031}], "download_size": 1237743070, "dataset_size": 1760842987}, {"config_name": "ga-IE", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15396820, "num_examples": 541}, {"name": "test", "num_bytes": 16611739, "num_examples": 506}, {"name": "validation", "num_bytes": 14897739, "num_examples": 497}, {"name": "other", "num_bytes": 61948768, "num_examples": 2130}, {"name": "validated", "num_bytes": 93371649, "num_examples": 3352}, {"name": "invalidated", "num_bytes": 10993268, "num_examples": 409}], "download_size": 156553447, "dataset_size": 213219983}, {"config_name": "hi", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4860737, "num_examples": 157}, {"name": "test", "num_bytes": 4728043, "num_examples": 127}, {"name": "validation", "num_bytes": 5569352, "num_examples": 135}, {"name": "other", "num_bytes": 4176110, "num_examples": 139}, {"name": "validated", "num_bytes": 15158052, "num_examples": 419}, {"name": "invalidated", "num_bytes": 2801051, "num_examples": 60}], "download_size": 21424045, "dataset_size": 37293345}, {"config_name": "hsb", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 43049910, "num_examples": 808}, {"name": "test", "num_bytes": 20929094, "num_examples": 387}, {"name": "validation", "num_bytes": 8769458, "num_examples": 172}, {"name": "other", "num_bytes": 3173841, "num_examples": 62}, {"name": "validated", "num_bytes": 72748422, "num_examples": 1367}, {"name": "invalidated", "num_bytes": 5589972, "num_examples": 227}], "download_size": 79362060, "dataset_size": 154260697}, {"config_name": "hu", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": 
"string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 126163153, "num_examples": 3348}, {"name": "test", "num_bytes": 57056435, "num_examples": 1649}, {"name": "validation", "num_bytes": 50306925, "num_examples": 1434}, {"name": "other", "num_bytes": 12051094, "num_examples": 295}, {"name": "validated", "num_bytes": 234307671, "num_examples": 6457}, {"name": "invalidated", "num_bytes": 5881521, "num_examples": 169}], "download_size": 242758708, "dataset_size": 485766799}, {"config_name": "ia", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 96577153, "num_examples": 3477}, {"name": "test", "num_bytes": 33204678, "num_examples": 899}, {"name": "validation", "num_bytes": 67436779, "num_examples": 1601}, {"name": "other", "num_bytes": 30937041, "num_examples": 1095}, {"name": "validated", "num_bytes": 197248304, "num_examples": 5978}, {"name": "invalidated", "num_bytes": 6769573, "num_examples": 192}], "download_size": 226499645, "dataset_size": 432173528}, {"config_name": "id", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63515863, "num_examples": 2130}, {"name": "test", "num_bytes": 60711104, "num_examples": 1844}, {"name": "validation", "num_bytes": 56963520, "num_examples": 1835}, {"name": "other", "num_bytes": 206578628, "num_examples": 6782}, {"name": "validated", "num_bytes": 272570942, "num_examples": 8696}, {"name": "invalidated", "num_bytes": 16566129, "num_examples": 470}], "download_size": 475918233, "dataset_size": 676906186}, {"config_name": "it", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2555546829, "num_examples": 58015}, {"name": "test", "num_bytes": 656285877, "num_examples": 12928}, {"name": "validation", "num_bytes": 621955330, "num_examples": 12928}, {"name": "other", "num_bytes": 671213467, "num_examples": 14549}, {"name": "validated", "num_bytes": 4552252754, "num_examples": 102579}, {"name": "invalidated", "num_bytes": 564610354, "num_examples": 12189}], "download_size": 5585781573, "dataset_size": 9621864611}, {"config_name": "ja", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", 
"dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27600264, "num_examples": 722}, {"name": "test", "num_bytes": 26475556, "num_examples": 632}, {"name": "validation", "num_bytes": 22098940, "num_examples": 586}, {"name": "other", "num_bytes": 34588931, "num_examples": 885}, {"name": "validated", "num_bytes": 106916400, "num_examples": 3072}, {"name": "invalidated", "num_bytes": 17819020, "num_examples": 504}], "download_size": 152879796, "dataset_size": 235499111}, {"config_name": "ka", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 47790695, "num_examples": 1058}, {"name": "test", "num_bytes": 30301524, "num_examples": 656}, {"name": "validation", "num_bytes": 24951079, "num_examples": 527}, {"name": "other", "num_bytes": 2144603, "num_examples": 44}, {"name": "validated", "num_bytes": 104135978, "num_examples": 2275}, {"name": "invalidated", "num_bytes": 7004160, "num_examples": 139}], "download_size": 104280554, "dataset_size": 216328039}, {"config_name": "kab", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3219289101, "num_examples": 120530}, {"name": "test", "num_bytes": 446453041, "num_examples": 14622}, {"name": "validation", "num_bytes": 414159937, "num_examples": 14622}, {"name": "other", "num_bytes": 2282481767, "num_examples": 88021}, {"name": "validated", "num_bytes": 15310455176, "num_examples": 573718}, {"name": "invalidated", "num_bytes": 581587104, "num_examples": 18134}], "download_size": 17171606918, "dataset_size": 22254426126}, {"config_name": "ky", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 75460488, "num_examples": 1955}, {"name": "test", "num_bytes": 57116561, "num_examples": 1503}, {"name": "validation", "num_bytes": 61393867, "num_examples": 1511}, {"name": "other", "num_bytes": 258081579, "num_examples": 7223}, {"name": 
"validated", "num_bytes": 355742823, "num_examples": 9236}, {"name": "invalidated", "num_bytes": 41007711, "num_examples": 926}], "download_size": 579440853, "dataset_size": 848803029}, {"config_name": "lg", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 46910479, "num_examples": 1250}, {"name": "test", "num_bytes": 26951803, "num_examples": 584}, {"name": "validation", "num_bytes": 16709367, "num_examples": 384}, {"name": "other", "num_bytes": 111180838, "num_examples": 3110}, {"name": "validated", "num_bytes": 90606863, "num_examples": 2220}, {"name": "invalidated", "num_bytes": 14069959, "num_examples": 290}], "download_size": 208197149, "dataset_size": 306429309}, {"config_name": "lt", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 34605356, "num_examples": 931}, {"name": "test", "num_bytes": 19940391, "num_examples": 466}, {"name": "validation", "num_bytes": 10462851, "num_examples": 244}, {"name": "other", "num_bytes": 71150206, "num_examples": 1629}, {"name": "validated", "num_bytes": 65138550, "num_examples": 1644}, {"name": "invalidated", "num_bytes": 4414780, "num_examples": 102}], "download_size": 135299706, "dataset_size": 205712134}, {"config_name": "lv", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 67269173, "num_examples": 2552}, {"name": "test", "num_bytes": 56937435, "num_examples": 1882}, {"name": "validation", "num_bytes": 55289058, "num_examples": 2002}, {"name": "other", "num_bytes": 40259801, "num_examples": 1560}, {"name": "validated", "num_bytes": 179726893, "num_examples": 6444}, {"name": "invalidated", "num_bytes": 4383319, "num_examples": 143}], "download_size": 208307691, "dataset_size": 403865679}, {"config_name": "mn", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": 
"string"}], "splits": [{"name": "train", "num_bytes": 89913910, "num_examples": 2183}, {"name": "test", "num_bytes": 86737041, "num_examples": 1862}, {"name": "validation", "num_bytes": 82343275, "num_examples": 1837}, {"name": "other", "num_bytes": 146365394, "num_examples": 3272}, {"name": "validated", "num_bytes": 327264827, "num_examples": 7487}, {"name": "invalidated", "num_bytes": 31764232, "num_examples": 667}], "download_size": 486369317, "dataset_size": 764388679}, {"config_name": "mt", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 73850815, "num_examples": 2036}, {"name": "test", "num_bytes": 66520195, "num_examples": 1617}, {"name": "validation", "num_bytes": 56412066, "num_examples": 1516}, {"name": "other", "num_bytes": 220666971, "num_examples": 5714}, {"name": "validated", "num_bytes": 218212969, "num_examples": 5747}, {"name": "invalidated", "num_bytes": 12328068, "num_examples": 314}], "download_size": 425114242, "dataset_size": 647991084}, {"config_name": "nl", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 321946148, "num_examples": 9460}, {"name": "test", "num_bytes": 205287443, "num_examples": 5708}, {"name": "validation", "num_bytes": 186095353, "num_examples": 4938}, {"name": "other", "num_bytes": 801418, "num_examples": 27}, {"name": "validated", "num_bytes": 1710636990, "num_examples": 52488}, {"name": "invalidated", "num_bytes": 115133112, "num_examples": 3308}], "download_size": 1741827548, "dataset_size": 2539900464}, {"config_name": "or", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16067910, "num_examples": 388}, {"name": "test", "num_bytes": 4270651, "num_examples": 98}, {"name": "validation", "num_bytes": 5485937, "num_examples": 129}, {"name": "other", "num_bytes": 177775963, "num_examples": 4302}, {"name": "validated", "num_bytes": 25824418, "num_examples": 615}, {"name": "invalidated", "num_bytes": 2701922, "num_examples": 62}], "download_size": 199077358, "dataset_size": 232126801}, {"config_name": "pa-IN", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": 
"sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7572499, "num_examples": 211}, {"name": "test", "num_bytes": 4375532, "num_examples": 116}, {"name": "validation", "num_bytes": 1702492, "num_examples": 44}, {"name": "other", "num_bytes": 56683312, "num_examples": 1411}, {"name": "validated", "num_bytes": 13650443, "num_examples": 371}, {"name": "invalidated", "num_bytes": 1690766, "num_examples": 43}], "download_size": 69748265, "dataset_size": 85675044}, {"config_name": "pl", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 273394509, "num_examples": 7468}, {"name": "test", "num_bytes": 205047541, "num_examples": 5153}, {"name": "validation", "num_bytes": 195917307, "num_examples": 5153}, {"name": "other", "num_bytes": 442144781, "num_examples": 12848}, {"name": "validated", "num_bytes": 3150860197, "num_examples": 90791}, {"name": "invalidated", "num_bytes": 180801918, "num_examples": 4601}], "download_size": 3537012341, "dataset_size": 4448166253}, {"config_name": "pt", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 231451724, "num_examples": 6514}, {"name": "test", "num_bytes": 180108694, "num_examples": 4641}, {"name": "validation", "num_bytes": 165966139, "num_examples": 4592}, {"name": "other", "num_bytes": 283497435, "num_examples": 8390}, {"name": "validated", "num_bytes": 1480529669, "num_examples": 41584}, {"name": "invalidated", "num_bytes": 67948392, "num_examples": 1740}], "download_size": 1704252567, "dataset_size": 2409502053}, {"config_name": "rm-sursilv", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62396326, "num_examples": 1384}, {"name": "test", "num_bytes": 51707733, "num_examples": 1194}, {"name": "validation", "num_bytes": 52114252, "num_examples": 1205}, {"name": "other", "num_bytes": 93351293, "num_examples": 2102}, {"name": "validated", "num_bytes": 166218231, "num_examples": 3783}, 
{"name": "invalidated", "num_bytes": 30593270, "num_examples": 639}], "download_size": 275950479, "dataset_size": 456381105}, {"config_name": "rm-vallader", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29528457, "num_examples": 574}, {"name": "test", "num_bytes": 18805466, "num_examples": 378}, {"name": "validation", "num_bytes": 17012341, "num_examples": 357}, {"name": "other", "num_bytes": 36890435, "num_examples": 727}, {"name": "validated", "num_bytes": 65711922, "num_examples": 1316}, {"name": "invalidated", "num_bytes": 9356204, "num_examples": 374}], "download_size": 108113989, "dataset_size": 177304825}, {"config_name": "ro", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 107235430, "num_examples": 3399}, {"name": "test", "num_bytes": 60106568, "num_examples": 1778}, {"name": "validation", "num_bytes": 30358457, "num_examples": 858}, {"name": "other", "num_bytes": 65805210, "num_examples": 1945}, {"name": "validated", "num_bytes": 197820619, "num_examples": 6039}, {"name": "invalidated", "num_bytes": 11108104, "num_examples": 485}], "download_size": 261978702, "dataset_size": 472434388}, {"config_name": "ru", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 686168722, "num_examples": 15481}, {"name": "test", "num_bytes": 385349488, "num_examples": 8007}, {"name": "validation", "num_bytes": 361164462, "num_examples": 7963}, {"name": "other", "num_bytes": 450644862, "num_examples": 10247}, {"name": "validated", "num_bytes": 3212213931, "num_examples": 74256}, {"name": "invalidated", "num_bytes": 145739451, "num_examples": 3056}], "download_size": 3655676916, "dataset_size": 5241280916}, {"config_name": "rw", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", 
"num_bytes": 21645788973, "num_examples": 515197}, {"name": "test", "num_bytes": 707959382, "num_examples": 15724}, {"name": "validation", "num_bytes": 698662384, "num_examples": 15032}, {"name": "other", "num_bytes": 923146896, "num_examples": 22923}, {"name": "validated", "num_bytes": 35011249432, "num_examples": 832929}, {"name": "invalidated", "num_bytes": 7969286423, "num_examples": 206790}], "download_size": 42545189583, "dataset_size": 66956093490}, {"config_name": "sah", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 68286985, "num_examples": 1442}, {"name": "test", "num_bytes": 38534020, "num_examples": 757}, {"name": "validation", "num_bytes": 17900397, "num_examples": 405}, {"name": "other", "num_bytes": 62594222, "num_examples": 1275}, {"name": "validated", "num_bytes": 124800352, "num_examples": 2606}, {"name": "invalidated", "num_bytes": 3594160, "num_examples": 66}], "download_size": 181245626, "dataset_size": 315710136}, {"config_name": "sl", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 66122967, "num_examples": 2038}, {"name": "test", "num_bytes": 26872195, "num_examples": 881}, {"name": "validation", "num_bytes": 16353097, "num_examples": 556}, {"name": "other", "num_bytes": 79268518, "num_examples": 2502}, {"name": "validated", "num_bytes": 148371273, "num_examples": 4669}, {"name": "invalidated", "num_bytes": 3048301, "num_examples": 92}], "download_size": 222751292, "dataset_size": 340036351}, {"config_name": "sv-SE", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62727263, "num_examples": 2331}, {"name": "test", "num_bytes": 59127381, "num_examples": 2027}, {"name": "validation", "num_bytes": 53846355, "num_examples": 2019}, {"name": "other", "num_bytes": 109970049, "num_examples": 3043}, {"name": "validated", "num_bytes": 327049001, "num_examples": 12552}, {"name": "invalidated", "num_bytes": 13462567, "num_examples": 462}], "download_size": 421434184, "dataset_size": 626182616}, {"config_name": "ta", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": 
"string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 69052658, "num_examples": 2009}, {"name": "test", "num_bytes": 67616865, "num_examples": 1781}, {"name": "validation", "num_bytes": 63248009, "num_examples": 1779}, {"name": "other", "num_bytes": 246650792, "num_examples": 7428}, {"name": "validated", "num_bytes": 438961956, "num_examples": 12652}, {"name": "invalidated", "num_bytes": 23587453, "num_examples": 594}], "download_size": 679766097, "dataset_size": 909117733}, {"config_name": "th", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 100435725, "num_examples": 2917}, {"name": "test", "num_bytes": 82030679, "num_examples": 2188}, {"name": "validation", "num_bytes": 63237632, "num_examples": 1922}, {"name": "other", "num_bytes": 95235301, "num_examples": 2671}, {"name": "validated", "num_bytes": 245734783, "num_examples": 7028}, {"name": "invalidated", "num_bytes": 18247080, "num_examples": 467}], "download_size": 341305736, "dataset_size": 604921200}, {"config_name": "tr", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 57879052, "num_examples": 1831}, {"name": "test", "num_bytes": 60268059, "num_examples": 1647}, {"name": "validation", "num_bytes": 54914798, "num_examples": 1647}, {"name": "other", "num_bytes": 10954154, "num_examples": 325}, {"name": "validated", "num_bytes": 585777527, "num_examples": 18685}, {"name": "invalidated", "num_bytes": 59288266, "num_examples": 1726}], "download_size": 620848700, "dataset_size": 829081856}, {"config_name": "tt", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 348132697, "num_examples": 11211}, {"name": "test", "num_bytes": 135120057, "num_examples": 4485}, {"name": "validation", "num_bytes": 61690964, "num_examples": 2127}, {"name": "other", "num_bytes": 62158038, "num_examples": 1798}, {"name": "validated", "num_bytes": 767791517, "num_examples": 25781}, {"name": "invalidated", 
"num_bytes": 10403128, "num_examples": 287}], "download_size": 777153207, "dataset_size": 1385296401}, {"config_name": "uk", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 161925063, "num_examples": 4035}, {"name": "test", "num_bytes": 138422211, "num_examples": 3235}, {"name": "validation", "num_bytes": 135483169, "num_examples": 3236}, {"name": "other", "num_bytes": 327979131, "num_examples": 8161}, {"name": "validated", "num_bytes": 889863965, "num_examples": 22337}, {"name": "invalidated", "num_bytes": 55745301, "num_examples": 1255}], "download_size": 1218559031, "dataset_size": 1709418840}, {"config_name": "vi", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6244454, "num_examples": 221}, {"name": "test", "num_bytes": 6656365, "num_examples": 198}, {"name": "validation", "num_bytes": 6531856, "num_examples": 200}, {"name": "other", "num_bytes": 31315434, "num_examples": 870}, {"name": "validated", "num_bytes": 19432595, "num_examples": 619}, {"name": "invalidated", "num_bytes": 2981661, "num_examples": 78}], "download_size": 51929480, "dataset_size": 73162365}, {"config_name": "vot", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 146467, "num_examples": 3}, {"name": "test"}, {"name": "validation"}, {"name": "other", "num_bytes": 7963322, "num_examples": 411}, {"name": "validated", "num_bytes": 146467, "num_examples": 3}, {"name": "invalidated", "num_bytes": 107949, "num_examples": 6}], "download_size": 7792602, "dataset_size": 8364205}, {"config_name": "zh-CN", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 793667379, "num_examples": 18541}, {"name": "test", "num_bytes": 420202544, "num_examples": 8760}, {"name": "validation", "num_bytes": 
396096323, "num_examples": 8743}, {"name": "other", "num_bytes": 381264783, "num_examples": 8948}, {"name": "validated", "num_bytes": 1618113625, "num_examples": 36405}, {"name": "invalidated", "num_bytes": 266234479, "num_examples": 5305}], "download_size": 2184602350, "dataset_size": 3875579133}, {"config_name": "zh-HK", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 221459521, "num_examples": 7506}, {"name": "test", "num_bytes": 217627041, "num_examples": 5172}, {"name": "validation", "num_bytes": 196071110, "num_examples": 5172}, {"name": "other", "num_bytes": 1319233252, "num_examples": 38830}, {"name": "validated", "num_bytes": 1482087591, "num_examples": 41835}, {"name": "invalidated", "num_bytes": 124170969, "num_examples": 2999}], "download_size": 2774145806, "dataset_size": 3560649484}, {"config_name": "zh-TW", "features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 97323787, "num_examples": 3507}, {"name": "test", "num_bytes": 85512325, "num_examples": 2895}, {"name": "validation", "num_bytes": 80402637, "num_examples": 2895}, {"name": "other", "num_bytes": 623801957, "num_examples": 22477}, {"name": "validated", "num_bytes": 1568842090, "num_examples": 61232}, {"name": "invalidated", "num_bytes": 100241443, "num_examples": 3584}], "download_size": 2182836295, "dataset_size": 2556124239}]}
2024-02-13T08:55:48+00:00
[]
[ "ab", "ar", "as", "br", "ca", "cnh", "cs", "cv", "cy", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "hi", "hsb", "hu", "ia", "id", "it", "ja", "ka", "kab", "ky", "lg", "lt", "lv", "mn", "mt", "nl", "or", "pa", "pl", "pt", "rm", "ro", "ru", "rw", "sah", "sl", "sv", "ta", "th", "tr", "tt", "uk", "vi", "vot", "zh" ]
TAGS #task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-extended|common_voice #language-Abkhazian #language-Arabic #language-Assamese #language-Breton #language-Catalan #language-Hakha Chin #language-Czech #language-Chuvash #language-Welsh #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Hindi #language-Upper Sorbian #language-Hungarian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Italian #language-Japanese #language-Georgian #language-Kabyle #language-Kirghiz #language-Ganda #language-Lithuanian #language-Latvian #language-Mongolian #language-Maltese #language-Dutch #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Portuguese #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Yakut #language-Slovenian #language-Swedish #language-Tamil #language-Thai #language-Turkish #language-Tatar #language-Ukrainian #language-Vietnamese #language-Votic #language-Chinese #license-cc0-1.0 #region-us
# Dataset Card for common_voice <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Deprecated:</b> Dataset "common_voice" is deprecated and will soon be deleted. Use datasets under <a href="URL organisation instead. For example, you can load <a href="URL Voice 13</a> dataset via <code>load_dataset("mozilla-foundation/common_voice_13_0", "en")</code></p> </div> ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help train the accuracy of speech recognition engines. The dataset currently consists of 7,335 validated hours in 60 languages, but we're always adding more voices and languages. Take a look at our Languages page to request a language or start contributing. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, called path, and its sentence. Additional fields include accent, age, client_id, up_votes, down_votes, gender, locale and segment. ' {'accent': 'netherlands', 'age': 'fourties', 'client_id': 'bbbcb732e0f422150c30ff3654bbab572e2a617da107bca22ff8b89ab2e4f124d03b6a92c48322862f60bd0179ae07baf0f9b4f9c4e11d581e0cec70f703ba54', 'down_votes': 0, 'gender': 'male', 'locale': 'nl', 'path': 'nl/clips/common_voice_nl_23522441.mp3', 'segment': "''", 'sentence': 'Ik vind dat een dubieuze procedure.', 'up_votes': 2, 'audio': {'path': 'nl/clips/common_voice_nl_23522441.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000} ' ### Data Fields client_id: An id for which client (voice) made the recording path: The path to the audio file audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. sentence: The sentence the user was prompted to speak up_votes: How many upvotes the audio file has received from reviewers down_votes: How many downvotes the audio file has received from reviewers age: The age of the speaker. 
gender: The gender of the speaker accent: Accent of the speaker locale: The locale of the speaker segment: Usually empty field ### Data Splits The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other. The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality. The invalidated data is data that has been invalidated by reviewers and received downvotes indicating that the data is of low quality. The reported data is data that has been reported, for different reasons. The other data is data that has not yet been reviewed. The dev, test and train portions are all data that has been reviewed, deemed of high quality and split into dev, test and train. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ## Considerations for Using the Data ### Social Impact of Dataset The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Public Domain, CC-0 ### Contributions Thanks to @BirgerMoell for adding this dataset.
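The Data Fields note above (query the sample index before the `"audio"` column) is the part of this card that is easiest to get wrong, so here is a minimal usage sketch. It is illustrative only: the `mozilla-foundation/common_voice_13_0` repo id and the `"en"` config come from the deprecation notice at the top of the card, the field names come from the Data Fields section, and newer Common Voice releases are gated, so loading them may require logging in to the Hub and accepting the terms first.

```python
from datasets import load_dataset

# Repo id and config taken from the deprecation notice above; newer Common Voice
# releases are gated, so a Hub login / terms acceptance may be required.
cv = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="validation")

# Recommended pattern: index the sample BEFORE touching the "audio" column,
# so only this one clip is decoded and resampled.
sample = cv[0]
audio = sample["audio"]  # {"path": ..., "array": ..., "sampling_rate": ...}
print(sample["sentence"])
print(audio["sampling_rate"], audio["array"].shape)

# Anti-pattern warned about above: cv["audio"][0] would decode and resample
# every clip in the split before returning the first one.
```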
[ "# Dataset Card for common_voice\n\n<div class=\"course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400\">\n<p><b>Deprecated:</b> Dataset \"common_voice\" is deprecated and will soon be deleted. Use datasets under <a href=\"URL organisation instead. For example, you can load <a href=\"URL Voice 13</a> dataset via <code>load_dataset(\"mozilla-foundation/common_voice_13_0\", \"en\")</code></p>\n</div>", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help train the accuracy of speech recognition engines.\n\nThe dataset currently consists of 7,335 validated hours in 60 languages, but we\u0019re always adding more voices and languages. Take a look at our Languages page to request a language or start contributing.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nA typical data point comprises the path to the audio file, called path and its sentence. Additional fields include accent, age, client_id, up_votes down_votes, gender, locale and segment.\n\n'\n{'accent': 'netherlands', 'age': 'fourties', 'client_id': 'bbbcb732e0f422150c30ff3654bbab572e2a617da107bca22ff8b89ab2e4f124d03b6a92c48322862f60bd0179ae07baf0f9b4f9c4e11d581e0cec70f703ba54', 'down_votes': 0, 'gender': 'male', 'locale': 'nl', 'path': 'nl/clips/common_voice_nl_23522441.mp3', 'segment': \"''\", 'sentence': 'Ik vind dat een dubieuze procedure.', 'up_votes': 2, 'audio': {'path': 'nl/clips/common_voice_nl_23522441.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000}\n'", "### Data Fields\n\nclient_id: An id for which client (voice) made the recording\n\npath: The path to the audio file\n\naudio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. 
Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\nsentence: The sentence the user was prompted to speak\n\nup_votes: How many upvotes the audio file has received from reviewers\n\ndown_votes: How many downvotes the audio file has received from reviewers\n\nage: The age of the speaker.\n\ngender: The gender of the speaker\n\naccent: Accent of the speaker\n\nlocale: The locale of the speaker\n\nsegment: Usually empty field", "### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and recieved upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand recieved downvotes that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nPublic Domain, CC-0", "### Contributions\n\nThanks to @BirgerMoell for adding this dataset." ]
[ "TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-extended|common_voice #language-Abkhazian #language-Arabic #language-Assamese #language-Breton #language-Catalan #language-Hakha Chin #language-Czech #language-Chuvash #language-Welsh #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Hindi #language-Upper Sorbian #language-Hungarian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Italian #language-Japanese #language-Georgian #language-Kabyle #language-Kirghiz #language-Ganda #language-Lithuanian #language-Latvian #language-Mongolian #language-Maltese #language-Dutch #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Portuguese #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Yakut #language-Slovenian #language-Swedish #language-Tamil #language-Thai #language-Turkish #language-Tatar #language-Ukrainian #language-Vietnamese #language-Votic #language-Chinese #license-cc0-1.0 #region-us \n", "# Dataset Card for common_voice\n\n<div class=\"course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400\">\n<p><b>Deprecated:</b> Dataset \"common_voice\" is deprecated and will soon be deleted. Use datasets under <a href=\"URL organisation instead. For example, you can load <a href=\"URL Voice 13</a> dataset via <code>load_dataset(\"mozilla-foundation/common_voice_13_0\", \"en\")</code></p>\n</div>", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help train the accuracy of speech recognition engines.\n\nThe dataset currently consists of 7,335 validated hours in 60 languages, but we\u0019re always adding more voices and languages. Take a look at our Languages page to request a language or start contributing.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nA typical data point comprises the path to the audio file, called path and its sentence. 
Additional fields include accent, age, client_id, up_votes down_votes, gender, locale and segment.\n\n'\n{'accent': 'netherlands', 'age': 'fourties', 'client_id': 'bbbcb732e0f422150c30ff3654bbab572e2a617da107bca22ff8b89ab2e4f124d03b6a92c48322862f60bd0179ae07baf0f9b4f9c4e11d581e0cec70f703ba54', 'down_votes': 0, 'gender': 'male', 'locale': 'nl', 'path': 'nl/clips/common_voice_nl_23522441.mp3', 'segment': \"''\", 'sentence': 'Ik vind dat een dubieuze procedure.', 'up_votes': 2, 'audio': {'path': 'nl/clips/common_voice_nl_23522441.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000}\n'", "### Data Fields\n\nclient_id: An id for which client (voice) made the recording\n\npath: The path to the audio file\n\naudio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\nsentence: The sentence the user was prompted to speak\n\nup_votes: How many upvotes the audio file has received from reviewers\n\ndown_votes: How many downvotes the audio file has received from reviewers\n\nage: The age of the speaker.\n\ngender: The gender of the speaker\n\naccent: Accent of the speaker\n\nlocale: The locale of the speaker\n\nsegment: Usually empty field", "### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and recieved upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand recieved downvotes that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nPublic Domain, CC-0", "### Contributions\n\nThanks to @BirgerMoell for adding this dataset." ]
[ 460, 211, 120, 27, 112, 10, 5, 6, 350, 282, 147, 5, 7, 4, 10, 10, 5, 5, 9, 42, 8, 41, 8, 7, 5, 6, 11, 18 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-extended|common_voice #language-Abkhazian #language-Arabic #language-Assamese #language-Breton #language-Catalan #language-Hakha Chin #language-Czech #language-Chuvash #language-Welsh #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Hindi #language-Upper Sorbian #language-Hungarian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Italian #language-Japanese #language-Georgian #language-Kabyle #language-Kirghiz #language-Ganda #language-Lithuanian #language-Latvian #language-Mongolian #language-Maltese #language-Dutch #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Portuguese #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Yakut #language-Slovenian #language-Swedish #language-Tamil #language-Thai #language-Turkish #language-Tatar #language-Ukrainian #language-Vietnamese #language-Votic #language-Chinese #license-cc0-1.0 #region-us \n", "passage: # Dataset Card for common_voice\n\n<div class=\"course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400\">\n<p><b>Deprecated:</b> Dataset \"common_voice\" is deprecated and will soon be deleted. Use datasets under <a href=\"URL organisation instead. For example, you can load <a href=\"URL Voice 13</a> dataset via <code>load_dataset(\"mozilla-foundation/common_voice_13_0\", \"en\")</code></p>\n</div>## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help train the accuracy of speech recognition engines.\n\nThe dataset currently consists of 7,335 validated hours in 60 languages, but we\u0019re always adding more voices and languages. Take a look at our Languages page to request a language or start contributing.### Supported Tasks and Leaderboards### Languages\n\nEnglish## Dataset Structure", "passage: ### Data Instances\n\nA typical data point comprises the path to the audio file, called path and its sentence. 
Additional fields include accent, age, client_id, up_votes down_votes, gender, locale and segment.\n\n'\n{'accent': 'netherlands', 'age': 'fourties', 'client_id': 'bbbcb732e0f422150c30ff3654bbab572e2a617da107bca22ff8b89ab2e4f124d03b6a92c48322862f60bd0179ae07baf0f9b4f9c4e11d581e0cec70f703ba54', 'down_votes': 0, 'gender': 'male', 'locale': 'nl', 'path': 'nl/clips/common_voice_nl_23522441.mp3', 'segment': \"''\", 'sentence': 'Ik vind dat een dubieuze procedure.', 'up_votes': 2, 'audio': {'path': 'nl/clips/common_voice_nl_23522441.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000}\n'### Data Fields\n\nclient_id: An id for which client (voice) made the recording\n\npath: The path to the audio file\n\naudio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\nsentence: The sentence the user was prompted to speak\n\nup_votes: How many upvotes the audio file has received from reviewers\n\ndown_votes: How many downvotes the audio file has received from reviewers\n\nage: The age of the speaker.\n\ngender: The gender of the speaker\n\naccent: Accent of the speaker\n\nlocale: The locale of the speaker\n\nsegment: Usually empty field### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and recieved upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand recieved downvotes that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?" ]
94630fe30dad47192a8546eb75f094926d47e155
# Dataset Card for "commonsense_qa" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.tau-nlp.org/commonsenseqa - **Repository:** https://github.com/jonathanherzig/commonsenseqa - **Paper:** https://arxiv.org/abs/1811.00937 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 4.68 MB - **Size of the generated dataset:** 2.18 MB - **Total amount of disk used:** 6.86 MB ### Dataset Summary CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers . It contains 12,102 questions with one correct answer and four distractor answers. The dataset is provided in two major training/validation/testing set splits: "Random split" which is the main evaluation split, and "Question token split", see paper for details. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages The dataset is in English (`en`). ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 4.68 MB - **Size of the generated dataset:** 2.18 MB - **Total amount of disk used:** 6.86 MB An example of 'train' looks as follows: ``` {'id': '075e483d21c29a511267ef62bedc0461', 'question': 'The sanctions against the school were a punishing blow, and they seemed to what the efforts the school had made to change?', 'question_concept': 'punishing', 'choices': {'label': ['A', 'B', 'C', 'D', 'E'], 'text': ['ignore', 'enforce', 'authoritarian', 'yell at', 'avoid']}, 'answerKey': 'A'} ``` ### Data Fields The data fields are the same among all splits. #### default - `id` (`str`): Unique ID. - `question`: a `string` feature. - `question_concept` (`str`): ConceptNet concept associated to the question. - `choices`: a dictionary feature containing: - `label`: a `string` feature. - `text`: a `string` feature. - `answerKey`: a `string` feature. 
### Data Splits | name | train | validation | test | |---------|------:|-----------:|-----:| | default | 9741 | 1221 | 1140 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The dataset is licensed under the MIT License. See: https://github.com/jonathanherzig/commonsenseqa/issues/5 ### Citation Information ``` @inproceedings{talmor-etal-2019-commonsenseqa, title = "{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge", author = "Talmor, Alon and Herzig, Jonathan and Lourie, Nicholas and Berant, Jonathan", booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", month = jun, year = "2019", address = "Minneapolis, Minnesota", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N19-1421", doi = "10.18653/v1/N19-1421", pages = "4149--4158", archivePrefix = "arXiv", eprint = "1811.00937", primaryClass = "cs", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
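The fields above are enough to reconstruct the gold answer text, which the card does not show explicitly; the sketch below is one way to do it. Assumptions are flagged inline: the `tau/commonsense_qa` repo id is the one this card belongs to, and the guard for a missing `answerKey` anticipates a blind test split rather than anything stated in the card.

```python
from datasets import load_dataset

# Repo id from this card; field names from the Data Fields section above.
cqa = load_dataset("tau/commonsense_qa", split="validation")

def gold_answer(example):
    """Map the answerKey label (A-E) back to the corresponding choice text."""
    labels = example["choices"]["label"]
    texts = example["choices"]["text"]
    key = example["answerKey"]
    if key not in labels:  # assumption: test examples may ship without a gold key
        return None
    return texts[labels.index(key)]

ex = cqa[0]
print(ex["question"])
print(ex["answerKey"], "->", gold_answer(ex))
```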
tau/commonsense_qa
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:mit", "arxiv:1811.00937", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "paperswithcode_id": "commonsenseqa", "pretty_name": "CommonsenseQA", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "question_concept", "dtype": "string"}, {"name": "choices", "sequence": [{"name": "label", "dtype": "string"}, {"name": "text", "dtype": "string"}]}, {"name": "answerKey", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2207794, "num_examples": 9741}, {"name": "validation", "num_bytes": 273848, "num_examples": 1221}, {"name": "test", "num_bytes": 257842, "num_examples": 1140}], "download_size": 1558570, "dataset_size": 2739484}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-04T07:44:16+00:00
[ "1811.00937" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #arxiv-1811.00937 #region-us
Dataset Card for "commonsense\_qa" ================================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Point of Contact: * Size of downloaded dataset files: 4.68 MB * Size of the generated dataset: 2.18 MB * Total amount of disk used: 6.86 MB ### Dataset Summary CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers . It contains 12,102 questions with one correct answer and four distractor answers. The dataset is provided in two major training/validation/testing set splits: "Random split" which is the main evaluation split, and "Question token split", see paper for details. ### Supported Tasks and Leaderboards ### Languages The dataset is in English ('en'). Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 4.68 MB * Size of the generated dataset: 2.18 MB * Total amount of disk used: 6.86 MB An example of 'train' looks as follows: ### Data Fields The data fields are the same among all splits. #### default * 'id' ('str'): Unique ID. * 'question': a 'string' feature. * 'question\_concept' ('str'): ConceptNet concept associated to the question. * 'choices': a dictionary feature containing: + 'label': a 'string' feature. + 'text': a 'string' feature. * 'answerKey': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The dataset is licensed under the MIT License. See: URL ### Contributions Thanks to @thomwolf, @lewtun, @albertvillanova, @patrickvonplaten for adding this dataset.
[ "### Dataset Summary\n\n\nCommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge\nto predict the correct answers . It contains 12,102 questions with one correct answer and four distractor answers.\nThe dataset is provided in two major training/validation/testing set splits: \"Random split\" which is the main evaluation\nsplit, and \"Question token split\", see paper for details.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset is in English ('en').\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 4.68 MB\n* Size of the generated dataset: 2.18 MB\n* Total amount of disk used: 6.86 MB\n\n\nAn example of 'train' looks as follows:", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'id' ('str'): Unique ID.\n* 'question': a 'string' feature.\n* 'question\\_concept' ('str'): ConceptNet concept associated to the question.\n* 'choices': a dictionary feature containing:\n\t+ 'label': a 'string' feature.\n\t+ 'text': a 'string' feature.\n* 'answerKey': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe dataset is licensed under the MIT License.\n\n\nSee: URL", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @albertvillanova, @patrickvonplaten for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #arxiv-1811.00937 #region-us \n", "### Dataset Summary\n\n\nCommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge\nto predict the correct answers . It contains 12,102 questions with one correct answer and four distractor answers.\nThe dataset is provided in two major training/validation/testing set splits: \"Random split\" which is the main evaluation\nsplit, and \"Question token split\", see paper for details.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset is in English ('en').\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 4.68 MB\n* Size of the generated dataset: 2.18 MB\n* Total amount of disk used: 6.86 MB\n\n\nAn example of 'train' looks as follows:", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'id' ('str'): Unique ID.\n* 'question': a 'string' feature.\n* 'question\\_concept' ('str'): ConceptNet concept associated to the question.\n* 'choices': a dictionary feature containing:\n\t+ 'label': a 'string' feature.\n\t+ 'text': a 'string' feature.\n* 'answerKey': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe dataset is licensed under the MIT License.\n\n\nSee: URL", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @albertvillanova, @patrickvonplaten for adding this dataset." ]
[ 100, 100, 10, 22, 6, 49, 17, 100, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 20, 34 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #arxiv-1811.00937 #region-us \n### Dataset Summary\n\n\nCommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge\nto predict the correct answers . It contains 12,102 questions with one correct answer and four distractor answers.\nThe dataset is provided in two major training/validation/testing set splits: \"Random split\" which is the main evaluation\nsplit, and \"Question token split\", see paper for details.### Supported Tasks and Leaderboards### Languages\n\n\nThe dataset is in English ('en').\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 4.68 MB\n* Size of the generated dataset: 2.18 MB\n* Total amount of disk used: 6.86 MB\n\n\nAn example of 'train' looks as follows:### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'id' ('str'): Unique ID.\n* 'question': a 'string' feature.\n* 'question\\_concept' ('str'): ConceptNet concept associated to the question.\n* 'choices': a dictionary feature containing:\n\t+ 'label': a 'string' feature.\n\t+ 'text': a 'string' feature.\n* 'answerKey': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases" ]
71b758ecc688b2822d07ffa7f8393299f1dc7cac
# Dataset Card for Mathematics Aptitude Test of Heuristics (MATH) dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/hendrycks/math - **Repository:** https://github.com/hendrycks/math - **Paper:** https://arxiv.org/pdf/2103.03874.pdf - **Leaderboard:** N/A - **Point of Contact:** Dan Hendrycks ### Dataset Summary The Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems from mathematics competitions, including the AMC 10, AMC 12, AIME, and more. Each problem in MATH has a full step-by-step solution, which can be used to teach models to generate answer derivations and explanations. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances A data instance consists of a competition math problem and its step-by-step solution written in LaTeX and natural language. The step-by-step solution contains the final answer enclosed in LaTeX's `\boxed` tag. An example from the dataset is: ``` {'problem': 'A board game spinner is divided into three parts labeled $A$, $B$ and $C$. The probability of the spinner landing on $A$ is $\\frac{1}{3}$ and the probability of the spinner landing on $B$ is $\\frac{5}{12}$. What is the probability of the spinner landing on $C$? Express your answer as a common fraction.', 'level': 'Level 1', 'type': 'Counting & Probability', 'solution': 'The spinner is guaranteed to land on exactly one of the three regions, so we know that the sum of the probabilities of it landing in each region will be 1. If we let the probability of it landing in region $C$ be $x$, we then have the equation $1 = \\frac{5}{12}+\\frac{1}{3}+x$, from which we have $x=\\boxed{\\frac{1}{4}}$.'} ``` ### Data Fields * `problem`: The competition math problem. * `solution`: The step-by-step solution. * `level`: The problem's difficulty level from 'Level 1' to 'Level 5', where a subject's easiest problems for humans are assigned to 'Level 1' and a subject's hardest problems are assigned to 'Level 5'. * `type`: The subject of the problem: Algebra, Counting & Probability, Geometry, Intermediate Algebra, Number Theory, Prealgebra and Precalculus. ### Data Splits * train: 7,500 examples * test: 5,000 examples ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information https://github.com/hendrycks/math/blob/main/LICENSE ### Citation Information ```bibtex @article{hendrycksmath2021, title={Measuring Mathematical Problem Solving With the MATH Dataset}, author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt}, journal={arXiv preprint arXiv:2103.03874}, year={2021} } ``` ### Contributions Thanks to [@hacobe](https://github.com/hacobe) for adding this dataset.
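Since the card notes that each solution carries its final answer inside a LaTeX `\boxed{...}` tag, a common preprocessing step is to pull that answer back out for exact-match evaluation. The sketch below is one way to do it, assuming the braces inside the box (e.g. `\frac{1}{4}`) are balanced; the repo id is the one this card belongs to, and recent `datasets` releases may require `trust_remote_code=True` for script-based datasets such as this one.

```python
from datasets import load_dataset

# Repo id from this card; add trust_remote_code=True on newer `datasets` versions if prompted.
math_test = load_dataset("hendrycks/competition_math", split="test")

def extract_boxed(solution: str):
    """Return the contents of the last \\boxed{...} group, walking braces so
    nested groups such as \\frac{1}{4} stay intact."""
    start = solution.rfind(r"\boxed{")
    if start == -1:
        return None
    i, depth, out = start + len(r"\boxed{"), 1, []
    while i < len(solution) and depth:
        ch = solution[i]
        depth += (ch == "{") - (ch == "}")
        if depth:  # still inside the box, keep the character
            out.append(ch)
        i += 1
    return "".join(out)

ex = math_test[0]
print(ex["type"], "/", ex["level"])
print("final answer:", extract_boxed(ex["solution"]))
```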
hendrycks/competition_math
[ "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "explanation-generation", "arxiv:2103.03874", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "Mathematics Aptitude Test of Heuristics (MATH)", "tags": ["explanation-generation"], "dataset_info": {"features": [{"name": "problem", "dtype": "string"}, {"name": "level", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "solution", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5984788, "num_examples": 7500}, {"name": "test", "num_bytes": 3732575, "num_examples": 5000}], "download_size": 20327424, "dataset_size": 9717363}}
2023-06-08T05:40:09+00:00
[ "2103.03874" ]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #explanation-generation #arxiv-2103.03874 #region-us
# Dataset Card for Mathematics Aptitude Test of Heuristics (MATH) dataset ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: N/A - Point of Contact: Dan Hendrycks ### Dataset Summary The Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems from mathematics competitions, including the AMC 10, AMC 12, AIME, and more. Each problem in MATH has a full step-by-step solution, which can be used to teach models to generate answer derivations and explanations. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances A data instance consists of a competition math problem and its step-by-step solution written in LaTeX and natural language. The step-by-step solution contains the final answer enclosed in LaTeX's '\boxed' tag. An example from the dataset is: ### Data Fields * 'problem': The competition math problem. * 'solution': The step-by-step solution. * 'level': The problem's difficulty level from 'Level 1' to 'Level 5', where a subject's easiest problems for humans are assigned to 'Level 1' and a subject's hardest problems are assigned to 'Level 5'. * 'type': The subject of the problem: Algebra, Counting & Probability, Geometry, Intermediate Algebra, Number Theory, Prealgebra and Precalculus. ### Data Splits * train: 7,500 examples * test: 5,000 examples ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information URL ### Contributions Thanks to @hacobe for adding this dataset.
[ "# Dataset Card for Mathematics Aptitude Test of Heuristics (MATH) dataset", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: N/A\n- Point of Contact: Dan Hendrycks", "### Dataset Summary\n\nThe Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems\nfrom mathematics competitions, including the AMC 10, AMC 12, AIME, and more. \nEach problem in MATH has a full step-by-step solution, which can be used to teach\nmodels to generate answer derivations and explanations.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nA data instance consists of a competition math problem and its step-by-step solution written in LaTeX and natural language. The step-by-step solution contains the final answer enclosed in LaTeX's '\\boxed' tag.\n\nAn example from the dataset is:", "### Data Fields\n\n* 'problem': The competition math problem.\n* 'solution': The step-by-step solution.\n* 'level': The problem's difficulty level from 'Level 1' to 'Level 5', where a subject's easiest problems for humans are assigned to 'Level 1' and a subject's hardest problems are assigned to 'Level 5'.\n* 'type': The subject of the problem: Algebra, Counting & Probability, Geometry, Intermediate Algebra, Number Theory, Prealgebra and Precalculus.", "### Data Splits\n\n* train: 7,500 examples\n* test: 5,000 examples", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nURL", "### Contributions\n\nThanks to @hacobe for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #explanation-generation #arxiv-2103.03874 #region-us \n", "# Dataset Card for Mathematics Aptitude Test of Heuristics (MATH) dataset", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: N/A\n- Point of Contact: Dan Hendrycks", "### Dataset Summary\n\nThe Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems\nfrom mathematics competitions, including the AMC 10, AMC 12, AIME, and more. \nEach problem in MATH has a full step-by-step solution, which can be used to teach\nmodels to generate answer derivations and explanations.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nA data instance consists of a competition math problem and its step-by-step solution written in LaTeX and natural language. The step-by-step solution contains the final answer enclosed in LaTeX's '\\boxed' tag.\n\nAn example from the dataset is:", "### Data Fields\n\n* 'problem': The competition math problem.\n* 'solution': The step-by-step solution.\n* 'level': The problem's difficulty level from 'Level 1' to 'Level 5', where a subject's easiest problems for humans are assigned to 'Level 1' and a subject's hardest problems are assigned to 'Level 5'.\n* 'type': The subject of the problem: Algebra, Counting & Probability, Geometry, Intermediate Algebra, Number Theory, Prealgebra and Precalculus.", "### Data Splits\n\n* train: 7,500 examples\n* test: 5,000 examples", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nURL", "### Contributions\n\nThanks to @hacobe for adding this dataset." ]
[ 96, 21, 125, 34, 81, 10, 4, 6, 69, 136, 20, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 7, 17 ]
[ "passage: TAGS\n#task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #explanation-generation #arxiv-2103.03874 #region-us \n# Dataset Card for Mathematics Aptitude Test of Heuristics (MATH) dataset## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: N/A\n- Point of Contact: Dan Hendrycks### Dataset Summary\n\nThe Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems\nfrom mathematics competitions, including the AMC 10, AMC 12, AIME, and more. \nEach problem in MATH has a full step-by-step solution, which can be used to teach\nmodels to generate answer derivations and explanations.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances\n\nA data instance consists of a competition math problem and its step-by-step solution written in LaTeX and natural language. The step-by-step solution contains the final answer enclosed in LaTeX's '\\boxed' tag.\n\nAn example from the dataset is:" ]
1e853ebccb1e19d7d4cf3dcc7b04c39497064031
# Dataset Card for "compguesswhat" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://compguesswhat.github.io/](https://compguesswhat.github.io/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** https://arxiv.org/abs/2006.02174 - **Paper:** https://doi.org/10.18653/v1/2020.acl-main.682 - **Point of Contact:** [Alessandro Suglia](mailto:alessandro.suglia@gmail.com) - **Size of downloaded dataset files:** 112.05 MB - **Size of the generated dataset:** 271.11 MB - **Total amount of disk used:** 383.16 MB ### Dataset Summary CompGuessWhat?! is an instance of a multi-task framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. Use this dataset if you want to use the set of games whose reference scene is an image in VisualGenome. Visit the website for more details: https://compguesswhat.github.io ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### compguesswhat-original - **Size of downloaded dataset files:** 107.21 MB - **Size of the generated dataset:** 174.37 MB - **Total amount of disk used:** 281.57 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "id": 2424, "image": "{\"coco_url\": \"http://mscoco.org/images/270512\", \"file_name\": \"COCO_train2014_000000270512.jpg\", \"flickr_url\": \"http://farm6.stat...", "objects": "{\"area\": [1723.5133056640625, 4838.5361328125, 287.44476318359375, 44918.7109375, 3688.09375, 522.1935424804688], \"bbox\": [[5.61...", "qas": { "answer": ["Yes", "No", "No", "Yes"], "id": [4983, 4996, 5006, 5017], "question": ["Is it in the foreground?", "Does it have wings?", "Is it a person?", "Is it a vehicle?"] }, "status": "success", "target_id": 1197044, "timestamp": "2016-07-08 15:07:38" } ``` #### compguesswhat-zero_shot - **Size of downloaded dataset files:** 4.84 MB - **Size of the generated dataset:** 96.74 MB - **Total amount of disk used:** 101.59 MB An example of 'nd_valid' looks as follows. 
``` This example was too long and was cropped: { "id": 0, "image": { "coco_url": "https://s3.amazonaws.com/nocaps/val/004e21eb2e686f40.jpg", "date_captured": "2018-11-06 11:04:33", "file_name": "004e21eb2e686f40.jpg", "height": 1024, "id": 6, "license": 0, "open_images_id": "004e21eb2e686f40", "width": 768 }, "objects": "{\"IsOccluded\": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], \"IsTruncated\": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], \"area\": [3...", "status": "incomplete", "target_id": "004e21eb2e686f40_30" } ``` ### Data Fields The data fields are the same among all splits. #### compguesswhat-original - `id`: a `int32` feature. - `target_id`: a `int32` feature. - `timestamp`: a `string` feature. - `status`: a `string` feature. - `id`: a `int32` feature. - `file_name`: a `string` feature. - `flickr_url`: a `string` feature. - `coco_url`: a `string` feature. - `height`: a `int32` feature. - `width`: a `int32` feature. - `width`: a `int32` feature. - `height`: a `int32` feature. - `url`: a `string` feature. - `coco_id`: a `int32` feature. - `flickr_id`: a `string` feature. - `image_id`: a `string` feature. - `qas`: a dictionary feature containing: - `question`: a `string` feature. - `answer`: a `string` feature. - `id`: a `int32` feature. - `objects`: a dictionary feature containing: - `id`: a `int32` feature. - `bbox`: a `list` of `float32` features. - `category`: a `string` feature. - `area`: a `float32` feature. - `category_id`: a `int32` feature. - `segment`: a dictionary feature containing: - `feature`: a `float32` feature. #### compguesswhat-zero_shot - `id`: a `int32` feature. - `target_id`: a `string` feature. - `status`: a `string` feature. - `id`: a `int32` feature. - `file_name`: a `string` feature. - `coco_url`: a `string` feature. - `height`: a `int32` feature. - `width`: a `int32` feature. - `license`: a `int32` feature. - `open_images_id`: a `string` feature. - `date_captured`: a `string` feature. - `objects`: a dictionary feature containing: - `id`: a `string` feature. - `bbox`: a `list` of `float32` features. - `category`: a `string` feature. - `area`: a `float32` feature. - `category_id`: a `int32` feature. - `IsOccluded`: a `int32` feature. - `IsTruncated`: a `int32` feature. - `segment`: a dictionary feature containing: - `MaskPath`: a `string` feature. - `LabelName`: a `string` feature. - `BoxID`: a `string` feature. - `BoxXMin`: a `string` feature. - `BoxXMax`: a `string` feature. - `BoxYMin`: a `string` feature. - `BoxYMax`: a `string` feature. - `PredictedIoU`: a `string` feature. - `Clicks`: a `string` feature. ### Data Splits #### compguesswhat-original | |train|validation|test| |----------------------|----:|---------:|---:| |compguesswhat-original|46341| 9738|9621| #### compguesswhat-zero_shot | |nd_valid|od_valid|nd_test|od_test| |-----------------------|-------:|-------:|------:|------:| |compguesswhat-zero_shot| 5343| 5372| 13836| 13300| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{suglia-etal-2020-compguesswhat, title = "{C}omp{G}uess{W}hat?!: A Multi-task Evaluation Framework for Grounded Language Learning", author = "Suglia, Alessandro and Konstas, Ioannis and Vanzo, Andrea and Bastianelli, Emanuele and Elliott, Desmond and Frank, Stella and Lemon, Oliver", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.682", pages = "7625--7641", abstract = "Approaches to Grounded Language Learning are commonly focused on a single task-based final performance measure which may not depend on desirable properties of the learned hidden representations, such as their ability to predict object attributes or generalize to unseen situations. To remedy this, we present GroLLA, an evaluation framework for Grounded Language Learning with Attributes based on three sub-tasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation. We also propose a new dataset CompGuessWhat?! as an instance of this framework for evaluating the quality of learned neural representations, in particular with respect to attribute grounding. To this end, we extend the original GuessWhat?! dataset by including a semantic layer on top of the perceptual one. Specifically, we enrich the VisualGenome scene graphs associated with the GuessWhat?! images with several attributes from resources such as VISA and ImSitu. We then compare several hidden state representations from current state-of-the-art approaches to Grounded Language Learning. By using diagnostic classifiers, we show that current models{'} learned representations are not expressive enough to encode object attributes (average F1 of 44.27). 
In addition, they do not learn strategies nor representations that are robust enough to perform well when novel scenes or objects are involved in gameplay (zero-shot best accuracy 50.06{\%}).", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@aleSuglia](https://github.com/aleSuglia), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
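As a rough illustration of how the fields described in this card fit together, the sketch below loads the `compguesswhat-original` configuration and walks one game's question/answer sequence. The Hub repository ID (`compguesswhat`) is assumed from the card title; the configuration, split, and field names come from the card itself, and sequence features are accessed as parallel lists, which is how the `datasets` library typically exposes them.

```
# Sketch: iterate one CompGuessWhat?! game from the "compguesswhat-original" config.
# Assumption: the dataset is reachable on the Hub under the ID "compguesswhat".
from datasets import load_dataset

valid = load_dataset("compguesswhat", "compguesswhat-original", split="validation")
game = valid[0]

print(game["status"], game["image"]["file_name"])

# `qas` is a sequence feature, so its sub-fields are exposed as parallel lists.
for question, answer in zip(game["qas"]["question"], game["qas"]["answer"]):
    print(f"Q: {question}  A: {answer}")

# The target object can be looked up by id among the candidate objects.
objects = game["objects"]
target_idx = objects["id"].index(game["target_id"])
print("target category:", objects["category"][target_idx])
```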
compguesswhat
[ "task_categories:visual-question-answering", "task_ids:visual-question-answering", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|other-guesswhat", "language:en", "license:unknown", "arxiv:2006.02174", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|other-guesswhat"], "task_categories": ["visual-question-answering"], "task_ids": ["visual-question-answering"], "paperswithcode_id": "compguesswhat", "pretty_name": "CompGuessWhat?!", "dataset_info": [{"config_name": "compguesswhat-original", "features": [{"name": "id", "dtype": "int32"}, {"name": "target_id", "dtype": "int32"}, {"name": "timestamp", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "image", "struct": [{"name": "id", "dtype": "int32"}, {"name": "file_name", "dtype": "string"}, {"name": "flickr_url", "dtype": "string"}, {"name": "coco_url", "dtype": "string"}, {"name": "height", "dtype": "int32"}, {"name": "width", "dtype": "int32"}, {"name": "visual_genome", "struct": [{"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "url", "dtype": "string"}, {"name": "coco_id", "dtype": "int32"}, {"name": "flickr_id", "dtype": "string"}, {"name": "image_id", "dtype": "string"}]}]}, {"name": "qas", "sequence": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "id", "dtype": "int32"}]}, {"name": "objects", "sequence": [{"name": "id", "dtype": "int32"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "category", "dtype": "string"}, {"name": "area", "dtype": "float32"}, {"name": "category_id", "dtype": "int32"}, {"name": "segment", "sequence": {"sequence": "float32"}}]}], "splits": [{"name": "train", "num_bytes": 123556580, "num_examples": 46341}, {"name": "validation", "num_bytes": 25441428, "num_examples": 9738}, {"name": "test", "num_bytes": 25369227, "num_examples": 9621}], "download_size": 105349759, "dataset_size": 174367235}, {"config_name": "compguesswhat-zero_shot", "features": [{"name": "id", "dtype": "int32"}, {"name": "target_id", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "image", "struct": [{"name": "id", "dtype": "int32"}, {"name": "file_name", "dtype": "string"}, {"name": "coco_url", "dtype": "string"}, {"name": "height", "dtype": "int32"}, {"name": "width", "dtype": "int32"}, {"name": "license", "dtype": "int32"}, {"name": "open_images_id", "dtype": "string"}, {"name": "date_captured", "dtype": "string"}]}, {"name": "objects", "sequence": [{"name": "id", "dtype": "string"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "category", "dtype": "string"}, {"name": "area", "dtype": "float32"}, {"name": "category_id", "dtype": "int32"}, {"name": "IsOccluded", "dtype": "int32"}, {"name": "IsTruncated", "dtype": "int32"}, {"name": "segment", "sequence": [{"name": "MaskPath", "dtype": "string"}, {"name": "LabelName", "dtype": "string"}, {"name": "BoxID", "dtype": "string"}, {"name": "BoxXMin", "dtype": "string"}, {"name": "BoxXMax", "dtype": "string"}, {"name": "BoxYMin", "dtype": "string"}, {"name": "BoxYMax", "dtype": "string"}, {"name": "PredictedIoU", "dtype": "string"}, {"name": "Clicks", "dtype": "string"}]}]}], "splits": [{"name": "nd_valid", "num_bytes": 13510589, "num_examples": 5343}, {"name": "nd_test", "num_bytes": 36228021, "num_examples": 13836}, {"name": "od_valid", "num_bytes": 14051972, "num_examples": 5372}, {"name": "od_test", "num_bytes": 32950869, "num_examples": 13300}], "download_size": 6548812, "dataset_size": 96741451}], "configs": [{"config_name": "compguesswhat-original", "data_files": 
[{"split": "train", "path": "compguesswhat-original/train-*"}, {"split": "validation", "path": "compguesswhat-original/validation-*"}, {"split": "test", "path": "compguesswhat-original/test-*"}]}, {"config_name": "compguesswhat-zero_shot", "data_files": [{"split": "nd_valid", "path": "compguesswhat-zero_shot/nd_valid-*"}, {"split": "nd_test", "path": "compguesswhat-zero_shot/nd_test-*"}, {"split": "od_valid", "path": "compguesswhat-zero_shot/od_valid-*"}, {"split": "od_test", "path": "compguesswhat-zero_shot/od_test-*"}]}]}
2024-02-07T17:39:43+00:00
[ "2006.02174" ]
[ "en" ]
TAGS #task_categories-visual-question-answering #task_ids-visual-question-answering #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|other-guesswhat #language-English #license-unknown #arxiv-2006.02174 #region-us
Dataset Card for "compguesswhat" ================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: URL * Paper: URL * Point of Contact: Alessandro Suglia * Size of downloaded dataset files: 112.05 MB * Size of the generated dataset: 271.11 MB * Total amount of disk used: 383.16 MB ### Dataset Summary CompGuessWhat?! is an instance of a multi-task framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding. Use this dataset if you want to use the set of games whose reference scene is an image in VisualGenome. Visit the website for more details: URL ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### compguesswhat-original * Size of downloaded dataset files: 107.21 MB * Size of the generated dataset: 174.37 MB * Total amount of disk used: 281.57 MB An example of 'validation' looks as follows. #### compguesswhat-zero\_shot * Size of downloaded dataset files: 4.84 MB * Size of the generated dataset: 96.74 MB * Total amount of disk used: 101.59 MB An example of 'nd\_valid' looks as follows. ### Data Fields The data fields are the same among all splits. #### compguesswhat-original * 'id': a 'int32' feature. * 'target\_id': a 'int32' feature. * 'timestamp': a 'string' feature. * 'status': a 'string' feature. * 'id': a 'int32' feature. * 'file\_name': a 'string' feature. * 'flickr\_url': a 'string' feature. * 'coco\_url': a 'string' feature. * 'height': a 'int32' feature. * 'width': a 'int32' feature. * 'width': a 'int32' feature. * 'height': a 'int32' feature. * 'url': a 'string' feature. * 'coco\_id': a 'int32' feature. * 'flickr\_id': a 'string' feature. * 'image\_id': a 'string' feature. * 'qas': a dictionary feature containing: + 'question': a 'string' feature. + 'answer': a 'string' feature. + 'id': a 'int32' feature. * 'objects': a dictionary feature containing: + 'id': a 'int32' feature. + 'bbox': a 'list' of 'float32' features. + 'category': a 'string' feature. + 'area': a 'float32' feature. + 'category\_id': a 'int32' feature. + 'segment': a dictionary feature containing: - 'feature': a 'float32' feature. #### compguesswhat-zero\_shot * 'id': a 'int32' feature. * 'target\_id': a 'string' feature. * 'status': a 'string' feature. * 'id': a 'int32' feature. * 'file\_name': a 'string' feature. * 'coco\_url': a 'string' feature. * 'height': a 'int32' feature. * 'width': a 'int32' feature. * 'license': a 'int32' feature. * 'open\_images\_id': a 'string' feature. * 'date\_captured': a 'string' feature. * 'objects': a dictionary feature containing: + 'id': a 'string' feature. + 'bbox': a 'list' of 'float32' features. + 'category': a 'string' feature. + 'area': a 'float32' feature. + 'category\_id': a 'int32' feature. + 'IsOccluded': a 'int32' feature. + 'IsTruncated': a 'int32' feature. + 'segment': a dictionary feature containing: - 'MaskPath': a 'string' feature. - 'LabelName': a 'string' feature. - 'BoxID': a 'string' feature. 
- 'BoxXMin': a 'string' feature. - 'BoxXMax': a 'string' feature. - 'BoxYMin': a 'string' feature. - 'BoxYMax': a 'string' feature. - 'PredictedIoU': a 'string' feature. - 'Clicks': a 'string' feature. ### Data Splits #### compguesswhat-original #### compguesswhat-zero\_shot Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @aleSuglia, @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nCompGuessWhat?! is an instance of a multi-task framework for evaluating the quality of learned neural representations,\nin particular concerning attribute grounding. Use this dataset if you want to use the set of games whose reference\nscene is an image in VisualGenome. Visit the website for more details: URL", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### compguesswhat-original\n\n\n* Size of downloaded dataset files: 107.21 MB\n* Size of the generated dataset: 174.37 MB\n* Total amount of disk used: 281.57 MB\n\n\nAn example of 'validation' looks as follows.", "#### compguesswhat-zero\\_shot\n\n\n* Size of downloaded dataset files: 4.84 MB\n* Size of the generated dataset: 96.74 MB\n* Total amount of disk used: 101.59 MB\n\n\nAn example of 'nd\\_valid' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### compguesswhat-original\n\n\n* 'id': a 'int32' feature.\n* 'target\\_id': a 'int32' feature.\n* 'timestamp': a 'string' feature.\n* 'status': a 'string' feature.\n* 'id': a 'int32' feature.\n* 'file\\_name': a 'string' feature.\n* 'flickr\\_url': a 'string' feature.\n* 'coco\\_url': a 'string' feature.\n* 'height': a 'int32' feature.\n* 'width': a 'int32' feature.\n* 'width': a 'int32' feature.\n* 'height': a 'int32' feature.\n* 'url': a 'string' feature.\n* 'coco\\_id': a 'int32' feature.\n* 'flickr\\_id': a 'string' feature.\n* 'image\\_id': a 'string' feature.\n* 'qas': a dictionary feature containing:\n\t+ 'question': a 'string' feature.\n\t+ 'answer': a 'string' feature.\n\t+ 'id': a 'int32' feature.\n* 'objects': a dictionary feature containing:\n\t+ 'id': a 'int32' feature.\n\t+ 'bbox': a 'list' of 'float32' features.\n\t+ 'category': a 'string' feature.\n\t+ 'area': a 'float32' feature.\n\t+ 'category\\_id': a 'int32' feature.\n\t+ 'segment': a dictionary feature containing:\n\t\t- 'feature': a 'float32' feature.", "#### compguesswhat-zero\\_shot\n\n\n* 'id': a 'int32' feature.\n* 'target\\_id': a 'string' feature.\n* 'status': a 'string' feature.\n* 'id': a 'int32' feature.\n* 'file\\_name': a 'string' feature.\n* 'coco\\_url': a 'string' feature.\n* 'height': a 'int32' feature.\n* 'width': a 'int32' feature.\n* 'license': a 'int32' feature.\n* 'open\\_images\\_id': a 'string' feature.\n* 'date\\_captured': a 'string' feature.\n* 'objects': a dictionary feature containing:\n\t+ 'id': a 'string' feature.\n\t+ 'bbox': a 'list' of 'float32' features.\n\t+ 'category': a 'string' feature.\n\t+ 'area': a 'float32' feature.\n\t+ 'category\\_id': a 'int32' feature.\n\t+ 'IsOccluded': a 'int32' feature.\n\t+ 'IsTruncated': a 'int32' feature.\n\t+ 'segment': a dictionary feature containing:\n\t\t- 'MaskPath': a 'string' feature.\n\t\t- 'LabelName': a 'string' feature.\n\t\t- 'BoxID': a 'string' feature.\n\t\t- 'BoxXMin': a 'string' feature.\n\t\t- 'BoxXMax': a 'string' feature.\n\t\t- 'BoxYMin': a 'string' feature.\n\t\t- 'BoxYMax': a 'string' feature.\n\t\t- 'PredictedIoU': a 'string' feature.\n\t\t- 'Clicks': a 'string' feature.", "### Data Splits", "#### compguesswhat-original", "#### compguesswhat-zero\\_shot\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for 
Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @aleSuglia, @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-visual-question-answering #task_ids-visual-question-answering #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|other-guesswhat #language-English #license-unknown #arxiv-2006.02174 #region-us \n", "### Dataset Summary\n\n\nCompGuessWhat?! is an instance of a multi-task framework for evaluating the quality of learned neural representations,\nin particular concerning attribute grounding. Use this dataset if you want to use the set of games whose reference\nscene is an image in VisualGenome. Visit the website for more details: URL", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### compguesswhat-original\n\n\n* Size of downloaded dataset files: 107.21 MB\n* Size of the generated dataset: 174.37 MB\n* Total amount of disk used: 281.57 MB\n\n\nAn example of 'validation' looks as follows.", "#### compguesswhat-zero\\_shot\n\n\n* Size of downloaded dataset files: 4.84 MB\n* Size of the generated dataset: 96.74 MB\n* Total amount of disk used: 101.59 MB\n\n\nAn example of 'nd\\_valid' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### compguesswhat-original\n\n\n* 'id': a 'int32' feature.\n* 'target\\_id': a 'int32' feature.\n* 'timestamp': a 'string' feature.\n* 'status': a 'string' feature.\n* 'id': a 'int32' feature.\n* 'file\\_name': a 'string' feature.\n* 'flickr\\_url': a 'string' feature.\n* 'coco\\_url': a 'string' feature.\n* 'height': a 'int32' feature.\n* 'width': a 'int32' feature.\n* 'width': a 'int32' feature.\n* 'height': a 'int32' feature.\n* 'url': a 'string' feature.\n* 'coco\\_id': a 'int32' feature.\n* 'flickr\\_id': a 'string' feature.\n* 'image\\_id': a 'string' feature.\n* 'qas': a dictionary feature containing:\n\t+ 'question': a 'string' feature.\n\t+ 'answer': a 'string' feature.\n\t+ 'id': a 'int32' feature.\n* 'objects': a dictionary feature containing:\n\t+ 'id': a 'int32' feature.\n\t+ 'bbox': a 'list' of 'float32' features.\n\t+ 'category': a 'string' feature.\n\t+ 'area': a 'float32' feature.\n\t+ 'category\\_id': a 'int32' feature.\n\t+ 'segment': a dictionary feature containing:\n\t\t- 'feature': a 'float32' feature.", "#### compguesswhat-zero\\_shot\n\n\n* 'id': a 'int32' feature.\n* 'target\\_id': a 'string' feature.\n* 'status': a 'string' feature.\n* 'id': a 'int32' feature.\n* 'file\\_name': a 'string' feature.\n* 'coco\\_url': a 'string' feature.\n* 'height': a 'int32' feature.\n* 'width': a 'int32' feature.\n* 'license': a 'int32' feature.\n* 'open\\_images\\_id': a 'string' feature.\n* 'date\\_captured': a 'string' feature.\n* 'objects': a dictionary feature containing:\n\t+ 'id': a 'string' feature.\n\t+ 'bbox': a 'list' of 'float32' features.\n\t+ 'category': a 'string' feature.\n\t+ 'area': a 'float32' feature.\n\t+ 'category\\_id': a 'int32' feature.\n\t+ 'IsOccluded': a 'int32' feature.\n\t+ 'IsTruncated': a 'int32' feature.\n\t+ 'segment': a dictionary feature containing:\n\t\t- 'MaskPath': a 'string' feature.\n\t\t- 'LabelName': a 'string' feature.\n\t\t- 'BoxID': a 'string' feature.\n\t\t- 'BoxXMin': a 'string' feature.\n\t\t- 'BoxXMax': a 'string' feature.\n\t\t- 'BoxYMin': a 'string' feature.\n\t\t- 'BoxYMax': a 'string' feature.\n\t\t- 'PredictedIoU': a 'string' feature.\n\t\t- 'Clicks': a 'string' feature.", "### Data Splits", "#### compguesswhat-original", "#### compguesswhat-zero\\_shot\n\n\n\nDataset 
Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @aleSuglia, @lhoestq for adding this dataset." ]
[ 112, 75, 10, 11, 6, 58, 62, 17, 389, 413, 5, 8, 17, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 27 ]
[ "passage: TAGS\n#task_categories-visual-question-answering #task_ids-visual-question-answering #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|other-guesswhat #language-English #license-unknown #arxiv-2006.02174 #region-us \n### Dataset Summary\n\n\nCompGuessWhat?! is an instance of a multi-task framework for evaluating the quality of learned neural representations,\nin particular concerning attribute grounding. Use this dataset if you want to use the set of games whose reference\nscene is an image in VisualGenome. Visit the website for more details: URL### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### compguesswhat-original\n\n\n* Size of downloaded dataset files: 107.21 MB\n* Size of the generated dataset: 174.37 MB\n* Total amount of disk used: 281.57 MB\n\n\nAn example of 'validation' looks as follows.#### compguesswhat-zero\\_shot\n\n\n* Size of downloaded dataset files: 4.84 MB\n* Size of the generated dataset: 96.74 MB\n* Total amount of disk used: 101.59 MB\n\n\nAn example of 'nd\\_valid' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.", "passage: #### compguesswhat-original\n\n\n* 'id': a 'int32' feature.\n* 'target\\_id': a 'int32' feature.\n* 'timestamp': a 'string' feature.\n* 'status': a 'string' feature.\n* 'id': a 'int32' feature.\n* 'file\\_name': a 'string' feature.\n* 'flickr\\_url': a 'string' feature.\n* 'coco\\_url': a 'string' feature.\n* 'height': a 'int32' feature.\n* 'width': a 'int32' feature.\n* 'width': a 'int32' feature.\n* 'height': a 'int32' feature.\n* 'url': a 'string' feature.\n* 'coco\\_id': a 'int32' feature.\n* 'flickr\\_id': a 'string' feature.\n* 'image\\_id': a 'string' feature.\n* 'qas': a dictionary feature containing:\n\t+ 'question': a 'string' feature.\n\t+ 'answer': a 'string' feature.\n\t+ 'id': a 'int32' feature.\n* 'objects': a dictionary feature containing:\n\t+ 'id': a 'int32' feature.\n\t+ 'bbox': a 'list' of 'float32' features.\n\t+ 'category': a 'string' feature.\n\t+ 'area': a 'float32' feature.\n\t+ 'category\\_id': a 'int32' feature.\n\t+ 'segment': a dictionary feature containing:\n\t\t- 'feature': a 'float32' feature.#### compguesswhat-zero\\_shot\n\n\n* 'id': a 'int32' feature.\n* 'target\\_id': a 'string' feature.\n* 'status': a 'string' feature.\n* 'id': a 'int32' feature.\n* 'file\\_name': a 'string' feature.\n* 'coco\\_url': a 'string' feature.\n* 'height': a 'int32' feature.\n* 'width': a 'int32' feature.\n* 'license': a 'int32' feature.\n* 'open\\_images\\_id': a 'string' feature.\n* 'date\\_captured': a 'string' feature.\n* 'objects': a dictionary feature containing:\n\t+ 'id': a 'string' feature.\n\t+ 'bbox': a 'list' of 'float32' features.\n\t+ 'category': a 'string' feature.\n\t+ 'area': a 'float32' feature.\n\t+ 'category\\_id': a 'int32' feature.\n\t+ 'IsOccluded': a 'int32' feature.\n\t+ 'IsTruncated': a 'int32' feature.\n\t+ 'segment': a dictionary feature containing:\n\t\t- 'MaskPath': a 'string' feature.\n\t\t- 'LabelName': a 'string' feature.\n\t\t- 'BoxID': a 'string' feature.\n\t\t- 'BoxXMin': a 'string' feature.\n\t\t- 'BoxXMax': a 'string' feature.\n\t\t- 'BoxYMin': a 'string' feature.\n\t\t- 'BoxYMax': a 'string' feature.\n\t\t- 'PredictedIoU': a 'string' feature.\n\t\t- 'Clicks': a 'string' feature.### Data Splits#### compguesswhat-original#### compguesswhat-zero\\_shot\n\n\n\nDataset 
Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?" ]
075e1c601ff242f9760ca85cf68b5ec41963d4e6
# Dataset Card for Conceptnet5 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/commonsense/conceptnet5/wiki - **Repository:** https://github.com/commonsense/conceptnet5/wiki - **Paper:** https://arxiv.org/abs/1612.03975 ### Dataset Summary ConceptNet is a multilingual knowledge base, representing words and phrases that people use and the common-sense relationships between them. The knowledge in ConceptNet is collected from a variety of resources, including crowd-sourced resources (such as Wiktionary and Open Mind Common Sense), games with a purpose (such as Verbosity and nadya.jp), and expert-created resources (such as WordNet and JMDict). You can browse what ConceptNet knows at http://conceptnet.io. This dataset is designed to provide training data for common-sense relationships pulled together from various sources. The dataset is multilingual. See language codes and language info here: https://github.com/commonsense/conceptnet5/wiki/Languages This dataset provides an interface for the conceptnet5 csv file, and some (but not all) of the raw text data used to build conceptnet5: omcsnet_sentences_free.txt, and omcsnet_sentences_more.txt. One use of this dataset would be to learn to extract the conceptnet relationship from the omcsnet sentences. Conceptnet5 has 34,074,917 relationships. Of those relationships, there are 2,176,099 surface text sentences related to those 2M entries. omcsnet_sentences_free has 898,161 lines. omcsnet_sentences_more has 2,001,736 lines. Original downloads are available here https://github.com/commonsense/conceptnet5/wiki/Downloads. For more information, see: https://github.com/commonsense/conceptnet5/wiki The omcsnet data comes with the following warning from the authors of the above site: Remember: this data comes from various forms of crowdsourcing. Sentences in these files are not necessarily true, useful, or appropriate. ### Languages en, fr, it, de, es, ru, pt, ja, nl, zh and others ## Dataset Structure ### Data Instances There are three configurations for the dataset: conceptnet5, omcs_sentences_free, omcs_sentences_more. Conceptnet5 defines: `` { 'sentence': ..., 'full_rel': ..., 'rel': ..., 'arg1': ..., 'arg2': ..., 'lang': ..., 'extra_info': ... 'weight': ... } `` The omcs text defines: `` { 'sentence': ..., 'raw_data': ... 'lang': ... } `` ### Data Fields For conceptnet5 configurations: * full_rel: the full relationship. e.g., /a/[/r/Antonym/,/c/en/able/,/c/en/cane/] * rel: the binary relationship. 
e.g., /r/Antonym * arg1: the first argument to the binary relationship. e.g., /c/en/able * arg2: the second argument to the binary relationship. e.g., /c/en/cane * lang: the language code. e.g., en, fr, etc. If the arg1 and arg2 are two different languages, then the form is lang1/lang2. * extra_info: a string that includes json data that has the dataset name, license type (mostly cc-4.0), contributor, etc. e.g., : {"dataset": "/d/verbosity", "license": "cc:by/4.0", "sources": [{"contributor": "/s/resource/verbosity"}], "surfaceEnd": "cane", "surfaceStart": "able", "surfaceText": "[[able]] is the opposite of [[cane]]", "weight": 0.299} * sentence: the sentence from which the relationship was extracted, if one exists, with brackets around the arg1 and arg2. e.g., [[able]] is the opposite of [[cane]] * weight: the weight assigned by the curators or automatically to the relationship, between 0.0 and 1.0, higher being more certain. For the omcs text configurations: * sentence: the raw sentence * raw_data: the raw tab-separated data of the form: id, text, curator_id, created_on, language_id, activity_id, and score. Most of this information was tied to older systems for entering the data, so it was not parsed into fields for the dataset. e.g., 1237278 someone can be at catch 10805 2006-11-14 17:56:49.70872-05 en 27 1 * lang: the language code ### Data Splits There are no splits. ## Dataset Creation ### Curation Rationale This dataset was gathered and created over many years for research in common sense reasoning. ### Source Data #### Initial Data Collection and Normalization Started as the Open Mind Common Sense project at MIT Media Lab in 1999. See https://en.wikipedia.org/wiki/Open_Mind_Common_Sense #### Who are the source language producers? Crowd-sourced. ### Annotations #### Annotation process Crowd-sourced template text, games, etc. #### Who are the annotators? Crowd-sourced. ### Personal and Sensitive Information Unknown, but likely there are names of famous individuals. ## Considerations for Using the Data ### Social Impact of Dataset The goal for the work is to help machines understand common sense. ### Discussion of Biases See the website and paper for efforts to minimize data bias, but please note that omcs_sentences_free and omcs_sentences_more are raw data entered by users and may very well have biased data. ### Other Known Limitations While the relationship dataset is large, the amount of actual sentences is limited. ## Additional Information ### Dataset Curators The authors of https://github.com/commonsense/conceptnet5/wiki and Luminoso. ### Licensing Information This work includes data from ConceptNet 5, which was compiled by the Commonsense Computing Initiative. ConceptNet 5 is freely available under the Creative Commons Attribution-ShareAlike license (CC BY SA 3.0) from http://conceptnet.io. The included data was created by contributors to Commonsense Computing projects, contributors to Wikimedia projects, DBPedia, OpenCyc, Games with a Purpose, Princeton University's WordNet, Francis Bond's Open Multilingual WordNet, and Jim Breen's JMDict. Credits and acknowledgements ConceptNet has been developed by: The MIT Media Lab, through various groups at different times: Commonsense Computing Software Agents Digital Intuition The Commonsense Computing Initiative, a worldwide collaboration with contributions from: National Taiwan University Universidade Federal de São Carlos Hokkaido University Tilburg University Nihon Unisys Labs Dentsu Inc. 
Kyoto University Yahoo Research Japan Luminoso Technologies, Inc. Significant amounts of data were imported from: WordNet, a project of Princeton University Open Multilingual WordNet, compiled by Francis Bond and Kyonghee Paik Wikipedia and Wiktionary, collaborative projects of the Wikimedia Foundation Luis von Ahn's "Games with a Purpose" JMDict, compiled by Jim Breen CC-CEDict, by MDBG The Unicode CLDR DBPedia Here is a short, incomplete list of people who have made significant contributions to the development of ConceptNet as a data resource, roughly in order of appearance: Push Singh Catherine Havasi Hugo Liu Hyemin Chung Robyn Speer Ken Arnold Yen-Ling Kuo Joshua Chin Joanna Lowry-Duda Robert Beaudoin Naoki Otani Vanya Cohen Licenses for included resources Commonsense Computing The Commonsense Computing project originated at the MIT Media Lab and expanded worldwide. Tens of thousands of contributors have taken some time to teach facts to computers. Their pseudonyms can be found in the "sources" list found in ConceptNet's raw data and in its API. Games with a Purpose Data collected from Verbosity, one of the CMU "Games with a Purpose", is used and released under ConceptNet's license, by permission from Luis von Ahn and Harshit Surana. Verbosity players are anonymous, so in the "sources" list, data from Verbosity is simply credited to the pseudonym "verbosity". Wikimedia projects ConceptNet uses data directly from Wiktionary, the free dictionary. It also uses data from Wikipedia, the free encyclopedia via DBPedia. Wiktionary and Wikipedia are collaborative projects, authored by their respective online communities. They are currently released under the Creative Commons Attribution-ShareAlike license. Wikimedia encourages giving attribution by providing links to the hosted pages that the data came from, and DBPedia asks for the same thing in turn. In addition to crediting the assertions that came from Wiktionary and DBPedia, we also provide "ExternalURL" edges pointing to the page that they came from. For example, the term /c/de/sprache has an ExternalURL link pointing to http://en.wiktionary.org/wiki/Sprache. Its list of individual contributors can be seen by following its "History" link. The URLs of links to DBPedia are the same as the resource names that DBPedia uses, encouraging interoperability with their linked data. WordNet WordNet is available under an unencumbered license: see http://wordnet.princeton.edu/wordnet/license/. Its text is reproduced below: WordNet Release 3.0 This software and database is being provided to you, the LICENSEE, by Princeton University under the following license. By obtaining, using and/or copying this software and database, you agree that you have read, understood, and will comply with these terms and conditions.: Permission to use, copy, modify and distribute this software and database and its documentation for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the following copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the software, database and documentation, including modifications that you make for internal use or for distribution. WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved. THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT- ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS. The name of Princeton University or Princeton may not be used in advertising or publicity pertaining to distribution of the software and/or database. Title to copyright in this software, database and any associated documentation shall at all times remain with Princeton University and LICENSEE agrees to preserve same. Open Multilingual WordNet Open Multilingual WordNet was compiled by Francis Bond, Kyonghee Paik, and Ryan Foster, from data provided by many multilingual WordNet projects. Here is the complete list of references to the projects that created the data. ### Citation Information ``` @paper{speer2017conceptnet, author = {Robyn Speer and Joshua Chin and Catherine Havasi}, title = {ConceptNet 5.5: An Open Multilingual Graph of General Knowledge}, conference = {AAAI Conference on Artificial Intelligence}, year = {2017}, pages = {4444--4451}, keywords = {ConceptNet; knowledge graph; word embeddings}, url = {http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14972} } ``` ### Contributions Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset.
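To make the field descriptions above concrete, here is a hedged sketch of streaming the large `conceptnet5` configuration and decoding the JSON carried in `extra_info`. The Hub repository ID (`conceptnet5`) is an assumption; streaming is used because this configuration holds roughly 34M rows (about 11.5 GB per the metadata below).

```
# Sketch: stream ConceptNet 5 assertions and decode the JSON in `extra_info`.
# Assumption: the dataset is hosted on the Hub under the ID "conceptnet5".
import json
from datasets import load_dataset

stream = load_dataset("conceptnet5", "conceptnet5", split="train", streaming=True)

for row in stream:
    # Keep only English antonym assertions that come with surface text.
    if row["rel"] == "/r/Antonym" and row["lang"] == "en" and row["sentence"]:
        info = json.loads(row["extra_info"])  # dataset, license, sources, ...
        print(row["arg1"], "<->", row["arg2"], "| weight:", row["weight"])
        print("  surface:", row["sentence"], "| source dataset:", info.get("dataset"))
        break
```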
conceptnet5
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10M<n<100M", "size_categories:1M<n<10M", "source_datasets:original", "language:de", "language:en", "language:es", "language:fr", "language:it", "language:ja", "language:nl", "language:pt", "language:ru", "language:zh", "license:cc-by-4.0", "arxiv:1612.03975", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "found"], "language": ["de", "en", "es", "fr", "it", "ja", "nl", "pt", "ru", "zh"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "10M<n<100M", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "paperswithcode_id": "conceptnet", "pretty_name": "Conceptnet5", "config_names": ["conceptnet5", "omcs_sentences_free", "omcs_sentences_more"], "dataset_info": [{"config_name": "conceptnet5", "features": [{"name": "sentence", "dtype": "string"}, {"name": "full_rel", "dtype": "string"}, {"name": "rel", "dtype": "string"}, {"name": "arg1", "dtype": "string"}, {"name": "arg2", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "extra_info", "dtype": "string"}, {"name": "weight", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 11493772756, "num_examples": 34074917}], "download_size": 1280623369, "dataset_size": 11493772756}, {"config_name": "omcs_sentences_free", "features": [{"name": "sentence", "dtype": "string"}, {"name": "raw_data", "dtype": "string"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 174810230, "num_examples": 898160}], "download_size": 72941617, "dataset_size": 174810230}, {"config_name": "omcs_sentences_more", "features": [{"name": "sentence", "dtype": "string"}, {"name": "raw_data", "dtype": "string"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 341421867, "num_examples": 2001735}], "download_size": 129630544, "dataset_size": 341421867}], "configs": [{"config_name": "conceptnet5", "data_files": [{"split": "train", "path": "conceptnet5/train-*"}], "default": true}, {"config_name": "omcs_sentences_free", "data_files": [{"split": "train", "path": "omcs_sentences_free/train-*"}]}, {"config_name": "omcs_sentences_more", "data_files": [{"split": "train", "path": "omcs_sentences_more/train-*"}]}]}
2024-02-08T12:07:58+00:00
[ "1612.03975" ]
[ "de", "en", "es", "fr", "it", "ja", "nl", "pt", "ru", "zh" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10M<n<100M #size_categories-1M<n<10M #source_datasets-original #language-German #language-English #language-Spanish #language-French #language-Italian #language-Japanese #language-Dutch #language-Portuguese #language-Russian #language-Chinese #license-cc-by-4.0 #arxiv-1612.03975 #region-us
# Dataset Card for Conceptnet5 ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL ### Dataset Summary ConceptNet is a multilingual knowledge base, representing words and phrases that people use and the common-sense relationships between them. The knowledge in ConceptNet is collected from a variety of resources, including crowd-sourced resources (such as Wiktionary and Open Mind Common Sense), games with a purpose (such as Verbosity and URL), and expert-created resources (such as WordNet and JMDict). You can browse what ConceptNet knows at URL. This dataset is designed to provide training data for common sense relationships pulls together from various sources. The dataset is multi-lingual. See langauge codes and language info here: URL This dataset provides an interface for the conceptnet5 csv file, and some (but not all) of the raw text data used to build conceptnet5: omcsnet_sentences_free.txt, and omcsnet_sentences_more.txt. One use of this dataset would be to learn to extract the conceptnet relationship from the omcsnet sentences. Conceptnet5 has 34,074,917 relationships. Of those relationships, there are 2,176,099 surface text sentences related to those 2M entries. omcsnet_sentences_free has 898,161 lines. omcsnet_sentences_more has 2,001,736 lines. Original downloads are available here URL For more information, see: URL The omcsnet data comes with the following warning from the authors of the above site: Remember: this data comes from various forms of crowdsourcing. Sentences in these files are not necessarily true, useful, or appropriate. ### Languages en, fr, it, de, es, ru, pt, ja, nl, zh and others ## Dataset Structure ### Data Instances There are three configurations for the dataset: conceptnet5, omcs_sentences_free, omcs_sentences_more. Conceptnet5 defines: '' { 'sentence': ..., 'full_rel': ..., 'rel': ..., 'arg1': ..., 'arg2': ..., 'lang': ..., 'extra_info': ... 'weight': ... } '' The omcs text defines: '' { 'sentence': ..., 'raw_data': ... 'weight': ... } '' ### Data Fields For conceptnet5 configurations: * full_rel: the full relationship. e.g., /a/[/r/Antonym/,/c/en/able/,/c/en/cane/] * rel: the binary relationship. e.g., /r/Antonym * arg1: the first argument to the binary relationship. e.g., /c/en/able * arg2: the second argument to the binary relationship. e.g., /c/en/cane * lang: the language code. e.g., en, fr, etc. If the arg1 and arg2 are two different languages, then the form os lang1/lang2. * extra_info: a string that includes json data that has the dataset name, license type (mostly cc-4.0), contributor, etc. e.g., : {"dataset": "/d/verbosity", "license": "cc:by/4.0", "sources": [{"contributor": "/s/resource/verbosity"}], "surfaceEnd": "cane", "surfaceStart": "able", "surfaceText": "[[able]] is the opposite of [[cane]]", "weight": 0.299} * sentence: the sentence from which the relationship was extracted, if one exists, with brackets around the arg1 and arg2. 
e.g., [[able]] is the opposite of [[cane]] * weight: the weight assigned by the curators or automatically to the relationship, between 1.0-0.0, higher being more certain. For the omcs text configurations: * sentence: the raw sentence * raw_data: the raw tab seperated data of the form, id, text, curator_id, created_on, lanugage_id, activity_id, and score. Most of this information was tied to older systems for entering the data os was not partsed into fields for the dataset. e.g., 1237278 someone can be at catch 10805 2006-11-14 17:56:49.70872-05 en 27 1 * lang: the language code ### Data Splits There are no splits. ## Dataset Creation ### Curation Rationale This dataset was gathered and created over many years for research in common sense reasoning. ### Source Data #### Initial Data Collection and Normalization Started as the Open Mind Common Sense project at MIT Media Lab in 1999. See URL #### Who are the source language producers? Crowd Sourced ### Annotations #### Annotation process Crowd Source template text, games, etc. #### Who are the annotators? Crowd sourced. ### Personal and Sensitive Information Unkown, but likely there are names of famous individuals. ## Considerations for Using the Data ### Social Impact of Dataset The goal for the work is to help machines understand common sense. ### Discussion of Biases See the website and paper for efforts to minimize data bias, but please note that omcs_sentences_free, omcs_sentences_more are raw data entered by users and may very well have biased data. ### Other Known Limitations While the relationship dataset is large, the amount of actual sentences is limited. ## Additional Information ### Dataset Curators The authors of URL and Luminoso. ### Licensing Information This work includes data from ConceptNet 5, which was compiled by the Commonsense Computing Initiative. ConceptNet 5 is freely available under the Creative Commons Attribution-ShareAlike license (CC BY SA 3.0) from URL. The included data was created by contributors to Commonsense Computing projects, contributors to Wikimedia projects, DBPedia, OpenCyc, Games with a Purpose, Princeton University's WordNet, Francis Bond's Open Multilingual WordNet, and Jim Breen's JMDict. Credits and acknowledgements ConceptNet has been developed by: The MIT Media Lab, through various groups at different times: Commonsense Computing Software Agents Digital Intuition The Commonsense Computing Initiative, a worldwide collaboration with contributions from: National Taiwan University Universidade Federal de São Carlos Hokkaido University Tilburg University Nihon Unisys Labs Dentsu Inc. Kyoto University Yahoo Research Japan Luminoso Technologies, Inc. Significant amounts of data were imported from: WordNet, a project of Princeton University Open Multilingual WordNet, compiled by Francis Bond and Kyonghee Paik Wikipedia and Wiktionary, collaborative projects of the Wikimedia Foundation Luis von Ahn's "Games with a Purpose" JMDict, compiled by Jim Breen CC-CEDict, by MDBG The Unicode CLDR DBPedia Here is a short, incomplete list of people who have made significant contributions to the development of ConceptNet as a data resource, roughly in order of appearance: Push Singh Catherine Havasi Hugo Liu Hyemin Chung Robyn Speer Ken Arnold Yen-Ling Kuo Joshua Chin Joanna Lowry-Duda Robert Beaudoin Naoki Otani Vanya Cohen Licenses for included resources Commonsense Computing The Commonsense Computing project originated at the MIT Media Lab and expanded worldwide. 
Tens of thousands of contributors have taken some time to teach facts to computers. Their pseudonyms can be found in the "sources" list found in ConceptNet's raw data and in its API.

Games with a Purpose

Data collected from Verbosity, one of the CMU "Games with a Purpose", is used and released under ConceptNet's license, by permission from Luis von Ahn and Harshit Surana.

Verbosity players are anonymous, so in the "sources" list, data from Verbosity is simply credited to the pseudonym "verbosity".

Wikimedia projects

ConceptNet uses data directly from Wiktionary, the free dictionary. It also uses data from Wikipedia, the free encyclopedia, via DBPedia.

Wiktionary and Wikipedia are collaborative projects, authored by their respective online communities. They are currently released under the Creative Commons Attribution-ShareAlike license.

Wikimedia encourages giving attribution by providing links to the hosted pages that the data came from, and DBPedia asks for the same thing in turn. In addition to crediting the assertions that came from Wiktionary and DBPedia, we also provide "ExternalURL" edges pointing to the page that they came from. For example, the term /c/de/sprache has an ExternalURL link pointing to URL. Its list of individual contributors can be seen by following its "History" link.

The URLs of links to DBPedia are the same as the resource names that DBPedia uses, encouraging interoperability with their linked data.

WordNet

WordNet is available under an unencumbered license: see URL. Its text is reproduced below:

WordNet Release 3.0

This software and database is being provided to you, the LICENSEE, by Princeton University under the following license. By obtaining, using and/or copying this software and database, you agree that you have read, understood, and will comply with these terms and conditions:

Permission to use, copy, modify and distribute this software and database and its documentation for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the following copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the software, database and documentation, including modifications that you make for internal use or for distribution.

WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.

THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.

The name of Princeton University or Princeton may not be used in advertising or publicity pertaining to distribution of the software and/or database. Title to copyright in this software, database and any associated documentation shall at all times remain with Princeton University and LICENSEE agrees to preserve same.

Open Multilingual WordNet

Open Multilingual WordNet was compiled by Francis Bond, Kyonghee Paik, and Ryan Foster, from data provided by many multilingual WordNet projects. Here is the complete list of references to the projects that created the data.

### Contributions

Thanks to @ontocord for adding this dataset.
[ "# Dataset Card for Conceptnet5", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "### Dataset Summary\n\nConceptNet is a multilingual knowledge base, representing words and\nphrases that people use and the common-sense relationships between\nthem. The knowledge in ConceptNet is collected from a variety of\nresources, including crowd-sourced resources (such as Wiktionary and\nOpen Mind Common Sense), games with a purpose (such as Verbosity and\nURL), and expert-created resources (such as WordNet and JMDict).\n\nYou can browse what ConceptNet knows at URL.\n\nThis dataset is designed to provide training data\nfor common sense relationships pulls together from various sources.\n\nThe dataset is multi-lingual. See langauge codes and language info\nhere: URL\n\n\nThis dataset provides an interface for the conceptnet5 csv file, and\nsome (but not all) of the raw text data used to build conceptnet5:\nomcsnet_sentences_free.txt, and omcsnet_sentences_more.txt.\n\nOne use of this dataset would be to learn to extract the conceptnet\nrelationship from the omcsnet sentences.\n\nConceptnet5 has 34,074,917 relationships. Of those relationships,\nthere are 2,176,099 surface text sentences related to those 2M\nentries.\n\nomcsnet_sentences_free has 898,161 lines. omcsnet_sentences_more has\n2,001,736 lines.\n\nOriginal downloads are available here\nURL For more\ninformation, see: URL\n\nThe omcsnet data comes with the following warning from the authors of\nthe above site: \n\nRemember: this data comes from various forms of\ncrowdsourcing. Sentences in these files are not necessarily true,\nuseful, or appropriate.", "### Languages\nen, fr, it, de, es, ru, pt, ja, nl, zh and others", "## Dataset Structure", "### Data Instances\n\nThere are three configurations for the dataset: conceptnet5, omcs_sentences_free, omcs_sentences_more. \n\nConceptnet5 defines:\n\n''\n{\n\t'sentence': ...,\n\t'full_rel': ...,\n\t'rel': ...,\n\t'arg1': ...,\n\t'arg2': ...,\n\t'lang': ...,\n\t'extra_info': ...\n\t'weight': ...\n}\n''\n\nThe omcs text defines:\n''\n{\n\t'sentence': ...,\n\t'raw_data': ...\n\t'weight': ...\n}\n''", "### Data Fields\n\nFor conceptnet5 configurations:\n* full_rel: the full relationship. e.g., /a/[/r/Antonym/,/c/en/able/,/c/en/cane/] \n* rel: the binary relationship. e.g., /r/Antonym \n* arg1: the first argument to the binary relationship. e.g., /c/en/able\n* arg2: the second argument to the binary relationship. e.g., /c/en/cane \n* lang: the language code. e.g., en, fr, etc. If the arg1 and arg2 are two different languages, then the form os lang1/lang2.\n* extra_info: a string that includes json data that has the dataset name, license type (mostly cc-4.0), contributor, etc. 
e.g., : {\"dataset\": \"/d/verbosity\", \"license\": \"cc:by/4.0\", \"sources\": [{\"contributor\": \"/s/resource/verbosity\"}], \"surfaceEnd\": \"cane\", \"surfaceStart\": \"able\", \"surfaceText\": \"[[able]] is the opposite of [[cane]]\", \"weight\": 0.299}\n* sentence: the sentence from which the relationship was extracted, if one exists, with brackets around the arg1 and arg2. e.g., [[able]] is the opposite of [[cane]]\n* weight: the weight assigned by the curators or automatically to the relationship, between 1.0-0.0, higher being more certain. \n\nFor the omcs text configurations:\n\n* sentence: the raw sentence\n* raw_data: the raw tab seperated data of the form, id, text, curator_id, created_on, lanugage_id, activity_id, and score. Most of this information was tied to older systems for entering the data os was not partsed into fields for the dataset. e.g., 1237278 someone can be at catch 10805 2006-11-14 17:56:49.70872-05 en 27 1\n* lang: the language code", "### Data Splits\n\nThere are no splits.", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was gathered and created over many years for research in common sense reasoning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nStarted as the Open Mind Common Sense project at MIT Media Lab in 1999. See URL", "#### Who are the source language producers?\n\nCrowd Sourced", "### Annotations", "#### Annotation process\n\nCrowd Source template text, games, etc.", "#### Who are the annotators?\n\nCrowd sourced.", "### Personal and Sensitive Information\n\nUnkown, but likely there are names of famous individuals.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe goal for the work is to help machines understand common sense.", "### Discussion of Biases\n\nSee the website and paper for efforts to minimize data bias, but\nplease note that omcs_sentences_free, omcs_sentences_more are raw data\nentered by users and may very well have biased data.", "### Other Known Limitations\n\nWhile the relationship dataset is large, the amount of actual sentences is limited.", "## Additional Information", "### Dataset Curators\n\nThe authors of URL and Luminoso.", "### Licensing Information\n\nThis work includes data from ConceptNet 5, which was compiled by the\nCommonsense Computing Initiative. 
ConceptNet 5 is freely available under\nthe Creative Commons Attribution-ShareAlike license (CC BY SA 3.0) from\nURL.\n\nThe included data was created by contributors to Commonsense Computing\nprojects, contributors to Wikimedia projects, DBPedia, OpenCyc, Games\nwith a Purpose, Princeton University's WordNet, Francis Bond's Open\nMultilingual WordNet, and Jim Breen's JMDict.\nCredits and acknowledgements\nConceptNet has been developed by:\n\nThe MIT Media Lab, through various groups at different times:\n\nCommonsense Computing\nSoftware Agents\nDigital Intuition\nThe Commonsense Computing Initiative, a worldwide collaboration with contributions from:\n\nNational Taiwan University\nUniversidade Federal de São Carlos\nHokkaido University\nTilburg University\nNihon Unisys Labs\nDentsu Inc.\nKyoto University\nYahoo Research Japan\nLuminoso Technologies, Inc.\n\nSignificant amounts of data were imported from:\n\nWordNet, a project of Princeton University\nOpen Multilingual WordNet, compiled by Francis Bond and Kyonghee Paik\nWikipedia and Wiktionary, collaborative projects of the Wikimedia Foundation\nLuis von Ahn's \"Games with a Purpose\"\nJMDict, compiled by Jim Breen\nCC-CEDict, by MDBG\nThe Unicode CLDR\nDBPedia\nHere is a short, incomplete list of people who have made significant contributions to the development of ConceptNet as a data resource, roughly in order of appearance:\n\nPush Singh\nCatherine Havasi\nHugo Liu\nHyemin Chung\nRobyn Speer\nKen Arnold\nYen-Ling Kuo\nJoshua Chin\nJoanna Lowry-Duda\nRobert Beaudoin\nNaoki Otani\nVanya Cohen\nLicenses for included resources\nCommonsense Computing\nThe Commonsense Computing project originated at the MIT Media Lab and expanded worldwide. Tens of thousands of contributors have taken some time to teach facts to computers. Their pseudonyms can be found in the \"sources\" list found in ConceptNet's raw data and in its API.\n\nGames with a Purpose\nData collected from Verbosity, one of the CMU \"Games with a Purpose\", is used and released under ConceptNet's license, by permission from Luis von Ahn and Harshit Surana.\n\nVerbosity players are anonymous, so in the \"sources\" list, data from Verbosity is simply credited to the pseudonym \"verbosity\".\n\nWikimedia projects\nConceptNet uses data directly from Wiktionary, the free dictionary. It also uses data from Wikipedia, the free encyclopedia via DBPedia.\n\nWiktionary and Wikipedia are collaborative projects, authored by their respective online communities. They are currently released under the Creative Commons Attribution-ShareAlike license.\n\nWikimedia encourages giving attribution by providing links to the hosted pages that the data came from, and DBPedia asks for the same thing in turn. In addition to crediting the assertions that came from Wiktionary and DBPedia, we also provide \"ExternalURL\" edges pointing to the page that they came from. For example, the term /c/de/sprache has an ExternalURL link pointing to URL Its list of individual contributors can be seen by following its \"History\" link.\n\nThe URLs of links to DBPedia are the same as the resource names that DBPedia uses, encouraging interoperability with their linked data.\n\nWordNet\nWordNet is available under an unencumbered license: see URL Its text is reproduced below:\n\nWordNet Release 3.0\n\nThis software and database is being provided to you, the LICENSEE, by Princeton University under the following license. 
By obtaining, using and/or copying this software and database, you agree that you have read, understood, and will comply with these terms and conditions.:\n\nPermission to use, copy, modify and distribute this software and database and its documentation for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the following copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the software, database and documentation, including modifications that you make for internal use or for distribution.\n\nWordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.\n\nTHIS SOFTWARE AND DATABASE IS PROVIDED \"AS IS\" AND PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT- ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.\n\nThe name of Princeton University or Princeton may not be used in advertising or publicity pertaining to distribution of the software and/or database. Title to copyright in this software, database and any associated documentation shall at all times remain with Princeton University and LICENSEE agrees to preserve same.\n\nOpen Multilingual WordNet\nOpen Multilingual WordNet was compiled by Francis Bond, Kyonghee Paik, and Ryan Foster, from data provided by many multilingual WordNet projects. Here is the complete list of references to the projects that created the data.", "### Contributions\n\nThanks to @ontocord for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10M<n<100M #size_categories-1M<n<10M #source_datasets-original #language-German #language-English #language-Spanish #language-French #language-Italian #language-Japanese #language-Dutch #language-Portuguese #language-Russian #language-Chinese #license-cc-by-4.0 #arxiv-1612.03975 #region-us \n", "# Dataset Card for Conceptnet5", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "### Dataset Summary\n\nConceptNet is a multilingual knowledge base, representing words and\nphrases that people use and the common-sense relationships between\nthem. The knowledge in ConceptNet is collected from a variety of\nresources, including crowd-sourced resources (such as Wiktionary and\nOpen Mind Common Sense), games with a purpose (such as Verbosity and\nURL), and expert-created resources (such as WordNet and JMDict).\n\nYou can browse what ConceptNet knows at URL.\n\nThis dataset is designed to provide training data\nfor common sense relationships pulls together from various sources.\n\nThe dataset is multi-lingual. See langauge codes and language info\nhere: URL\n\n\nThis dataset provides an interface for the conceptnet5 csv file, and\nsome (but not all) of the raw text data used to build conceptnet5:\nomcsnet_sentences_free.txt, and omcsnet_sentences_more.txt.\n\nOne use of this dataset would be to learn to extract the conceptnet\nrelationship from the omcsnet sentences.\n\nConceptnet5 has 34,074,917 relationships. Of those relationships,\nthere are 2,176,099 surface text sentences related to those 2M\nentries.\n\nomcsnet_sentences_free has 898,161 lines. omcsnet_sentences_more has\n2,001,736 lines.\n\nOriginal downloads are available here\nURL For more\ninformation, see: URL\n\nThe omcsnet data comes with the following warning from the authors of\nthe above site: \n\nRemember: this data comes from various forms of\ncrowdsourcing. Sentences in these files are not necessarily true,\nuseful, or appropriate.", "### Languages\nen, fr, it, de, es, ru, pt, ja, nl, zh and others", "## Dataset Structure", "### Data Instances\n\nThere are three configurations for the dataset: conceptnet5, omcs_sentences_free, omcs_sentences_more. \n\nConceptnet5 defines:\n\n''\n{\n\t'sentence': ...,\n\t'full_rel': ...,\n\t'rel': ...,\n\t'arg1': ...,\n\t'arg2': ...,\n\t'lang': ...,\n\t'extra_info': ...\n\t'weight': ...\n}\n''\n\nThe omcs text defines:\n''\n{\n\t'sentence': ...,\n\t'raw_data': ...\n\t'weight': ...\n}\n''", "### Data Fields\n\nFor conceptnet5 configurations:\n* full_rel: the full relationship. e.g., /a/[/r/Antonym/,/c/en/able/,/c/en/cane/] \n* rel: the binary relationship. e.g., /r/Antonym \n* arg1: the first argument to the binary relationship. 
e.g., /c/en/able\n* arg2: the second argument to the binary relationship. e.g., /c/en/cane \n* lang: the language code. e.g., en, fr, etc. If the arg1 and arg2 are two different languages, then the form os lang1/lang2.\n* extra_info: a string that includes json data that has the dataset name, license type (mostly cc-4.0), contributor, etc. e.g., : {\"dataset\": \"/d/verbosity\", \"license\": \"cc:by/4.0\", \"sources\": [{\"contributor\": \"/s/resource/verbosity\"}], \"surfaceEnd\": \"cane\", \"surfaceStart\": \"able\", \"surfaceText\": \"[[able]] is the opposite of [[cane]]\", \"weight\": 0.299}\n* sentence: the sentence from which the relationship was extracted, if one exists, with brackets around the arg1 and arg2. e.g., [[able]] is the opposite of [[cane]]\n* weight: the weight assigned by the curators or automatically to the relationship, between 1.0-0.0, higher being more certain. \n\nFor the omcs text configurations:\n\n* sentence: the raw sentence\n* raw_data: the raw tab seperated data of the form, id, text, curator_id, created_on, lanugage_id, activity_id, and score. Most of this information was tied to older systems for entering the data os was not partsed into fields for the dataset. e.g., 1237278 someone can be at catch 10805 2006-11-14 17:56:49.70872-05 en 27 1\n* lang: the language code", "### Data Splits\n\nThere are no splits.", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was gathered and created over many years for research in common sense reasoning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nStarted as the Open Mind Common Sense project at MIT Media Lab in 1999. See URL", "#### Who are the source language producers?\n\nCrowd Sourced", "### Annotations", "#### Annotation process\n\nCrowd Source template text, games, etc.", "#### Who are the annotators?\n\nCrowd sourced.", "### Personal and Sensitive Information\n\nUnkown, but likely there are names of famous individuals.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe goal for the work is to help machines understand common sense.", "### Discussion of Biases\n\nSee the website and paper for efforts to minimize data bias, but\nplease note that omcs_sentences_free, omcs_sentences_more are raw data\nentered by users and may very well have biased data.", "### Other Known Limitations\n\nWhile the relationship dataset is large, the amount of actual sentences is limited.", "## Additional Information", "### Dataset Curators\n\nThe authors of URL and Luminoso.", "### Licensing Information\n\nThis work includes data from ConceptNet 5, which was compiled by the\nCommonsense Computing Initiative. 
ConceptNet 5 is freely available under\nthe Creative Commons Attribution-ShareAlike license (CC BY SA 3.0) from\nURL.\n\nThe included data was created by contributors to Commonsense Computing\nprojects, contributors to Wikimedia projects, DBPedia, OpenCyc, Games\nwith a Purpose, Princeton University's WordNet, Francis Bond's Open\nMultilingual WordNet, and Jim Breen's JMDict.\nCredits and acknowledgements\nConceptNet has been developed by:\n\nThe MIT Media Lab, through various groups at different times:\n\nCommonsense Computing\nSoftware Agents\nDigital Intuition\nThe Commonsense Computing Initiative, a worldwide collaboration with contributions from:\n\nNational Taiwan University\nUniversidade Federal de São Carlos\nHokkaido University\nTilburg University\nNihon Unisys Labs\nDentsu Inc.\nKyoto University\nYahoo Research Japan\nLuminoso Technologies, Inc.\n\nSignificant amounts of data were imported from:\n\nWordNet, a project of Princeton University\nOpen Multilingual WordNet, compiled by Francis Bond and Kyonghee Paik\nWikipedia and Wiktionary, collaborative projects of the Wikimedia Foundation\nLuis von Ahn's \"Games with a Purpose\"\nJMDict, compiled by Jim Breen\nCC-CEDict, by MDBG\nThe Unicode CLDR\nDBPedia\nHere is a short, incomplete list of people who have made significant contributions to the development of ConceptNet as a data resource, roughly in order of appearance:\n\nPush Singh\nCatherine Havasi\nHugo Liu\nHyemin Chung\nRobyn Speer\nKen Arnold\nYen-Ling Kuo\nJoshua Chin\nJoanna Lowry-Duda\nRobert Beaudoin\nNaoki Otani\nVanya Cohen\nLicenses for included resources\nCommonsense Computing\nThe Commonsense Computing project originated at the MIT Media Lab and expanded worldwide. Tens of thousands of contributors have taken some time to teach facts to computers. Their pseudonyms can be found in the \"sources\" list found in ConceptNet's raw data and in its API.\n\nGames with a Purpose\nData collected from Verbosity, one of the CMU \"Games with a Purpose\", is used and released under ConceptNet's license, by permission from Luis von Ahn and Harshit Surana.\n\nVerbosity players are anonymous, so in the \"sources\" list, data from Verbosity is simply credited to the pseudonym \"verbosity\".\n\nWikimedia projects\nConceptNet uses data directly from Wiktionary, the free dictionary. It also uses data from Wikipedia, the free encyclopedia via DBPedia.\n\nWiktionary and Wikipedia are collaborative projects, authored by their respective online communities. They are currently released under the Creative Commons Attribution-ShareAlike license.\n\nWikimedia encourages giving attribution by providing links to the hosted pages that the data came from, and DBPedia asks for the same thing in turn. In addition to crediting the assertions that came from Wiktionary and DBPedia, we also provide \"ExternalURL\" edges pointing to the page that they came from. For example, the term /c/de/sprache has an ExternalURL link pointing to URL Its list of individual contributors can be seen by following its \"History\" link.\n\nThe URLs of links to DBPedia are the same as the resource names that DBPedia uses, encouraging interoperability with their linked data.\n\nWordNet\nWordNet is available under an unencumbered license: see URL Its text is reproduced below:\n\nWordNet Release 3.0\n\nThis software and database is being provided to you, the LICENSEE, by Princeton University under the following license. 
By obtaining, using and/or copying this software and database, you agree that you have read, understood, and will comply with these terms and conditions.:\n\nPermission to use, copy, modify and distribute this software and database and its documentation for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the following copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the software, database and documentation, including modifications that you make for internal use or for distribution.\n\nWordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.\n\nTHIS SOFTWARE AND DATABASE IS PROVIDED \"AS IS\" AND PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT- ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.\n\nThe name of Princeton University or Princeton may not be used in advertising or publicity pertaining to distribution of the software and/or database. Title to copyright in this software, database and any associated documentation shall at all times remain with Princeton University and LICENSEE agrees to preserve same.\n\nOpen Multilingual WordNet\nOpen Multilingual WordNet was compiled by Francis Bond, Kyonghee Paik, and Ryan Foster, from data provided by many multilingual WordNet projects. Here is the complete list of references to the projects that created the data.", "### Contributions\n\nThanks to @ontocord for adding this dataset." ]
[ 183, 8, 120, 18, 359, 27, 6, 129, 484, 11, 5, 27, 4, 27, 14, 5, 15, 14, 21, 8, 20, 57, 24, 5, 15, 1227, 16 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10M<n<100M #size_categories-1M<n<10M #source_datasets-original #language-German #language-English #language-Spanish #language-French #language-Italian #language-Japanese #language-Dutch #language-Portuguese #language-Russian #language-Chinese #license-cc-by-4.0 #arxiv-1612.03975 #region-us \n# Dataset Card for Conceptnet5## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "passage: ### Dataset Summary\n\nConceptNet is a multilingual knowledge base, representing words and\nphrases that people use and the common-sense relationships between\nthem. The knowledge in ConceptNet is collected from a variety of\nresources, including crowd-sourced resources (such as Wiktionary and\nOpen Mind Common Sense), games with a purpose (such as Verbosity and\nURL), and expert-created resources (such as WordNet and JMDict).\n\nYou can browse what ConceptNet knows at URL.\n\nThis dataset is designed to provide training data\nfor common sense relationships pulls together from various sources.\n\nThe dataset is multi-lingual. See langauge codes and language info\nhere: URL\n\n\nThis dataset provides an interface for the conceptnet5 csv file, and\nsome (but not all) of the raw text data used to build conceptnet5:\nomcsnet_sentences_free.txt, and omcsnet_sentences_more.txt.\n\nOne use of this dataset would be to learn to extract the conceptnet\nrelationship from the omcsnet sentences.\n\nConceptnet5 has 34,074,917 relationships. Of those relationships,\nthere are 2,176,099 surface text sentences related to those 2M\nentries.\n\nomcsnet_sentences_free has 898,161 lines. omcsnet_sentences_more has\n2,001,736 lines.\n\nOriginal downloads are available here\nURL For more\ninformation, see: URL\n\nThe omcsnet data comes with the following warning from the authors of\nthe above site: \n\nRemember: this data comes from various forms of\ncrowdsourcing. Sentences in these files are not necessarily true,\nuseful, or appropriate.### Languages\nen, fr, it, de, es, ru, pt, ja, nl, zh and others## Dataset Structure### Data Instances\n\nThere are three configurations for the dataset: conceptnet5, omcs_sentences_free, omcs_sentences_more. \n\nConceptnet5 defines:\n\n''\n{\n\t'sentence': ...,\n\t'full_rel': ...,\n\t'rel': ...,\n\t'arg1': ...,\n\t'arg2': ...,\n\t'lang': ...,\n\t'extra_info': ...\n\t'weight': ...\n}\n''\n\nThe omcs text defines:\n''\n{\n\t'sentence': ...,\n\t'raw_data': ...\n\t'weight': ...\n}\n''", "passage: ### Data Fields\n\nFor conceptnet5 configurations:\n* full_rel: the full relationship. e.g., /a/[/r/Antonym/,/c/en/able/,/c/en/cane/] \n* rel: the binary relationship. e.g., /r/Antonym \n* arg1: the first argument to the binary relationship. 
e.g., /c/en/able\n* arg2: the second argument to the binary relationship. e.g., /c/en/cane \n* lang: the language code. e.g., en, fr, etc. If the arg1 and arg2 are two different languages, then the form os lang1/lang2.\n* extra_info: a string that includes json data that has the dataset name, license type (mostly cc-4.0), contributor, etc. e.g., : {\"dataset\": \"/d/verbosity\", \"license\": \"cc:by/4.0\", \"sources\": [{\"contributor\": \"/s/resource/verbosity\"}], \"surfaceEnd\": \"cane\", \"surfaceStart\": \"able\", \"surfaceText\": \"[[able]] is the opposite of [[cane]]\", \"weight\": 0.299}\n* sentence: the sentence from which the relationship was extracted, if one exists, with brackets around the arg1 and arg2. e.g., [[able]] is the opposite of [[cane]]\n* weight: the weight assigned by the curators or automatically to the relationship, between 1.0-0.0, higher being more certain. \n\nFor the omcs text configurations:\n\n* sentence: the raw sentence\n* raw_data: the raw tab seperated data of the form, id, text, curator_id, created_on, lanugage_id, activity_id, and score. Most of this information was tied to older systems for entering the data os was not partsed into fields for the dataset. e.g., 1237278 someone can be at catch 10805 2006-11-14 17:56:49.70872-05 en 27 1\n* lang: the language code### Data Splits\n\nThere are no splits.## Dataset Creation### Curation Rationale\n\nThis dataset was gathered and created over many years for research in common sense reasoning.### Source Data#### Initial Data Collection and Normalization\n\nStarted as the Open Mind Common Sense project at MIT Media Lab in 1999. See URL#### Who are the source language producers?\n\nCrowd Sourced### Annotations#### Annotation process\n\nCrowd Source template text, games, etc.#### Who are the annotators?\n\nCrowd sourced.### Personal and Sensitive Information\n\nUnkown, but likely there are names of famous individuals.## Considerations for Using the Data### Social Impact of Dataset\n\nThe goal for the work is to help machines understand common sense.### Discussion of Biases\n\nSee the website and paper for efforts to minimize data bias, but\nplease note that omcs_sentences_free, omcs_sentences_more are raw data\nentered by users and may very well have biased data.### Other Known Limitations\n\nWhile the relationship dataset is large, the amount of actual sentences is limited.## Additional Information### Dataset Curators\n\nThe authors of URL and Luminoso." ]
ce1947a0302ee3ede3bf53750c16f7b6be5daf0e
# Dataset Card for "conll2000" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.clips.uantwerpen.be/conll2000/chunking/](https://www.clips.uantwerpen.be/conll2000/chunking/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3.48 MB - **Size of the generated dataset:** 6.55 MB - **Total amount of disk used:** 10.03 MB ### Dataset Summary Text chunking consists of dividing a text in syntactically correlated parts of words. For example, the sentence He reckons the current account deficit will narrow to only # 1.8 billion in September . can be divided as follows: [NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP to ] [NP only # 1.8 billion ] [PP in ] [NP September ] . Text chunking is an intermediate step towards full parsing. It was the shared task for CoNLL-2000. Training and test data for this task is available. This data consists of the same partitions of the Wall Street Journal corpus (WSJ) as the widely used data for noun phrase chunking: sections 15-18 as training data (211727 tokens) and section 20 as test data (47377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by Sabine Buchholz from Tilburg University, The Netherlands. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### conll2000 - **Size of downloaded dataset files:** 3.48 MB - **Size of the generated dataset:** 6.55 MB - **Total amount of disk used:** 10.03 MB An example of 'train' looks as follows. 
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### conll2000

- **Size of downloaded dataset files:** 3.48 MB
- **Size of the generated dataset:** 6.55 MB
- **Total amount of disk used:** 10.03 MB

An example of 'train' looks as follows.

```
This example was too long and was cropped:

{
    "chunk_tags": [11, 13, 11, 12, 21, 22, 22, 22, 22, 11, 12, 12, 17, 11, 12, 13, 11, 0, 1, 13, 11, 11, 0, 21, 22, 22, 11, 12, 12, 13, 11, 12, 12, 11, 12, 12, 0],
    "id": "0",
    "pos_tags": [19, 14, 11, 19, 39, 27, 37, 32, 34, 11, 15, 19, 14, 19, 22, 14, 20, 5, 15, 14, 19, 19, 5, 34, 32, 34, 11, 15, 19, 14, 20, 9, 20, 24, 15, 22, 6],
    "tokens": "[\"Confidence\", \"in\", \"the\", \"pound\", \"is\", \"widely\", \"expected\", \"to\", \"take\", \"another\", \"sharp\", \"dive\", \"if\", \"trade\", \"figur..."
}
```

### Data Fields

The data fields are the same among all splits.

#### conll2000

- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels, with possible values including `''` (0), `#` (1), `$` (2), `(` (3), `)` (4).
- `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4).

### Data Splits

| name      | train | test |
|-----------|------:|-----:|
| conll2000 |  8937 | 2013 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@inproceedings{tksbuchholz2000conll,
   author     = "Tjong Kim Sang, Erik F. and Sabine Buchholz",
   title      = "Introduction to the CoNLL-2000 Shared Task: Chunking",
   editor     = "Claire Cardie and Walter Daelemans and Claire Nedellec and Tjong Kim Sang, Erik",
   booktitle  = "Proceedings of CoNLL-2000 and LLL-2000",
   publisher  = "Lisbon, Portugal",
   pages      = "127--132",
   year       = "2000"
}
```

### Contributions

Thanks to [@vblagoje](https://github.com/vblagoje), [@jplu](https://github.com/jplu) for adding this dataset.
conll2000
[ "language:en", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "paperswithcode_id": "conll-2000-1", "pretty_name": "CoNLL-2000", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "''", "1": "#", "2": "$", "3": "(", "4": ")", "5": ",", "6": ".", "7": ":", "8": "``", "9": "CC", "10": "CD", "11": "DT", "12": "EX", "13": "FW", "14": "IN", "15": "JJ", "16": "JJR", "17": "JJS", "18": "MD", "19": "NN", "20": "NNP", "21": "NNPS", "22": "NNS", "23": "PDT", "24": "POS", "25": "PRP", "26": "PRP$", "27": "RB", "28": "RBR", "29": "RBS", "30": "RP", "31": "SYM", "32": "TO", "33": "UH", "34": "VB", "35": "VBD", "36": "VBG", "37": "VBN", "38": "VBP", "39": "VBZ", "40": "WDT", "41": "WP", "42": "WP$", "43": "WRB"}}}}, {"name": "chunk_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-ADJP", "2": "I-ADJP", "3": "B-ADVP", "4": "I-ADVP", "5": "B-CONJP", "6": "I-CONJP", "7": "B-INTJ", "8": "I-INTJ", "9": "B-LST", "10": "I-LST", "11": "B-NP", "12": "I-NP", "13": "B-PP", "14": "I-PP", "15": "B-PRT", "16": "I-PRT", "17": "B-SBAR", "18": "I-SBAR", "19": "B-UCP", "20": "I-UCP", "21": "B-VP", "22": "I-VP"}}}}], "splits": [{"name": "train", "num_bytes": 5356965, "num_examples": 8937}, {"name": "test", "num_bytes": 1201151, "num_examples": 2013}], "download_size": 3481560, "dataset_size": 6558116}}
2023-04-05T09:02:23+00:00
[]
[ "en" ]
TAGS #language-English #region-us
Dataset Card for "conll2000" ============================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 3.48 MB * Size of the generated dataset: 6.55 MB * Total amount of disk used: 10.03 MB ### Dataset Summary Text chunking consists of dividing a text in syntactically correlated parts of words. For example, the sentence He reckons the current account deficit will narrow to only # 1.8 billion in September . can be divided as follows: [NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP to ] [NP only # 1.8 billion ] [PP in ] [NP September ] . Text chunking is an intermediate step towards full parsing. It was the shared task for CoNLL-2000. Training and test data for this task is available. This data consists of the same partitions of the Wall Street Journal corpus (WSJ) as the widely used data for noun phrase chunking: sections 15-18 as training data (211727 tokens) and section 20 as test data (47377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by Sabine Buchholz from Tilburg University, The Netherlands. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### conll2000 * Size of downloaded dataset files: 3.48 MB * Size of the generated dataset: 6.55 MB * Total amount of disk used: 10.03 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### conll2000 * 'id': a 'string' feature. * 'tokens': a 'list' of 'string' features. * 'pos\_tags': a 'list' of classification labels, with possible values including '''' (0), '#' (1), '$' (2), '(' (3), ')' (4). * 'chunk\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-ADJP' (1), 'I-ADJP' (2), 'B-ADVP' (3), 'I-ADVP' (4). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @vblagoje, @jplu for adding this dataset.
[ "### Dataset Summary\n\n\nText chunking consists of dividing a text in syntactically correlated parts of words. For example, the sentence\nHe reckons the current account deficit will narrow to only # 1.8 billion in September . can be divided as follows:\n[NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP to ] [NP only # 1.8 billion ]\n[PP in ] [NP September ] .\n\n\nText chunking is an intermediate step towards full parsing. It was the shared task for CoNLL-2000. Training and test\ndata for this task is available. This data consists of the same partitions of the Wall Street Journal corpus (WSJ)\nas the widely used data for noun phrase chunking: sections 15-18 as training data (211727 tokens) and section 20 as\ntest data (47377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by\nSabine Buchholz from Tilburg University, The Netherlands.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### conll2000\n\n\n* Size of downloaded dataset files: 3.48 MB\n* Size of the generated dataset: 6.55 MB\n* Total amount of disk used: 10.03 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### conll2000\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'pos\\_tags': a 'list' of classification labels, with possible values including '''' (0), '#' (1), '$' (2), '(' (3), ')' (4).\n* 'chunk\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-ADJP' (1), 'I-ADJP' (2), 'B-ADVP' (3), 'I-ADVP' (4).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @vblagoje, @jplu for adding this dataset." ]
[ "TAGS\n#language-English #region-us \n", "### Dataset Summary\n\n\nText chunking consists of dividing a text in syntactically correlated parts of words. For example, the sentence\nHe reckons the current account deficit will narrow to only # 1.8 billion in September . can be divided as follows:\n[NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP to ] [NP only # 1.8 billion ]\n[PP in ] [NP September ] .\n\n\nText chunking is an intermediate step towards full parsing. It was the shared task for CoNLL-2000. Training and test\ndata for this task is available. This data consists of the same partitions of the Wall Street Journal corpus (WSJ)\nas the widely used data for noun phrase chunking: sections 15-18 as training data (211727 tokens) and section 20 as\ntest data (47377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by\nSabine Buchholz from Tilburg University, The Netherlands.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### conll2000\n\n\n* Size of downloaded dataset files: 3.48 MB\n* Size of the generated dataset: 6.55 MB\n* Total amount of disk used: 10.03 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### conll2000\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'pos\\_tags': a 'list' of classification labels, with possible values including '''' (0), '#' (1), '$' (2), '(' (3), ')' (4).\n* 'chunk\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-ADJP' (1), 'I-ADJP' (2), 'B-ADVP' (3), 'I-ADVP' (4).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @vblagoje, @jplu for adding this dataset." ]
[ 10, 229, 10, 11, 6, 51, 17, 138, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 22 ]
[ "passage: TAGS\n#language-English #region-us \n### Dataset Summary\n\n\nText chunking consists of dividing a text in syntactically correlated parts of words. For example, the sentence\nHe reckons the current account deficit will narrow to only # 1.8 billion in September . can be divided as follows:\n[NP He ] [VP reckons ] [NP the current account deficit ] [VP will narrow ] [PP to ] [NP only # 1.8 billion ]\n[PP in ] [NP September ] .\n\n\nText chunking is an intermediate step towards full parsing. It was the shared task for CoNLL-2000. Training and test\ndata for this task is available. This data consists of the same partitions of the Wall Street Journal corpus (WSJ)\nas the widely used data for noun phrase chunking: sections 15-18 as training data (211727 tokens) and section 20 as\ntest data (47377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by\nSabine Buchholz from Tilburg University, The Netherlands.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### conll2000\n\n\n* Size of downloaded dataset files: 3.48 MB\n* Size of the generated dataset: 6.55 MB\n* Total amount of disk used: 10.03 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### conll2000\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'pos\\_tags': a 'list' of classification labels, with possible values including '''' (0), '#' (1), '$' (2), '(' (3), ')' (4).\n* 'chunk\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-ADJP' (1), 'I-ADJP' (2), 'B-ADVP' (3), 'I-ADVP' (4).### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization" ]
94b6b2ab574876c04cf1ac9097acf4903789315c
# Dataset Card for CoNLL-2002

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [homepage](https://www.clips.uantwerpen.be/conll2002/ner/)
- **Repository:** [github](https://github.com/teropa/nlp/tree/master/resources/corpora/conll2002)
- **Paper:** [paper](https://www.aclweb.org/anthology/W02-2024/)
- **Point of Contact:** [Erik Tjong Kim Sang](erikt@uia.ua.ac.be)

### Dataset Summary

Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:

[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .

The shared task of CoNLL-2002 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task will be offered training and test data for at least two languages. They will use the data for developing a named-entity recognition system that includes a machine learning component. Information sources other than the training data may be used in this shared task. We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).

### Supported Tasks and Leaderboards

Named Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task, and it is unknown how they would have performed on a language other than English.

After 1995, NER systems were developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.

- `named-entity-recognition`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data. A toy sketch of this exact-match scoring is given below.
- `parsing`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A part-of-speech tag is correct only if it is equal to the corresponding tag in the data.
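As a toy illustration of the exact-match scoring mentioned above (this is not the official CoNLL evaluation script), the sketch below treats each entity as a (type, start, end) token span and computes F1 over sets of such spans. The spans are hand-derived from the example sentence in the summary purely for illustration.

```python
def entity_f1(gold, predicted):
    """Exact-match F1 over sets of (entity_type, start, end) token spans."""
    correct = len(gold & predicted)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Spans (end-exclusive token offsets) hand-derived from the summary example:
# [PER Wolff] ... [LOC Argentina] ... [PER Del Bosque] ... [ORG Real Madrid] .
gold = {("PER", 0, 1), ("LOC", 6, 7), ("PER", 10, 12), ("ORG", 20, 22)}
pred = {("PER", 0, 1), ("LOC", 6, 7), ("PER", 10, 11), ("ORG", 20, 22)}  # "Del Bosque" span cut short
print(round(entity_f1(gold, pred), 3))  # 0.75 - a partially overlapping entity earns no credit
```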
### Languages

There are two languages available: Spanish (es) and Dutch (nl).

## Dataset Structure

### Data Instances

The examples look like this:

```
{'id': '0',
 'ner_tags': [5, 6, 0, 0, 0, 0, 3, 0, 0],
 'pos_tags': [4, 28, 13, 59, 28, 21, 29, 22, 20],
 'tokens': ['La', 'Coruña', ',', '23', 'may', '(', 'EFECOM', ')', '.']
}
```

The original data files within the Dutch sub-dataset contain `-DOCSTART-` lines. `-DOCSTART-` is a special line that acts as a boundary between two different documents; these lines are filtered out in this implementation.

### Data Fields

- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
- `pos_tags`: the POS tags of each token

The POS tags correspond to this list for Spanish:

```
'AO', 'AQ', 'CC', 'CS', 'DA', 'DE', 'DD', 'DI', 'DN', 'DP', 'DT', 'Faa', 'Fat', 'Fc', 'Fd', 'Fe', 'Fg', 'Fh', 'Fia', 'Fit', 'Fp', 'Fpa', 'Fpt', 'Fs', 'Ft', 'Fx', 'Fz', 'I', 'NC', 'NP', 'P0', 'PD', 'PI', 'PN', 'PP', 'PR', 'PT', 'PX', 'RG', 'RN', 'SP', 'VAI', 'VAM', 'VAN', 'VAP', 'VAS', 'VMG', 'VMI', 'VMM', 'VMN', 'VMP', 'VMS', 'VSG', 'VSI', 'VSM', 'VSN', 'VSP', 'VSS', 'Y', 'Z'
```

And this list for Dutch:

```
'Adj', 'Adv', 'Art', 'Conj', 'Int', 'Misc', 'N', 'Num', 'Prep', 'Pron', 'Punc', 'V'
```

The NER tags correspond to this list:

```
"O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"
```

The NER tags have the same format as in the chunking task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). It is assumed that named entities are non-recursive and non-overlapping. When a named entity is embedded in another named entity, usually only the top-level entity is marked.
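The integer `ner_tags` in the example instance above index into the NER label list just given. Below is a minimal sketch of that mapping in plain Python; if the dataset is loaded with the `datasets` library, the same mapping is available through the `ClassLabel` feature of the `ner_tags` column.

```python
# Label names as listed in the Data Fields section above.
NER_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

# The Spanish example instance shown under Data Instances.
example = {
    "tokens": ["La", "Coruña", ",", "23", "may", "(", "EFECOM", ")", "."],
    "ner_tags": [5, 6, 0, 0, 0, 0, 3, 0, 0],
}

for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{NER_LABELS[tag_id]}")
# "La Coruña" comes out as B-LOC I-LOC and "EFECOM" as B-ORG;
# every other token is tagged O.
```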
### Data Splits

For both configurations (Spanish and Dutch), there are three splits. The original splits were named `train`, `testa` and `testb`; they correspond to the `train`, `validation` and `test` splits. The splits have the following sizes:

|                       | train | validation | test |
|-----------------------|------:|-----------:|-----:|
| N. Examples (Spanish) |  8324 |       1916 | 1518 |
| N. Examples (Dutch)   | 15807 |       2896 | 5196 |

## Dataset Creation

### Curation Rationale

The dataset was introduced to provide new resources for two languages that were under-served for statistical machine learning at the time, Dutch and Spanish.

[More Information Needed]

### Source Data

The Spanish data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.

The Dutch data consist of four editions of the Belgian newspaper "De Morgen" of 2000 (June 2, July 1, August 1 and September 1).

#### Initial Data Collection and Normalization

The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.

#### Who are the source language producers?

The source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.

### Annotations

#### Annotation process

For the Dutch data, the annotator followed the MITRE and SAIC guidelines for named entity recognition (Chinchor et al., 1999) as closely as possible.

#### Who are the annotators?

The Spanish data annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB).

The Dutch data was annotated as a part of the Atranos project at the University of Antwerp.

### Personal and Sensitive Information

The data is sourced from newspaper text and only contains mentions of public figures or individuals.

## Considerations for Using the Data

### Social Impact of Dataset

Named Entity Recognition systems can be used to efficiently index news text, making it easy to gather all information pertaining to an organization or individual. Making such resources widely available in languages other than English can support better research and user experience for a larger part of the world's population. At the same time, better indexing and discoverability can also enable surveillance by state actors.

### Discussion of Biases

News text reproduces the biases of society, and any system trained on news data should be cognizant of these limitations and the risk for models to learn spurious correlations in this context, for example between a person's gender and their occupation.

### Other Known Limitations

Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.

## Additional Information

### Dataset Curators

The annotation of the Spanish data was funded by the European Commission through the NAMIC project (IST-1999-12392).

### Licensing Information

The licensing status of the data, especially the news source text, is unknown.

### Citation Information

```
@inproceedings{tjong-kim-sang-2002-introduction,
    title = "Introduction to the {C}o{NLL}-2002 Shared Task: Language-Independent Named Entity Recognition",
    author = "Tjong Kim Sang, Erik F.",
    booktitle = "{COLING}-02: The 6th Conference on Natural Language Learning 2002 ({C}o{NLL}-2002)",
    year = "2002",
    url = "https://www.aclweb.org/anthology/W02-2024",
}
```

### Contributions

Thanks to [@lhoestq](https://github.com/lhoestq) for adding this dataset.
conll2002
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:es", "language:nl", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["es", "nl"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition", "part-of-speech"], "paperswithcode_id": "conll-2002", "pretty_name": "CoNLL-2002", "config_names": ["es", "nl"], "dataset_info": [{"config_name": "es", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "AO", "1": "AQ", "2": "CC", "3": "CS", "4": "DA", "5": "DE", "6": "DD", "7": "DI", "8": "DN", "9": "DP", "10": "DT", "11": "Faa", "12": "Fat", "13": "Fc", "14": "Fd", "15": "Fe", "16": "Fg", "17": "Fh", "18": "Fia", "19": "Fit", "20": "Fp", "21": "Fpa", "22": "Fpt", "23": "Fs", "24": "Ft", "25": "Fx", "26": "Fz", "27": "I", "28": "NC", "29": "NP", "30": "P0", "31": "PD", "32": "PI", "33": "PN", "34": "PP", "35": "PR", "36": "PT", "37": "PX", "38": "RG", "39": "RN", "40": "SP", "41": "VAI", "42": "VAM", "43": "VAN", "44": "VAP", "45": "VAS", "46": "VMG", "47": "VMI", "48": "VMM", "49": "VMN", "50": "VMP", "51": "VMS", "52": "VSG", "53": "VSI", "54": "VSM", "55": "VSN", "56": "VSP", "57": "VSS", "58": "Y", "59": "Z"}}}}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "splits": [{"name": "train", "num_bytes": 6672173, "num_examples": 8324}, {"name": "validation", "num_bytes": 1333784, "num_examples": 1916}, {"name": "test", "num_bytes": 1294156, "num_examples": 1518}], "download_size": 4140690, "dataset_size": 9300113}, {"config_name": "nl", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "Adj", "1": "Adv", "2": "Art", "3": "Conj", "4": "Int", "5": "Misc", "6": "N", "7": "Num", "8": "Prep", "9": "Pron", "10": "Punc", "11": "V"}}}}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "splits": [{"name": "train", "num_bytes": 5308959, "num_examples": 15807}, {"name": "validation", "num_bytes": 994298, "num_examples": 2896}, {"name": "test", "num_bytes": 1808862, "num_examples": 5196}], "download_size": 3642241, "dataset_size": 8112119}]}
2024-01-18T09:33:49+00:00
[]
[ "es", "nl" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-crowdsourced #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Spanish #language-Dutch #license-unknown #region-us
3f1cce917ab38486481b062921eb137e7bd3c205
# Dataset Card for "conll2003"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB

### Dataset Summary

The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups.

The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on a separate line and there is an empty line after each sentence. The first item on each line is a word, the second a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2 tagging scheme, whereas the original dataset uses IOB1.

For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### conll2003

- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB

An example of 'train' looks as follows.
```
{
  "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
  "id": "0",
  "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
  "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
  "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```

The original data files have `-DOCSTART-` lines used to separate documents, but these lines are removed here. Indeed `-DOCSTART-` is a special line that acts as a boundary between two different documents, and it is filtered out in this implementation.

### Data Fields

The data fields are the same among all splits.

#### conll2003

- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11,
 'DT': 12, 'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21,
 'NNP': 22, 'NNPS': 23, 'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30,
 'RBR': 31, 'RBS': 32, 'RP': 33, 'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40,
 'VBP': 41, 'VBZ': 42, 'WDT': 43, 'WP': 44, 'WP$': 45, 'WRB': 46}
```

- `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7,
 'I-INTJ': 8, 'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15,
 'I-PRT': 16, 'B-SBAR': 17, 'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
```

- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
```

### Data Splits

| name      | train | validation | test |
| --------- | ----: | ---------: | ---: |
| conll2003 | 14041 |       3250 | 3453 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:

> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.

The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):

> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.

### Citation Information

```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
    title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
    author = "Tjong Kim Sang, Erik F. and De Meulder, Fien",
    booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
    year = "2003",
    url = "https://www.aclweb.org/anthology/W03-0419",
    pages = "142--147",
}
```

### Contributions

Thanks to [@jplu](https://github.com/jplu), [@vblagoje](https://github.com/vblagoje), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
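Because the summary above notes that the tags follow the IOB2 scheme (every entity starts with a `B-` tag), grouping tokens into typed entity spans is mechanical. The helper below is only an illustrative sketch, not part of the original shared task tooling, applied to the example sentence shown earlier in this card:

```python
def iob2_to_spans(tokens, tags):
    """Group IOB2-tagged tokens into (entity_type, text) spans."""
    spans = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current_tokens.append(token)
        else:  # "O" or an inconsistent I- tag closes the current span
            if current_tokens:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_tokens:
        spans.append((current_type, " ".join(current_tokens)))
    return spans


tokens = ["The", "European", "Commission", "said", "on", "Thursday"]
tags = ["O", "B-ORG", "I-ORG", "O", "O", "O"]
print(iob2_to_spans(tokens, tags))  # [('ORG', 'European Commission')]
```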
conll2003
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-reuters-corpus", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-reuters-corpus"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition", "part-of-speech"], "paperswithcode_id": "conll-2003", "pretty_name": "CoNLL-2003", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "\"", "1": "''", "2": "#", "3": "$", "4": "(", "5": ")", "6": ",", "7": ".", "8": ":", "9": "``", "10": "CC", "11": "CD", "12": "DT", "13": "EX", "14": "FW", "15": "IN", "16": "JJ", "17": "JJR", "18": "JJS", "19": "LS", "20": "MD", "21": "NN", "22": "NNP", "23": "NNPS", "24": "NNS", "25": "NN|SYM", "26": "PDT", "27": "POS", "28": "PRP", "29": "PRP$", "30": "RB", "31": "RBR", "32": "RBS", "33": "RP", "34": "SYM", "35": "TO", "36": "UH", "37": "VB", "38": "VBD", "39": "VBG", "40": "VBN", "41": "VBP", "42": "VBZ", "43": "WDT", "44": "WP", "45": "WP$", "46": "WRB"}}}}, {"name": "chunk_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-ADJP", "2": "I-ADJP", "3": "B-ADVP", "4": "I-ADVP", "5": "B-CONJP", "6": "I-CONJP", "7": "B-INTJ", "8": "I-INTJ", "9": "B-LST", "10": "I-LST", "11": "B-NP", "12": "I-NP", "13": "B-PP", "14": "I-PP", "15": "B-PRT", "16": "I-PRT", "17": "B-SBAR", "18": "I-SBAR", "19": "B-UCP", "20": "I-UCP", "21": "B-VP", "22": "I-VP"}}}}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "config_name": "conll2003", "splits": [{"name": "train", "num_bytes": 6931345, "num_examples": 14041}, {"name": "validation", "num_bytes": 1739223, "num_examples": 3250}, {"name": "test", "num_bytes": 1582054, "num_examples": 3453}], "download_size": 982975, "dataset_size": 10252622}, "train-eval-index": [{"config": "conll2003", "task": "token-classification", "task_id": "entity_extraction", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"tokens": "tokens", "ner_tags": "tags"}, "metrics": [{"type": "seqeval", "name": "seqeval"}]}]}
2024-01-18T09:34:17+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-reuters-corpus #language-English #license-other #region-us
f9205bbe2a0ac3ec159599e7a1e1694f4080a39b
# Dataset Card for "conllpp"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
- **Repository:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/D19-1519)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

CoNLLpp is a corrected version of the CoNLL2003 NER dataset where labels of 5.38% of the sentences in the test set have been manually corrected. The training set and development set from CoNLL2003 are included for completeness. One correction on the test set, for example, is:

```
{
  "tokens": ["SOCCER", "-", "JAPAN", "GET", "LUCKY", "WIN", ",", "CHINA", "IN", "SURPRISE", "DEFEAT", "."],
  "original_ner_tags_in_conll2003": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-PER", "O", "O", "O", "O"],
  "corrected_ner_tags_in_conllpp": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-LOC", "O", "O", "O", "O"],
}
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

#### conllpp

- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB

An example of 'train' looks as follows.

```
This example was too long and was cropped:

{
  "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
  "id": "0",
  "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
  "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
  "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```

### Data Fields

The data fields are the same among all splits.

#### conllpp

- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(` (4).
- `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4).
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4).
### Data Splits

| name    | train | validation | test |
|---------|------:|-----------:|-----:|
| conllpp | 14041 |       3250 | 3453 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{wang2019crossweigh,
  title={CrossWeigh: Training Named Entity Tagger from Imperfect Annotations},
  author={Wang, Zihan and Shang, Jingbo and Liu, Liyuan and Lu, Lihao and Liu, Jiacheng and Han, Jiawei},
  booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
  pages={5157--5166},
  year={2019}
}
```

### Contributions

Thanks to [@ZihanWangKi](https://github.com/ZihanWangKi) for adding this dataset.
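As a hedged illustration of the "5.38% of test sentences corrected" figure above (not part of the original card), one could compare this dataset's test split against the original `conll2003` test split. The sketch assumes both test splits are available on the Hub and are aligned in the same sentence order, which neither card guarantees.

```python
# Illustrative sketch: count test sentences whose NER labels differ between
# conllpp and conll2003. Assumes the two test splits share the same ordering.
from datasets import load_dataset

conllpp_test = load_dataset("conllpp", split="test")
conll2003_test = load_dataset("conll2003", split="test")

differing = sum(
    a["ner_tags"] != b["ner_tags"]
    for a, b in zip(conllpp_test, conll2003_test)
)
print(f"{differing}/{len(conllpp_test)} test sentences differ "
      f"({100 * differing / len(conllpp_test):.2f}%)")
```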
conllpp
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|conll2003", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|conll2003"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "conll", "pretty_name": "CoNLL++", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "\"", "1": "''", "2": "#", "3": "$", "4": "(", "5": ")", "6": ",", "7": ".", "8": ":", "9": "``", "10": "CC", "11": "CD", "12": "DT", "13": "EX", "14": "FW", "15": "IN", "16": "JJ", "17": "JJR", "18": "JJS", "19": "LS", "20": "MD", "21": "NN", "22": "NNP", "23": "NNPS", "24": "NNS", "25": "NN|SYM", "26": "PDT", "27": "POS", "28": "PRP", "29": "PRP$", "30": "RB", "31": "RBR", "32": "RBS", "33": "RP", "34": "SYM", "35": "TO", "36": "UH", "37": "VB", "38": "VBD", "39": "VBG", "40": "VBN", "41": "VBP", "42": "VBZ", "43": "WDT", "44": "WP", "45": "WP$", "46": "WRB"}}}}, {"name": "chunk_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-ADJP", "2": "I-ADJP", "3": "B-ADVP", "4": "I-ADVP", "5": "B-CONJP", "6": "I-CONJP", "7": "B-INTJ", "8": "I-INTJ", "9": "B-LST", "10": "I-LST", "11": "B-NP", "12": "I-NP", "13": "B-PP", "14": "I-PP", "15": "B-PRT", "16": "I-PRT", "17": "B-SBAR", "18": "I-SBAR", "19": "B-UCP", "20": "I-UCP", "21": "B-VP", "22": "I-VP"}}}}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "config_name": "conllpp", "splits": [{"name": "train", "num_bytes": 6931393, "num_examples": 14041}, {"name": "validation", "num_bytes": 1739247, "num_examples": 3250}, {"name": "test", "num_bytes": 1582078, "num_examples": 3453}], "download_size": 4859600, "dataset_size": 10252718}, "train-eval-index": [{"config": "conllpp", "task": "token-classification", "task_id": "entity_extraction", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"tokens": "tokens", "ner_tags": "tags"}, "metrics": [{"type": "seqeval", "name": "seqeval"}]}]}
2024-01-18T09:35:35+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|conll2003 #language-English #license-unknown #region-us
Dataset Card for "conllpp" ========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Github * Repository: Github * Paper: Aclweb * Leaderboard: * Point of Contact: ### Dataset Summary CoNLLpp is a corrected version of the CoNLL2003 NER dataset where labels of 5.38% of the sentences in the test set have been manually corrected. The training set and development set from CoNLL2003 is included for completeness. One correction on the test set for example, is: ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### conllpp * Size of downloaded dataset files: 4.85 MB * Size of the generated dataset: 10.26 MB * Total amount of disk used: 15.11 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### conllpp * 'id': a 'string' feature. * 'tokens': a 'list' of 'string' features. * 'pos\_tags': a 'list' of classification labels, with possible values including '"' (0), '''' (1), '#' (2), '$' (3), '(' (4). * 'chunk\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-ADJP' (1), 'I-ADJP' (2), 'B-ADVP' (3), 'I-ADVP' (4). * 'ner\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-PER' (1), 'I-PER' (2), 'B-ORG' (3), 'I-ORG' (4). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @ZihanWangKi for adding this dataset.
[ "### Dataset Summary\n\n\nCoNLLpp is a corrected version of the CoNLL2003 NER dataset where labels of 5.38% of the sentences in the test set\nhave been manually corrected. The training set and development set from CoNLL2003 is included for completeness. One\ncorrection on the test set for example, is:", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### conllpp\n\n\n* Size of downloaded dataset files: 4.85 MB\n* Size of the generated dataset: 10.26 MB\n* Total amount of disk used: 15.11 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### conllpp\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'pos\\_tags': a 'list' of classification labels, with possible values including '\"' (0), '''' (1), '#' (2), '$' (3), '(' (4).\n* 'chunk\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-ADJP' (1), 'I-ADJP' (2), 'B-ADVP' (3), 'I-ADVP' (4).\n* 'ner\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-PER' (1), 'I-PER' (2), 'B-ORG' (3), 'I-ORG' (4).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ZihanWangKi for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|conll2003 #language-English #license-unknown #region-us \n", "### Dataset Summary\n\n\nCoNLLpp is a corrected version of the CoNLL2003 NER dataset where labels of 5.38% of the sentences in the test set\nhave been manually corrected. The training set and development set from CoNLL2003 is included for completeness. One\ncorrection on the test set for example, is:", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### conllpp\n\n\n* Size of downloaded dataset files: 4.85 MB\n* Size of the generated dataset: 10.26 MB\n* Total amount of disk used: 15.11 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### conllpp\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'pos\\_tags': a 'list' of classification labels, with possible values including '\"' (0), '''' (1), '#' (2), '$' (3), '(' (4).\n* 'chunk\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-ADJP' (1), 'I-ADJP' (2), 'B-ADVP' (3), 'I-ADVP' (4).\n* 'ner\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-PER' (1), 'I-PER' (2), 'B-ORG' (3), 'I-ORG' (4).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ZihanWangKi for adding this dataset." ]
[ 99, 75, 10, 11, 6, 51, 17, 193, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|conll2003 #language-English #license-unknown #region-us \n### Dataset Summary\n\n\nCoNLLpp is a corrected version of the CoNLL2003 NER dataset where labels of 5.38% of the sentences in the test set\nhave been manually corrected. The training set and development set from CoNLL2003 is included for completeness. One\ncorrection on the test set for example, is:### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### conllpp\n\n\n* Size of downloaded dataset files: 4.85 MB\n* Size of the generated dataset: 10.26 MB\n* Total amount of disk used: 15.11 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### conllpp\n\n\n* 'id': a 'string' feature.\n* 'tokens': a 'list' of 'string' features.\n* 'pos\\_tags': a 'list' of classification labels, with possible values including '\"' (0), '''' (1), '#' (2), '$' (3), '(' (4).\n* 'chunk\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-ADJP' (1), 'I-ADJP' (2), 'B-ADVP' (3), 'I-ADVP' (4).\n* 'ner\\_tags': a 'list' of classification labels, with possible values including 'O' (0), 'B-PER' (1), 'I-PER' (2), 'B-ORG' (3), 'I-ORG' (4).### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?" ]
a04aa023327e4a18a7204392b3260ee001e3cbd9
# Dataset Card for Consumer Finance Complaints

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.consumerfinance.gov/data-research/consumer-complaints/
- **Repository:** https://github.com/cfpb/consumerfinance.gov
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The Consumer Complaint Database is a collection of complaints about consumer financial products and services that we sent to companies for response. Complaints are published after the company responds, confirming a commercial relationship with the consumer, or after 15 days, whichever comes first. Complaints referred to other regulators, such as complaints about depository institutions with less than $10 billion in assets, are not published in the Consumer Complaint Database. The database generally updates daily.

Complaints can give us insights into problems people are experiencing in the marketplace and help us regulate consumer financial products and services under existing federal consumer financial laws, enforce those laws judiciously, and educate and empower consumers to make informed financial decisions. We also report on complaint trends annually in Consumer Response’s Annual Report to Congress.

### Supported Tasks and Leaderboards

Text Classification Tasks

| Task | Label Name | Description | SOTA |
| ----------- | ----------- | ----------- | ----------- |
| Text Classification | Product | Predict the related product of a complaint | N/A |
| Text Classification | Sub-Product | Predict the related sub-product of a complaint | N/A |
| Text Classification | Tags | Predict whether a complaint was made by or on behalf of an older American or a servicemember | N/A |

### Languages

English

## Dataset Structure

### Data Instances

This dataset is a point-in-time extract of the database; the database increases in size every day.

An example of 'train' looks as follows.
```
{
  "Complaint ID": "4511031",
  "Product": "Credit reporting, credit repair services, or other personal consumer reports",
  "Sub Issue": "Credit inquiries on your report that you don't recognize",
  "Consumer Disputed": "N/A",
  "Sub Product": "Credit reporting",
  "State": "TX",
  "Tags": "Older American, Servicemember",
  "Company Public Response": "",
  "Zip Code": "75202",
  "Issue": "Improper use of your report",
  "Submitted via": "Web",
  "Company Response To Consumer": "Closed with explanation",
  "Complaint Text": "I am XXXX XXXX and I am submitting this complaint myself and there is no third party involved. Despite the multiple previous written requests, the unverified inquiries listed below still remain on my credit report in violation of Federal Law. The Equifax Credit Bureau failed to comply with Fair Credit Reporting Act, XXXX XXXX sections XXXX within the time set forth by law and continued reporting of erroneous information which now, given all my attempts to address it directly with the creditor, as willful negligence and non-compliance with federal statutes. PLEASE REMOVE THE FOLLOWING INQUIRIES COMPLETELY FROM MY CREDIT REPORT : XXXX CARD-Date of inquiry XX/XX/XXXX XXXX CARD-Date of inquiry XX/XX/XXXX",
  "Date Received": "07-02-2021",
  "Company": "EQUIFAX, INC.",
  "Consumer Consent Provided": "Consent not provided",
  "Timely Response": "Yes",
  "Date Sent To Company": "2021-07-02"
}
```

### Data Fields

| Field name | Description | Data Type | Notes |
| ----------- | ----------- | ----------- | ----------- |
| Date received | The date the CFPB received the complaint | date & time | |
| Product | The type of product the consumer identified in the complaint | plain text | This field is a categorical variable. |
| Sub-product | The type of sub-product the consumer identified in the complaint | plain text | This field is a categorical variable. Not all Products have Sub-products. |
| Issue | The issue the consumer identified in the complaint | plain text | This field is a categorical variable. Possible values are dependent on Product. |
| Sub-issue | The sub-issue the consumer identified in the complaint | plain text | This field is a categorical variable. Possible values are dependent on product and issue. Not all Issues have corresponding Sub-issues. |
| Consumer complaint narrative | Consumer complaint narrative is the consumer-submitted description of "what happened" from the complaint. Consumers must opt-in to share their narrative. We will not publish the narrative unless the consumer consents, and consumers can opt-out at any time. The CFPB takes reasonable steps to scrub personal information from each complaint that could be used to identify the consumer. | plain text | Consumers' descriptions of what happened are included if consumers consent to publishing the description and after we take steps to remove personal information. |
| Company public response | The company's optional, public-facing response to a consumer's complaint. Companies can choose to select a response from a pre-set list of options that will be posted on the public database. For example, "Company believes complaint is the result of an isolated error." | plain text | Companies' public-facing responses to complaints are included if companies choose to publish one. Companies may select a public response from a set list of options as soon as they respond to the complaint, but no later than 180 days after the complaint was sent to the company for response. |
| Company | The complaint is about this company | plain text | This field is a categorical variable. |
| State | The state of the mailing address provided by the consumer | plain text | This field is a categorical variable. |
| ZIP code | The mailing ZIP code provided by the consumer | plain text | Mailing ZIP code provided by the consumer. This field may: i) include the first five digits of a ZIP code; ii) include the first three digits of a ZIP code (if the consumer consented to publication of their complaint narrative); or iii) be blank (if ZIP codes have been submitted with non-numeric values, if there are less than 20,000 people in a given ZIP code, or if the complaint has an address outside of the United States). |
| Tags | Data that supports easier searching and sorting of complaints submitted by or on behalf of consumers. | plain text | For example, complaints where the submitter reports the age of the consumer as 62 years or older are tagged, ‘Older American.’ Complaints submitted by or on behalf of a servicemember or the spouse or dependent of a servicemember are tagged, ‘Servicemember.’ Servicemember includes anyone who is active duty, National Guard, or Reservist, as well as anyone who previously served and is a Veteran or retiree. |
| Consumer consent provided? | Identifies whether the consumer opted in to publish their complaint narrative. We do not publish the narrative unless the consumer consents and consumers can opt-out at any time. | plain text | This field shows whether a consumer provided consent to publish their complaint narrative. |
| Submitted via | How the complaint was submitted to the CFPB | plain text | This field is a categorical variable. |
| Date sent to company | The date the CFPB sent the complaint to the company | date & time | |
| Company response to consumer | This is how the company responded. For example, "Closed with explanation." | plain text | This field is a categorical variable. |
| Timely response? | Whether the company gave a timely response | plain text | yes/no |
| Consumer disputed? | Whether the consumer disputed the company’s response | plain text | YES/ NO/ N/A: The Bureau discontinued the consumer dispute option on April 24, 2017. |
| Complaint ID | The unique identification number for a complaint | number | |

### Data Splits

This dataset only contains a TRAIN set - this can be further split into TRAIN, TEST and VALIDATE subsets with the `datasets` library (see the usage sketch at the end of this card).

## Dataset Creation

### Curation Rationale

Open sourcing customer complaints

### Source Data

https://cfpb.github.io/api/ccdb/

#### Initial Data Collection and Normalization

This database is maintained by the Consumer Financial Protection Bureau

#### Who are the source language producers?

English

### Annotations

#### Annotation process

User submitted to the CFPB

#### Who are the annotators?

N/A

### Personal and Sensitive Information

All PII data has been anonymised

## Considerations for Using the Data

### Social Impact of Dataset

N/A

### Discussion of Biases

This database is not a statistical sample of consumers’ experiences in the marketplace. Complaints are not necessarily representative of all consumers’ experiences, and complaints do not constitute “information” for purposes of the Information Quality Act.

Complaint volume should be considered in the context of company size and/or market share. For example, companies with more customers may have more complaints than companies with fewer customers.
We encourage you to pair complaint data with public and private data sets for additional context.

The Bureau publishes the consumer’s narrative description of his or her experience if the consumer opts to share it publicly and after the Bureau takes steps to remove personal information. We don’t verify all the allegations in complaint narratives. Unproven allegations in consumer narratives should be regarded as opinion, not fact. We do not adopt the views expressed and make no representation that consumers’ allegations are accurate, clear, complete, or unbiased in substance or presentation. Users should consider what conclusions may be fairly drawn from complaints alone.

### Other Known Limitations

N/A

## Additional Information

### Dataset Curators

https://cfpb.github.io/api/ccdb/

### Licensing Information

Creative Commons Zero v1.0 Universal

### Citation Information

N/A

### Contributions

Thanks to [@kayvane1](https://github.com/kayvane1) for adding this dataset and to the [Consumer Financial Protection Bureau](https://cfpb.github.io/) for publishing it.
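For reference (not part of the original card): as noted in the Data Splits section, the dataset ships with a single `train` split. A minimal sketch of deriving train/validation/test subsets with the `datasets` library follows; the repository id `consumer-finance-complaints` is assumed from this card, and the split proportions are arbitrary example values.

```python
# Illustrative only: carve train/validation/test subsets out of the single train split.
from datasets import load_dataset

ds = load_dataset("consumer-finance-complaints", split="train")

# Hold out 10% for test, then 10% of the remainder for validation.
train_test = ds.train_test_split(test_size=0.1, seed=42)
train_valid = train_test["train"].train_test_split(test_size=0.1, seed=42)

splits = {
    "train": train_valid["train"],
    "validation": train_valid["test"],
    "test": train_test["test"],
}
print({name: len(split) for name, split in splits.items()})
```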
consumer-finance-complaints
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc0-1.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["topic-classification"], "pretty_name": "consumer-finance-complaints", "dataset_info": {"features": [{"name": "Date Received", "dtype": "timestamp[s]"}, {"name": "Product", "dtype": {"class_label": {"names": {"0": "Credit reporting, credit repair services, or other personal consumer reports", "1": "Debt collection", "2": "Mortgage", "3": "Credit card or prepaid card", "4": "Checking or savings account", "5": "Credit reporting", "6": "Student loan", "7": "Money transfer, virtual currency, or money service", "8": "Credit card", "9": "Vehicle loan or lease", "10": "Bank account or service", "11": "Payday loan, title loan, or personal loan", "12": "Consumer Loan", "13": "Payday loan", "14": "Money transfers", "15": "Prepaid card", "16": "Other financial service", "17": "Virtual currency"}}}}, {"name": "Sub Product", "dtype": {"class_label": {"names": {"0": "Credit reporting", "1": "General-purpose credit card or charge card", "2": "Checking account", "3": "Other debt", "4": "Second mortgage", "5": "Conventional home mortgage", "6": "I do not know", "7": "Credit card debt", "8": "Medical debt", "9": "Federal student loan servicing", "10": "FHA mortgage", "11": "Conventional fixed mortgage", "12": "Loan", "13": "Other (i.e. phone, health club, etc.)", "14": "Store credit card", "15": "Installment loan", "16": "Credit card", "17": "Medical", "18": "Mobile or digital wallet", "19": "Private student loan", "20": "Non-federal student loan", "21": "Domestic (US) money transfer", "22": "VA mortgage", "23": "Vehicle loan", "24": "Auto debt", "25": "Payday loan", "26": "Conventional adjustable mortgage (ARM)", "27": "Other personal consumer report", "28": "Payday loan debt", "29": "Savings account", "30": "Virtual currency", "31": "Other bank product/service", "32": "Other type of mortgage", "33": "Other banking product or service", "34": "Other mortgage", "35": "International money transfer", "36": "Lease", "37": "General-purpose prepaid card", "38": "Home equity loan or line of credit (HELOC)", "39": "Government benefit card", "40": "Mortgage debt", "41": "Personal line of credit", "42": "Home equity loan or line of credit", "43": "Federal student loan debt", "44": "Private student loan debt", "45": "Credit repair services", "46": "Title loan", "47": "Auto", "48": "Vehicle lease", "49": "Mortgage", "50": "Reverse mortgage", "51": "General purpose card", "52": "CD (Certificate of Deposit)", "53": "Federal student loan", "54": "Payroll card", "55": "Debt settlement", "56": "Check cashing service", "57": "Traveler's check or cashier's check", "58": "Gift card", "59": "(CD) Certificate of deposit", "60": "Money order", "61": "Foreign currency exchange", "62": "Refund anticipation check", "63": "Gift or merchant card", "64": "Cashing a check without an account", "65": "ID prepaid card", "66": "Mobile wallet", "67": "Government benefit payment card", "68": "Pawn loan", "69": "Other special purpose card", "70": "Check cashing", "71": "Credit repair", "72": "Traveler\u2019s/Cashier\u2019s checks", "73": "Transit card", "74": "Student prepaid card", "75": "Electronic Benefit Transfer / EBT card", "76": ""}}}}, {"name": "Issue", "dtype": "string"}, {"name": "Sub Issue", "dtype": "string"}, {"name": "Complaint Text", "dtype": 
"string"}, {"name": "Company Public Response", "dtype": "string"}, {"name": "Company", "dtype": "string"}, {"name": "State", "dtype": "string"}, {"name": "Zip Code", "dtype": "string"}, {"name": "Tags", "dtype": {"class_label": {"names": {"0": "Servicemember", "1": "Older American", "2": "Older American, Servicemember", "3": ""}}}}, {"name": "Consumer Consent Provided", "dtype": "string"}, {"name": "Submitted via", "dtype": "string"}, {"name": "Date Sent To Company", "dtype": "string"}, {"name": "Company Response To Consumer", "dtype": "string"}, {"name": "Timely Response", "dtype": "string"}, {"name": "Consumer Disputed", "dtype": "string"}, {"name": "Complaint ID", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1605177353, "num_examples": 2455765}], "download_size": 404187716, "dataset_size": 1605177353}}
2024-01-18T09:36:11+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-topic-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc0-1.0 #region-us
Dataset Card for Consumer Finance Complaints ============================================ Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary This database is a collection of complaints about consumer financial products and services that we sent to companies for response. The Consumer Complaint Database is a collection of complaints about consumer financial products and services that we sent to companies for response. Complaints are published after the company responds, confirming a commercial relationship with the consumer, or after 15 days, whichever comes first. Complaints referred to other regulators, such as complaints about depository institutions with less than $10 billion in assets, are not published in the Consumer Complaint Database. The database generally updates daily. Complaints can give us insights into problems people are experiencing in the marketplace and help us regulate consumer financial products and services under existing federal consumer financial laws, enforce those laws judiciously, and educate and empower consumers to make informed financial decisions. We also report on complaint trends annually in Consumer Response’s Annual Report to Congress. ### Supported Tasks and Leaderboards Text Classification Tasks ### Languages English Dataset Structure ----------------- ### Data Instances This dataset is a point in time extract of the database, the database increases in size every day An example of 'train' looks as follows. ### Data Fields ### Data Splits This dataset only contains a TRAIN set - this can be further split into TRAIN, TEST and VALIDATE subsets with the datasets library Dataset Creation ---------------- ### Curation Rationale Open sourcing customer complaints ### Source Data URL #### Initial Data Collection and Normalization This database is maintained by the Consumer Financial Protection Bureau #### Who are the source language producers? English ### Annotations #### Annotation process User submitted to the CFPB #### Who are the annotators? N/A ### Personal and Sensitive Information All PII data has been anonymised Considerations for Using the Data --------------------------------- ### Social Impact of Dataset N/A ### Discussion of Biases This database is not a statistical sample of consumers’ experiences in the marketplace. Complaints are not necessarily representative of all consumers’ experiences and complaints do not constitute “information” for purposes of the Information Quality Act . Complaint volume should be considered in the context of company size and/or market share. For example, companies with more customers may have more complaints than companies with fewer customers. We encourage you to pair complaint data with public and private data sets for additional context. 
The Bureau publishes the consumer’s narrative description of his or her experience if the consumer opts to share it publicly and after the Bureau takes steps to remove personal information. We don’t verify all the allegations in complaint narratives. Unproven allegations in consumer narratives should be regarded as opinion, not fact. We do not adopt the views expressed and make no representation that consumers’ allegations are accurate, clear, complete, or unbiased in substance or presentation. Users should consider what conclusions may be fairly drawn from complaints alone. ### Other Known Limitations N/A Additional Information ---------------------- ### Dataset Curators URL ### Licensing Information Creative Commons Zero v1.0 Universal N/A ### Contributions Thanks to @kayvane1 for adding this dataset and to the Consumer Financial Protection Bureau for publishing it.
[ "### Dataset Summary\n\n\nThis database is a collection of complaints about consumer financial products and services that we sent to companies for response.\n\n\nThe Consumer Complaint Database is a collection of complaints about consumer financial products and services that we sent to companies for response. Complaints are published after the company responds, confirming a commercial relationship with the consumer, or after 15 days, whichever comes first. Complaints referred to other regulators, such as complaints about depository institutions with less than $10 billion in assets, are not published in the Consumer Complaint Database. The database generally updates daily.\n\n\nComplaints can give us insights into problems people are experiencing in the marketplace and help us regulate consumer financial products and services under existing federal consumer financial laws, enforce those laws judiciously, and educate and empower consumers to make informed financial decisions. We also report on complaint trends annually in Consumer Response’s Annual Report to Congress.", "### Supported Tasks and Leaderboards\n\n\nText Classification Tasks", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThis dataset is a point in time extract of the database, the database increases in size every day\n\n\nAn example of 'train' looks as follows.", "### Data Fields", "### Data Splits\n\n\nThis dataset only contains a TRAIN set - this can be further split into TRAIN, TEST and VALIDATE subsets with the datasets library\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nOpen sourcing customer complaints", "### Source Data\n\n\nURL", "#### Initial Data Collection and Normalization\n\n\nThis database is maintained by the Consumer Financial Protection Bureau", "#### Who are the source language producers?\n\n\nEnglish", "### Annotations", "#### Annotation process\n\n\nUser submitted to the CFPB", "#### Who are the annotators?\n\n\nN/A", "### Personal and Sensitive Information\n\n\nAll PII data has been anonymised\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nN/A", "### Discussion of Biases\n\n\nThis database is not a statistical sample of consumers’ experiences in the marketplace. Complaints are not necessarily representative of all consumers’ experiences and complaints do not constitute “information” for purposes of the Information Quality Act .\n\n\nComplaint volume should be considered in the context of company size and/or market share. For example, companies with more customers may have more complaints than companies with fewer customers. We encourage you to pair complaint data with public and private data sets for additional context.\n\n\nThe Bureau publishes the consumer’s narrative description of his or her experience if the consumer opts to share it publicly and after the Bureau takes steps to remove personal information. We don’t verify all the allegations in complaint narratives. Unproven allegations in consumer narratives should be regarded as opinion, not fact. We do not adopt the views expressed and make no representation that consumers’ allegations are accurate, clear, complete, or unbiased in substance or presentation. 
Users should consider what conclusions may be fairly drawn from complaints alone.", "### Other Known Limitations\n\n\nN/A\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL", "### Licensing Information\n\n\nCreative Commons Zero v1.0 Universal\n\n\nN/A", "### Contributions\n\n\nThanks to @kayvane1 for adding this dataset and to the Consumer Financial Protection Bureau for publishing it." ]
[ "TAGS\n#task_categories-text-classification #task_ids-topic-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc0-1.0 #region-us \n", "### Dataset Summary\n\n\nThis database is a collection of complaints about consumer financial products and services that we sent to companies for response.\n\n\nThe Consumer Complaint Database is a collection of complaints about consumer financial products and services that we sent to companies for response. Complaints are published after the company responds, confirming a commercial relationship with the consumer, or after 15 days, whichever comes first. Complaints referred to other regulators, such as complaints about depository institutions with less than $10 billion in assets, are not published in the Consumer Complaint Database. The database generally updates daily.\n\n\nComplaints can give us insights into problems people are experiencing in the marketplace and help us regulate consumer financial products and services under existing federal consumer financial laws, enforce those laws judiciously, and educate and empower consumers to make informed financial decisions. We also report on complaint trends annually in Consumer Response’s Annual Report to Congress.", "### Supported Tasks and Leaderboards\n\n\nText Classification Tasks", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThis dataset is a point in time extract of the database, the database increases in size every day\n\n\nAn example of 'train' looks as follows.", "### Data Fields", "### Data Splits\n\n\nThis dataset only contains a TRAIN set - this can be further split into TRAIN, TEST and VALIDATE subsets with the datasets library\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nOpen sourcing customer complaints", "### Source Data\n\n\nURL", "#### Initial Data Collection and Normalization\n\n\nThis database is maintained by the Consumer Financial Protection Bureau", "#### Who are the source language producers?\n\n\nEnglish", "### Annotations", "#### Annotation process\n\n\nUser submitted to the CFPB", "#### Who are the annotators?\n\n\nN/A", "### Personal and Sensitive Information\n\n\nAll PII data has been anonymised\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nN/A", "### Discussion of Biases\n\n\nThis database is not a statistical sample of consumers’ experiences in the marketplace. Complaints are not necessarily representative of all consumers’ experiences and complaints do not constitute “information” for purposes of the Information Quality Act .\n\n\nComplaint volume should be considered in the context of company size and/or market share. For example, companies with more customers may have more complaints than companies with fewer customers. We encourage you to pair complaint data with public and private data sets for additional context.\n\n\nThe Bureau publishes the consumer’s narrative description of his or her experience if the consumer opts to share it publicly and after the Bureau takes steps to remove personal information. We don’t verify all the allegations in complaint narratives. Unproven allegations in consumer narratives should be regarded as opinion, not fact. 
We do not adopt the views expressed and make no representation that consumers’ allegations are accurate, clear, complete, or unbiased in substance or presentation. Users should consider what conclusions may be fairly drawn from complaints alone.", "### Other Known Limitations\n\n\nN/A\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nURL", "### Licensing Information\n\n\nCreative Commons Zero v1.0 Universal\n\n\nN/A", "### Contributions\n\n\nThanks to @kayvane1 for adding this dataset and to the Consumer Financial Protection Bureau for publishing it." ]
[ 91, 206, 15, 12, 39, 5, 47, 13, 5, 22, 11, 5, 11, 12, 26, 10, 240, 17, 7, 15, 29 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-topic-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc0-1.0 #region-us \n### Dataset Summary\n\n\nThis database is a collection of complaints about consumer financial products and services that we sent to companies for response.\n\n\nThe Consumer Complaint Database is a collection of complaints about consumer financial products and services that we sent to companies for response. Complaints are published after the company responds, confirming a commercial relationship with the consumer, or after 15 days, whichever comes first. Complaints referred to other regulators, such as complaints about depository institutions with less than $10 billion in assets, are not published in the Consumer Complaint Database. The database generally updates daily.\n\n\nComplaints can give us insights into problems people are experiencing in the marketplace and help us regulate consumer financial products and services under existing federal consumer financial laws, enforce those laws judiciously, and educate and empower consumers to make informed financial decisions. We also report on complaint trends annually in Consumer Response’s Annual Report to Congress.### Supported Tasks and Leaderboards\n\n\nText Classification Tasks### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThis dataset is a point in time extract of the database, the database increases in size every day\n\n\nAn example of 'train' looks as follows.### Data Fields### Data Splits\n\n\nThis dataset only contains a TRAIN set - this can be further split into TRAIN, TEST and VALIDATE subsets with the datasets library\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nOpen sourcing customer complaints### Source Data\n\n\nURL#### Initial Data Collection and Normalization\n\n\nThis database is maintained by the Consumer Financial Protection Bureau#### Who are the source language producers?\n\n\nEnglish### Annotations#### Annotation process\n\n\nUser submitted to the CFPB#### Who are the annotators?\n\n\nN/A" ]
494ada3828cf0994576516ef988bb728515baa3b
# Dataset Card for ConvAi ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]() - **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]() - **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]() - **Leaderboard:** [If the dataset supports an active leaderboard, add link here]() - **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]() ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
conv_ai
[ "task_categories:conversational", "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "evaluating-dialogue-systems", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["conversational", "text-classification"], "task_ids": ["text-scoring"], "pretty_name": "ConvAi", "tags": ["evaluating-dialogue-systems"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "dialogId", "dtype": "int32"}, {"name": "context", "dtype": "string"}, {"name": "users", "list": [{"name": "userType", "dtype": "string"}, {"name": "id", "dtype": "string"}]}, {"name": "evaluation", "list": [{"name": "breadth", "dtype": "int32"}, {"name": "userId", "dtype": "string"}, {"name": "quality", "dtype": "int32"}, {"name": "engagement", "dtype": "int32"}]}, {"name": "thread", "list": [{"name": "evaluation", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "userId", "dtype": "string"}, {"name": "time", "dtype": "int32"}]}], "config_name": "conv_ai", "splits": [{"name": "train", "num_bytes": 3924265, "num_examples": 2778}], "download_size": 5804611, "dataset_size": 3924265}}
2024-01-18T09:36:42+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #evaluating-dialogue-systems #region-us
# Dataset Card for ConvAi ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]() - Repository: [If the dataset is hosted on github or has a github homepage, add URL here]() - Paper: [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]() - Leaderboard: [If the dataset supports an active leaderboard, add link here]() - Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]() ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @patil-suraj for adding this dataset.
[ "# Dataset Card for ConvAi", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: [If the dataset is hosted on github or has a github homepage, add URL here]()\n- Paper: [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]()", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ "TAGS\n#task_categories-conversational #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #evaluating-dialogue-systems #region-us \n", "# Dataset Card for ConvAi", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: [If the dataset is hosted on github or has a github homepage, add URL here]()\n- Paper: [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]()", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ 110, 8, 120, 160, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #evaluating-dialogue-systems #region-us \n# Dataset Card for ConvAi## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: [If the dataset is hosted on github or has a github homepage, add URL here]()\n- Paper: [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]()### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information" ]
d290e3d80f05073e71870b0a73883a3ba243d832
# Dataset Card for conv_ai_2

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/DeepPavlov/convai/tree/master/2018
- **Repository:** https://github.com/DeepPavlov/convai/tree/master/2018
- **Paper:** https://arxiv.org/abs/1902.00098
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]

### Dataset Summary

ConvAI is a dataset of human-to-bot conversations labeled for quality. This data can be used to train a metric for evaluating dialogue systems. Moreover, it can be used in the development of chatbots themselves: it contains information on the quality of utterances and entire dialogues that can guide a dialogue system in search of better answers.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

```
{
    "dialog_id": "0x648cc5b7",
    "dialog": [
        {
            "id": 0,
            "sender": "participant2",
            "text": "Hi! How is your day? \ud83d\ude09",
            "sender_class": "Bot"
        },
        {
            "id": 1,
            "sender": "participant1",
            "text": "Hi! Great!",
            "sender_class": "Human"
        },
        {
            "id": 2,
            "sender": "participant2",
            "text": "I am good thanks for asking are you currently in high school?",
            "sender_class": "Bot"
        }
    ],
    "bot_profile": [
        "my current goal is to run a k.",
        "when i grow up i want to be a physical therapist.",
        "i'm currently in high school.",
        "i make straight as in school.",
        "i won homecoming queen this year."
    ],
    "user_profile": [
        "my favorite color is red.",
        "i enjoy listening to classical music.",
        "i'm a christian.",
        "i can drive a tractor."
    ],
    "eval_score": 4,
    "profile_match": 1
}
```

### Data Fields

- dialog_id : the unique ID for the dialog.
- dialog : an array of utterances in the dialog (each with an id, sender, text, and sender_class).
- bot_profile : the profile (persona) sentences given to the bot, used for evaluation.
- user_profile : the profile sentences of the user, used for evaluation.
- eval_score : (`1`, `2`, `3`, `4`, `5`) how much the user liked the conversation. Missing values are replaced with `-1`.
- profile_match : (`0`, `1`) the user is shown two profile descriptions (4 sentences each); one of them is the one given to the bot it had been talking to, the other one is random, and the user needs to choose one of them. Missing values are replaced with `-1`.

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @article{DBLP:journals/corr/abs-1902-00098, author = {Emily Dinan and Varvara Logacheva and Valentin Malykh and Alexander H. Miller and Kurt Shuster and Jack Urbanek and Douwe Kiela and Arthur Szlam and Iulian Serban and Ryan Lowe and Shrimai Prabhumoye and Alan W. Black and Alexander I. Rudnicky and Jason Williams and Joelle Pineau and Mikhail S. Burtsev and Jason Weston}, title = {The Second Conversational Intelligence Challenge (ConvAI2)}, journal = {CoRR}, volume = {abs/1902.00098}, year = {2019}, url = {http://arxiv.org/abs/1902.00098}, archivePrefix = {arXiv}, eprint = {1902.00098}, timestamp = {Wed, 07 Oct 2020 11:09:41 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1902-00098.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ### Contributions Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
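For a quick look at the data, here is a minimal loading sketch. It assumes the dataset remains reachable on the Hugging Face Hub under the `conv_ai_2` ID and that your `datasets` version can still run script-based datasets (recent releases may additionally require `trust_remote_code=True`); the variable names are illustrative only.

```python
from datasets import load_dataset

# Single "train" split (3,495 rated dialogues, per the dataset metadata).
dataset = load_dataset("conv_ai_2", split="train")

example = dataset[0]
print(example["dialog_id"])
for turn in example["dialog"]:
    # Each turn carries the raw text plus who produced it (Bot or Human).
    print(f'{turn["sender_class"]:>5}: {turn["text"]}')

# eval_score in {1..5} and profile_match in {0, 1}; -1 marks missing labels.
rated = dataset.filter(lambda ex: ex["eval_score"] != -1)
print(f"{len(rated)} of {len(dataset)} dialogues carry an evaluation score")
```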
conv_ai_2
[ "task_categories:conversational", "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "evaluating-dialogue-systems", "arxiv:1902.00098", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["conversational", "text-classification"], "task_ids": ["text-scoring"], "paperswithcode_id": "convai2", "pretty_name": "Conversational Intelligence Challenge 2", "tags": ["evaluating-dialogue-systems"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "dialog_id", "dtype": "string"}, {"name": "dialog", "list": [{"name": "id", "dtype": "int32"}, {"name": "sender", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "sender_class", "dtype": "string"}]}, {"name": "bot_profile", "sequence": {"list": "string"}}, {"name": "user_profile", "sequence": {"list": "string"}}, {"name": "eval_score", "dtype": "int32"}, {"name": "profile_match", "dtype": "int32"}], "config_name": "conv_ai_2", "splits": [{"name": "train", "num_bytes": 8403805, "num_examples": 3495}], "download_size": 6636788, "dataset_size": 8403805}}
2024-01-18T09:37:05+00:00
[ "1902.00098" ]
[ "en" ]
TAGS #task_categories-conversational #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #evaluating-dialogue-systems #arxiv-1902.00098 #region-us
# Dataset Card for conv_ai_2 ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary ConvAI is a dataset of human-to-bot conversations labeled for quality. This data can be used to train a metric for evaluating dialogue systems. Moreover, it can be used in the development of chatbots themselves: it contains information on the quality of utterances and entire dialogues, that can guide a dialogue system in search of better answers. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields - dialog_id : specifies the unique ID for the dialogs. - dialog : Array of dialogs. - bot_profile : Bot annotated response that will be used for evaluation. - user_profile : user annoted response that will be used for evaluation. - eval_score : ('1',' 2',' 3',' 4',' 5') how does an user like a conversation. The missing values are replaced with' -1' - profile_match : ('0',' 1') an user is given by two profile descriptions (4 sentences each), one of them is the one given to the bot it had been talking to, the other one is random; the user needs to choose one of them.The missing values are replaced with' -1' ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information @article{DBLP:journals/corr/abs-1902-00098, author = {Emily Dinan and Varvara Logacheva and Valentin Malykh and Alexander H. Miller and Kurt Shuster and Jack Urbanek and Douwe Kiela and Arthur Szlam and Iulian Serban and Ryan Lowe and Shrimai Prabhumoye and Alan W. Black and Alexander I. Rudnicky and Jason Williams and Joelle Pineau and Mikhail S. Burtsev and Jason Weston}, title = {The Second Conversational Intelligence Challenge (ConvAI2)}, journal = {CoRR}, volume = {abs/1902.00098}, year = {2019}, url = {URL archivePrefix = {arXiv}, eprint = {1902.00098}, timestamp = {Wed, 07 Oct 2020 11:09:41 +0200}, biburl = {URL bibsource = {dblp computer science bibliography, URL} } ### Contributions Thanks to @rkc007 for adding this dataset.
[ "# Dataset Card for conv_ai_2", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nConvAI is a dataset of human-to-bot conversations labeled for quality. This data can be used to train a metric for evaluating dialogue systems. Moreover, it can be used in the development of chatbots themselves: it contains information on the quality of utterances and entire dialogues, that can guide a dialogue system in search of better answers.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- dialog_id : specifies the unique ID for the dialogs.\n- dialog : Array of dialogs.\n- bot_profile : Bot annotated response that will be used for evaluation.\n- user_profile : user annoted response that will be used for evaluation.\n- eval_score : ('1',' 2',' 3',' 4',' 5') how does an user like a conversation. The missing values are replaced with' -1'\n- profile_match : ('0',' 1') an user is given by two profile descriptions (4 sentences each), one of them is the one given to the bot it had been talking to, the other one is random; the user needs to choose one of them.The missing values are replaced with' -1'", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@article{DBLP:journals/corr/abs-1902-00098,\n author = {Emily Dinan and\n Varvara Logacheva and\n Valentin Malykh and\n Alexander H. Miller and\n Kurt Shuster and\n Jack Urbanek and\n Douwe Kiela and\n Arthur Szlam and\n Iulian Serban and\n Ryan Lowe and\n Shrimai Prabhumoye and\n Alan W. Black and\n Alexander I. Rudnicky and\n Jason Williams and\n Joelle Pineau and\n Mikhail S. Burtsev and\n Jason Weston},\n title = {The Second Conversational Intelligence Challenge (ConvAI2)},\n journal = {CoRR},\n volume = {abs/1902.00098},\n year = {2019},\n url = {URL\n archivePrefix = {arXiv},\n eprint = {1902.00098},\n timestamp = {Wed, 07 Oct 2020 11:09:41 +0200},\n biburl = {URL\n bibsource = {dblp computer science bibliography, URL}\n}", "### Contributions\n\nThanks to @rkc007 for adding this dataset." ]
[ "TAGS\n#task_categories-conversational #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #evaluating-dialogue-systems #arxiv-1902.00098 #region-us \n", "# Dataset Card for conv_ai_2", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nConvAI is a dataset of human-to-bot conversations labeled for quality. This data can be used to train a metric for evaluating dialogue systems. Moreover, it can be used in the development of chatbots themselves: it contains information on the quality of utterances and entire dialogues, that can guide a dialogue system in search of better answers.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- dialog_id : specifies the unique ID for the dialogs.\n- dialog : Array of dialogs.\n- bot_profile : Bot annotated response that will be used for evaluation.\n- user_profile : user annoted response that will be used for evaluation.\n- eval_score : ('1',' 2',' 3',' 4',' 5') how does an user like a conversation. The missing values are replaced with' -1'\n- profile_match : ('0',' 1') an user is given by two profile descriptions (4 sentences each), one of them is the one given to the bot it had been talking to, the other one is random; the user needs to choose one of them.The missing values are replaced with' -1'", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@article{DBLP:journals/corr/abs-1902-00098,\n author = {Emily Dinan and\n Varvara Logacheva and\n Valentin Malykh and\n Alexander H. Miller and\n Kurt Shuster and\n Jack Urbanek and\n Douwe Kiela and\n Arthur Szlam and\n Iulian Serban and\n Ryan Lowe and\n Shrimai Prabhumoye and\n Alan W. Black and\n Alexander I. Rudnicky and\n Jason Williams and\n Joelle Pineau and\n Mikhail S. Burtsev and\n Jason Weston},\n title = {The Second Conversational Intelligence Challenge (ConvAI2)},\n journal = {CoRR},\n volume = {abs/1902.00098},\n year = {2019},\n url = {URL\n archivePrefix = {arXiv},\n eprint = {1902.00098},\n timestamp = {Wed, 07 Oct 2020 11:09:41 +0200},\n biburl = {URL\n bibsource = {dblp computer science bibliography, URL}\n}", "### Contributions\n\nThanks to @rkc007 for adding this dataset." ]
[ 118, 11, 120, 27, 87, 10, 4, 6, 6, 179, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 222, 17 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #evaluating-dialogue-systems #arxiv-1902.00098 #region-us \n# Dataset Card for conv_ai_2## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nConvAI is a dataset of human-to-bot conversations labeled for quality. This data can be used to train a metric for evaluating dialogue systems. Moreover, it can be used in the development of chatbots themselves: it contains information on the quality of utterances and entire dialogues, that can guide a dialogue system in search of better answers.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances" ]
b55872b8e6a40da5dc84d4c35c64575123e98b16
# Dataset Card for conv_ai_3 (ClariQ) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/aliannejadi/ClariQ - **Repository:** https://github.com/aliannejadi/ClariQ - **Paper:** https://arxiv.org/abs/2009.11352 - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary The ConvAI3 challenge is organized as part of the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of conversational systems is to return an appropriate answer in response to user requests. However, some user requests might be ambiguous. In Information Retrieval (IR) settings such a situation is handled mainly through diversification of the search result page. It is, however, much more challenging in dialogue settings. Hence, we aim to study the following situation for dialogue settings: - a user asks an ambiguous question (an ambiguous question being one to which more than one possible answer can be returned) - the system must identify that the question is ambiguous, and, instead of trying to answer it directly, ask a good clarifying question.
### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Here are a few examples from the dataset: ``` {'topic_id': 8, 'facet_id': 'F0968', 'initial_request': 'I want to know about appraisals.', 'topic_desc': 'Find information about the appraisals in nearby companies.', 'clarification_need': 2, 'question_id': 'F0001', 'question': 'are you looking for a type of appraiser', 'answer': 'im looking for nearby companies that do home appraisals', 'facet_desc': 'Get the TYPE of Appraisals', 'conversation_context': [], 'context_id': 968} ``` ``` {'topic_id': 8, 'facet_id': 'F0969', 'initial_request': 'I want to know about appraisals.', 'topic_desc': 'Find information about the type of appraisals.', 'clarification_need': 2, 'question_id': 'F0005', 'question': 'are you looking for a type of appraiser', 'facet_desc': 'Get the TYPE of Appraisals', 'answer': 'yes jewelry', 'conversation_context': [], 'context_id': 969} ``` ``` {'topic_id': 293, 'facet_id': 'F0729', 'initial_request': 'Tell me about the educational advantages of social networking sites.', 'topic_desc': 'Find information about the educational benefits of the social media sites', 'clarification_need': 2, 'question_id': 'F0009', 'question': 'which social networking sites would you like information on', 'answer': 'i don have a specific one in mind just overall educational benefits to social media sites', 'facet_desc': 'Detailed information about the Networking Sites.', 'conversation_context': [{'question': 'what level of schooling are you interested in gaining the advantages to social networking sites', 'answer': 'all levels'}, {'question': 'what type of educational advantages are you seeking from social networking', 'answer': 'i just want to know if there are any'}], 'context_id': 976573} ``` ### Data Fields - `topic_id`: the ID of the topic (`initial_request`). - `initial_request`: the query (text) that initiates the conversation. - `topic_desc`: a full description of the topic as it appears in the TREC Web Track data. - `clarification_need`: a label from 1 to 4, indicating how much a topic needs to be clarified. If an `initial_request` is self-contained and needs no clarification, the label is 1, while if an `initial_request` is completely ambiguous, making it impossible for a search engine to guess the user's right intent before clarification, the label is 4. - `facet_id`: the ID of the facet. - `facet_desc`: a full description of the facet (information need) as it appears in the TREC Web Track data. - `question_id`: the ID of the question. - `question`: a clarifying question that the system can pose to the user for the current topic and facet. - `answer`: an answer to the clarifying question, assuming that the user is in the context of the current row (i.e., the user's initial query is `initial_request`, their information need is `facet_desc`, and `question` has been posed to the user). ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @misc{aliannejadi2020convai3, title={ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ)}, author={Mohammad Aliannejadi and Julia Kiseleva and Aleksandr Chuklin and Jeff Dalton and Mikhail Burtsev}, year={2020}, eprint={2009.11352}, archivePrefix={arXiv}, primaryClass={cs.CL} } ### Contributions Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
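As a quick usage sketch (assuming the dataset is reachable on the Hugging Face Hub under the `conv_ai_3` ID; recent `datasets` releases may additionally require `trust_remote_code=True` for script-based datasets), one can load the train/validation splits and inspect the `clarification_need` labels:

```python
from collections import Counter

from datasets import load_dataset

clariq = load_dataset("conv_ai_3")  # splits: train and validation

example = clariq["train"][0]
print(example["initial_request"])
print(example["question"], "->", example["answer"])

# clarification_need ranges from 1 (self-contained request) to 4 (completely ambiguous).
label_counts = Counter(clariq["train"]["clarification_need"])
print(dict(sorted(label_counts.items())))
```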
conv_ai_3
[ "task_categories:conversational", "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "evaluating-dialogue-systems", "arxiv:2009.11352", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["conversational", "text-classification"], "task_ids": ["text-scoring"], "pretty_name": "More Information Needed", "tags": ["evaluating-dialogue-systems"], "dataset_info": {"features": [{"name": "topic_id", "dtype": "int32"}, {"name": "initial_request", "dtype": "string"}, {"name": "topic_desc", "dtype": "string"}, {"name": "clarification_need", "dtype": "int32"}, {"name": "facet_id", "dtype": "string"}, {"name": "facet_desc", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "config_name": "conv_ai_3", "splits": [{"name": "train", "num_bytes": 2567404, "num_examples": 9176}, {"name": "validation", "num_bytes": 639351, "num_examples": 2313}], "download_size": 2940038, "dataset_size": 3206755}}
2024-01-18T09:37:27+00:00
[ "2009.11352" ]
[ "en" ]
TAGS #task_categories-conversational #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #evaluating-dialogue-systems #arxiv-2009.11352 #region-us
# Dataset Card for ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary The Conv AI 3 challenge is organized as part of the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of the conversational systems is to return an appropriate answer in response to the user requests. However, some user requests might be ambiguous. In Information Retrieval (IR) settings such a situation is handled mainly through the diversification of search result page. It is however much more challenging in dialogue settings. Hence, we aim to study the following situation for dialogue settings: - a user is asking an ambiguous question (where ambiguous question is a question to which one can return > 1 possible answers) - the system must identify that the question is ambiguous, and, instead of trying to answer it directly, ask a good clarifying question. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances Here are a few examples from the dataset: ### Data Fields - 'topic_id': the ID of the topic ('initial_request'). - 'initial_request': the query (text) that initiates the conversation. - 'topic_desc': a full description of the topic as it appears in the TREC Web Track data. - 'clarification_need': a label from 1 to 4, indicating how much it is needed to clarify a topic. If an 'initial_request' is self-contained and would not need any clarification, the label would be 1. While if a 'initial_request' is absolutely ambiguous, making it impossible for a search engine to guess the user's right intent before clarification, the label would be 4. - 'facet_id': the ID of the facet. - 'facet_desc': a full description of the facet (information need) as it appears in the TREC Web Track data. - 'question_id': the ID of the question.. - 'question': a clarifying question that the system can pose to the user for the current topic and facet. - 'answer': an answer to the clarifying question, assuming that the user is in the context of the current row (i.e., the user's initial query is 'initial_request', their information need is 'facet_desc', and 'question' has been posed to the user). ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information @misc{aliannejadi2020convai3, title={ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ)}, author={Mohammad Aliannejadi and Julia Kiseleva and Aleksandr Chuklin and Jeff Dalton and Mikhail Burtsev}, year={2020}, eprint={2009.11352}, archivePrefix={arXiv}, primaryClass={cs.CL} } ### Contributions Thanks to @rkc007 for adding this dataset.
[ "# Dataset Card for", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThe Conv AI 3 challenge is organized as part of the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of the conversational systems is to return an appropriate answer in response to the user requests. However, some user requests might be ambiguous. In Information Retrieval (IR) settings such a situation is handled mainly through the diversification of search result page. It is however much more challenging in dialogue settings. Hence, we aim to study the following situation for dialogue settings:\n\n- a user is asking an ambiguous question (where ambiguous question is a question to which one can return > 1 possible answers)\n- the system must identify that the question is ambiguous, and, instead of trying to answer it directly, ask a good clarifying question.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere are a few examples from the dataset:", "### Data Fields\n\n- 'topic_id': the ID of the topic ('initial_request').\n- 'initial_request': the query (text) that initiates the conversation.\n- 'topic_desc': a full description of the topic as it appears in the TREC Web Track data.\n- 'clarification_need': a label from 1 to 4, indicating how much it is needed to clarify a topic. If an 'initial_request' is self-contained and would not need any clarification, the label would be 1. 
While if a 'initial_request' is absolutely ambiguous, making it impossible for a search engine to guess the user's right intent before clarification, the label would be 4.\n- 'facet_id': the ID of the facet.\n- 'facet_desc': a full description of the facet (information need) as it appears in the TREC Web Track data.\n- 'question_id': the ID of the question..\n- 'question': a clarifying question that the system can pose to the user for the current topic and facet.\n- 'answer': an answer to the clarifying question, assuming that the user is in the context of the current row (i.e., the user's initial query is 'initial_request', their information need is 'facet_desc', and 'question' has been posed to the user).", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@misc{aliannejadi2020convai3,\ntitle={ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ)},\nauthor={Mohammad Aliannejadi and Julia Kiseleva and Aleksandr Chuklin and Jeff Dalton and Mikhail Burtsev},\nyear={2020},\neprint={2009.11352},\narchivePrefix={arXiv},\nprimaryClass={cs.CL}\n}", "### Contributions\n\nThanks to @rkc007 for adding this dataset." ]
[ "TAGS\n#task_categories-conversational #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #evaluating-dialogue-systems #arxiv-2009.11352 #region-us \n", "# Dataset Card for", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThe Conv AI 3 challenge is organized as part of the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of the conversational systems is to return an appropriate answer in response to the user requests. However, some user requests might be ambiguous. In Information Retrieval (IR) settings such a situation is handled mainly through the diversification of search result page. It is however much more challenging in dialogue settings. Hence, we aim to study the following situation for dialogue settings:\n\n- a user is asking an ambiguous question (where ambiguous question is a question to which one can return > 1 possible answers)\n- the system must identify that the question is ambiguous, and, instead of trying to answer it directly, ask a good clarifying question.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere are a few examples from the dataset:", "### Data Fields\n\n- 'topic_id': the ID of the topic ('initial_request').\n- 'initial_request': the query (text) that initiates the conversation.\n- 'topic_desc': a full description of the topic as it appears in the TREC Web Track data.\n- 'clarification_need': a label from 1 to 4, indicating how much it is needed to clarify a topic. If an 'initial_request' is self-contained and would not need any clarification, the label would be 1. 
While if a 'initial_request' is absolutely ambiguous, making it impossible for a search engine to guess the user's right intent before clarification, the label would be 4.\n- 'facet_id': the ID of the facet.\n- 'facet_desc': a full description of the facet (information need) as it appears in the TREC Web Track data.\n- 'question_id': the ID of the question..\n- 'question': a clarifying question that the system can pose to the user for the current topic and facet.\n- 'answer': an answer to the clarifying question, assuming that the user is in the context of the current row (i.e., the user's initial query is 'initial_request', their information need is 'facet_desc', and 'question' has been posed to the user).", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@misc{aliannejadi2020convai3,\ntitle={ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ)},\nauthor={Mohammad Aliannejadi and Julia Kiseleva and Aleksandr Chuklin and Jeff Dalton and Mikhail Burtsev},\nyear={2020},\neprint={2009.11352},\narchivePrefix={arXiv},\nprimaryClass={cs.CL}\n}", "### Contributions\n\nThanks to @rkc007 for adding this dataset." ]
[ 118, 5, 120, 27, 186, 10, 4, 6, 17, 337, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 115, 17 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #evaluating-dialogue-systems #arxiv-2009.11352 #region-us \n# Dataset Card for## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThe Conv AI 3 challenge is organized as part of the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020. The main aim of the conversational systems is to return an appropriate answer in response to the user requests. However, some user requests might be ambiguous. In Information Retrieval (IR) settings such a situation is handled mainly through the diversification of search result page. It is however much more challenging in dialogue settings. Hence, we aim to study the following situation for dialogue settings:\n\n- a user is asking an ambiguous question (where ambiguous question is a question to which one can return > 1 possible answers)\n- the system must identify that the question is ambiguous, and, instead of trying to answer it directly, ask a good clarifying question.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances\n\nHere are a few examples from the dataset:" ]
d83b5d7e1aed9485552ad6861f566ec182a0d5b6
# Dataset Card for ConvQuestions ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [ConvQuestions page](https://convex.mpi-inf.mpg.de) - **Repository:** [GitHub](https://github.com/PhilippChr/CONVEX) - **Paper:** [Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion](https://arxiv.org/abs/1910.03262) - **Leaderboard:** [ConvQuestions leaderboard](https://convex.mpi-inf.mpg.de) - **Point of Contact:** [Philipp Christmann](mailto:pchristm@mpi-inf.mpg.de) ### Dataset Summary ConvQuestions is the first realistic benchmark for conversational question answering over knowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata. They are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk, with conversations from five domains: Books, Movies, Soccer, Music, and TV Series. The questions feature a variety of complex question phenomena like comparisons, aggregations, compositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable fair comparison across diverse methods. The data gathering setup was kept as natural as possible, with the annotators selecting entities of their choice from each of the five domains, and formulating the entire conversation in one session. All questions in a conversation are from the same Turker, who also provided gold answers to the questions. For suitability to knowledge graphs, questions were constrained to be objective or factoid in nature, but no other restrictive guidelines were set. A notable property of ConvQuestions is that several questions are not answerable by Wikidata alone (as of September 2019), but the required facts can, for example, be found in the open Web or in Wikipedia. For details, please refer to the CIKM 2019 full paper (https://dl.acm.org/citation.cfm?id=3358016). ### Supported Tasks and Leaderboards [Needs More Information] ### Languages en ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` { 'domain': 'music', 'seed_entity': 'https://www.wikidata.org/wiki/Q223495', 'seed_entity_text': 'The Carpenters', 'questions': [ 'When did The Carpenters sign with A&M Records?', 'What song was their first hit?', 'When did Karen die?', 'Karen had what eating problem?', 'and how did she die?'
], 'answers': [ [ '1969' ], [ 'https://www.wikidata.org/wiki/Q928282' ], [ '1983' ], [ 'https://www.wikidata.org/wiki/Q131749' ], [ 'https://www.wikidata.org/wiki/Q181754' ] ], 'answer_texts': [ '1969', '(They Long to Be) Close to You', '1983', 'anorexia nervosa', 'heart failure' ] } ``` ### Data Fields - `domain`: a `string` feature. Any of: ['books', 'movies', 'music', 'soccer', 'tv_series'] - `seed_entity`: a `string` feature. Wikidata ID of the topic entity. - `seed_entity_text`: a `string` feature. Surface form of the topic entity. - `questions`: a `list` of `string` features. List of questions (initial question and follow-up questions). - `answers`: a `list` of `lists` of `string` features. List of answers, given as Wikidata IDs or literals (e.g. timestamps or names). - `answer_texts`: a `list` of `string` features. List of surface forms of the answers. ### Data Splits |train|validation|test| |----:|---------:|----:| | 6720| 2240| 2240| ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process With insights from a meticulous in-house pilot study with ten students over two weeks, the authors posed the conversation generation task on Amazon Mechanical Turk (AMT) in the most natural setup: Each crowdworker was asked to build a conversation by asking five sequential questions starting from any seed entity of his/her choice, as this is an intuitive mental model that humans may have when satisfying their real information needs via their search assistants. #### Who are the annotators? Local students (Saarland Informatics Campus) and AMT Master Workers. ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information The ConvQuestions benchmark is licensed under a Creative Commons Attribution 4.0 International License. ### Citation Information ``` @InProceedings{christmann2019look, title={Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion}, author={Christmann, Philipp and Saha Roy, Rishiraj and Abujabal, Abdalghani and Singh, Jyotsna and Weikum, Gerhard}, booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management}, pages={729--738}, year={2019} } ``` ### Contributions Thanks to [@PhilippChr](https://github.com/PhilippChr) for adding this dataset.
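A minimal loading sketch follows; it assumes the dataset is available on the Hugging Face Hub under the `conv_questions` ID, and the variable names are illustrative only:

```python
from datasets import load_dataset

convq = load_dataset("conv_questions")  # splits: train, validation, test

conv = convq["train"][0]
print(conv["domain"], "-", conv["seed_entity_text"])

# questions, answers, and answer_texts are aligned by position; each answer is a
# list because a question may have several gold answers (Wikidata IDs or literals).
for question, answer, text in zip(conv["questions"], conv["answers"], conv["answer_texts"]):
    print(question)
    print("  ->", text, answer)
```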
conv_questions
[ "task_categories:question-answering", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:open-domain-qa", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1910.03262", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-generation", "fill-mask"], "task_ids": ["open-domain-qa", "dialogue-modeling"], "pretty_name": "ConvQuestions", "language_bcp47": ["en-US"], "dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "seed_entity", "dtype": "string"}, {"name": "seed_entity_text", "dtype": "string"}, {"name": "questions", "sequence": "string"}, {"name": "answers", "sequence": {"sequence": "string"}}, {"name": "answer_texts", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 3589880, "num_examples": 6720}, {"name": "validation", "num_bytes": 1241778, "num_examples": 2240}, {"name": "test", "num_bytes": 1175656, "num_examples": 2240}], "download_size": 3276017, "dataset_size": 6007314}}
2024-01-18T09:37:57+00:00
[ "1910.03262" ]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-text-generation #task_categories-fill-mask #task_ids-open-domain-qa #task_ids-dialogue-modeling #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1910.03262 #region-us
Dataset Card for ConvQuestions ============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: ConvQuestions page * Repository: GitHub * Paper: Look before you hop: Conversational question answering over knowledge graphs using judicious context expansion * Leaderboard: ConvQuestions leaderboard * Point of Contact: Philipp Christmann ### Dataset Summary ConvQuestions is the first realistic benchmark for conversational question answering over knowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata. They are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk, with conversations from five domains: Books, Movies, Soccer, Music, and TV Series. The questions feature a variety of complex question phenomena like comparisons, aggregations, compositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable fair comparison across diverse methods. The data gathering setup was kept as natural as possible, with the annotators selecting entities of their choice from each of the five domains, and formulating the entire conversation in one session. All questions in a conversation are from the same Turker, who also provided gold answers to the questions. For suitability to knowledge graphs, questions were constrained to be objective or factoid in nature, but no other restrictive guidelines were set. A notable property of ConvQuestions is that several questions are not answerable by Wikidata alone (as of September 2019), but the required facts can, for example, be found in the open Web or in Wikipedia. For details, please refer to the CIKM 2019 full paper (URL ### Supported Tasks and Leaderboards ### Languages en Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Data Fields * 'domain': a 'string' feature. Any of: ['books', 'movies', 'music', 'soccer', 'tv\_series'] * 'seed\_entity': a 'string' feature. Wikidata ID of the topic entity. * 'seed\_entity\_text': a 'string' feature. Surface form of the topic entity. * 'questions': a 'list' of 'string' features. List of questions (initial question and follow-up questions). * 'answers': a 'list' of 'lists' of 'string' features. List of answers, given as Wikidata IDs or literals (e.g. timestamps or names). * 'answer\_texts': a 'list' of 'string' features. List of surface forms of the answers. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? 
### Annotations #### Annotation process With insights from a meticulous in-house pilot study with ten students over two weeks, the authors posed the conversation generation task on Amazon Mechanical Turk (AMT) in the most natural setup: Each crowdworker was asked to build a conversation by asking five sequential questions starting from any seed entity of his/her choice, as this is an intuitive mental model that humans may have when satisfying their real information needs via their search assistants. #### Who are the annotators? Local students (Saarland Informatics Campus) and AMT Master Workers. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The ConvQuestions benchmark is licensed under a Creative Commons Attribution 4.0 International License. ### Contributions Thanks to @PhilippChr for adding this dataset.
[ "### Dataset Summary\n\n\nConvQuestions is the first realistic benchmark for conversational question answering over\nknowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata.\nThey are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk,\nwith conversations from five domains: Books, Movies, Soccer, Music, and TV Series.\nThe questions feature a variety of complex question phenomena like comparisons, aggregations,\ncompositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable\nfair comparison across diverse methods. The data gathering setup was kept as natural as\npossible, with the annotators selecting entities of their choice from each of the five domains,\nand formulating the entire conversation in one session. All questions in a conversation are\nfrom the same Turker, who also provided gold answers to the questions. For suitability to knowledge\ngraphs, questions were constrained to be objective or factoid in nature, but no other restrictive\nguidelines were set. A notable property of ConvQuestions is that several questions are not\nanswerable by Wikidata alone (as of September 2019), but the required facts can, for example,\nbe found in the open Web or in Wikipedia. For details, please refer to the CIKM 2019 full paper\n(URL", "### Supported Tasks and Leaderboards", "### Languages\n\n\nen\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\n* 'domain': a 'string' feature. Any of: ['books', 'movies', 'music', 'soccer', 'tv\\_series']\n* 'seed\\_entity': a 'string' feature. Wikidata ID of the topic entity.\n* 'seed\\_entity\\_text': a 'string' feature. Surface form of the topic entity.\n* 'questions': a 'list' of 'string' features. List of questions (initial question and follow-up questions).\n* 'answers': a 'list' of 'lists' of 'string' features. List of answers, given as Wikidata IDs or literals (e.g. timestamps or names).\n* 'answer\\_texts': a 'list' of 'string' features. List of surface forms of the answers.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nWith insights from a meticulous in-house pilot study with ten students over two weeks, the authors posed the conversation generation task on Amazon Mechanical Turk (AMT) in the most natural setup: Each crowdworker was asked to build a conversation by asking five sequential questions starting from any seed entity of his/her choice, as this is an intuitive mental model that humans may have when satisfying their real information needs via their search assistants.", "#### Who are the annotators?\n\n\nLocal students (Saarland Informatics Campus) and AMT Master Workers.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe ConvQuestions benchmark is licensed under a Creative Commons Attribution 4.0 International License.", "### Contributions\n\n\nThanks to @PhilippChr for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-fill-mask #task_ids-open-domain-qa #task_ids-dialogue-modeling #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1910.03262 #region-us \n", "### Dataset Summary\n\n\nConvQuestions is the first realistic benchmark for conversational question answering over\nknowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata.\nThey are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk,\nwith conversations from five domains: Books, Movies, Soccer, Music, and TV Series.\nThe questions feature a variety of complex question phenomena like comparisons, aggregations,\ncompositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable\nfair comparison across diverse methods. The data gathering setup was kept as natural as\npossible, with the annotators selecting entities of their choice from each of the five domains,\nand formulating the entire conversation in one session. All questions in a conversation are\nfrom the same Turker, who also provided gold answers to the questions. For suitability to knowledge\ngraphs, questions were constrained to be objective or factoid in nature, but no other restrictive\nguidelines were set. A notable property of ConvQuestions is that several questions are not\nanswerable by Wikidata alone (as of September 2019), but the required facts can, for example,\nbe found in the open Web or in Wikipedia. For details, please refer to the CIKM 2019 full paper\n(URL", "### Supported Tasks and Leaderboards", "### Languages\n\n\nen\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\n* 'domain': a 'string' feature. Any of: ['books', 'movies', 'music', 'soccer', 'tv\\_series']\n* 'seed\\_entity': a 'string' feature. Wikidata ID of the topic entity.\n* 'seed\\_entity\\_text': a 'string' feature. Surface form of the topic entity.\n* 'questions': a 'list' of 'string' features. List of questions (initial question and follow-up questions).\n* 'answers': a 'list' of 'lists' of 'string' features. List of answers, given as Wikidata IDs or literals (e.g. timestamps or names).\n* 'answer\\_texts': a 'list' of 'string' features. 
List of surface forms of the answers.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nWith insights from a meticulous in-house pilot study with ten students over two weeks, the authors posed the conversation generation task on Amazon Mechanical Turk (AMT) in the most natural setup: Each crowdworker was asked to build a conversation by asking five sequential questions starting from any seed entity of his/her choice, as this is an intuitive mental model that humans may have when satisfying their real information needs via their search assistants.", "#### Who are the annotators?\n\n\nLocal students (Saarland Informatics Campus) and AMT Master Workers.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe ConvQuestions benchmark is licensed under a Creative Commons Attribution 4.0 International License.", "### Contributions\n\n\nThanks to @PhilippChr for adding this dataset." ]
[ 139, 289, 10, 12, 18, 206, 11, 7, 4, 10, 10, 5, 102, 26, 18, 7, 8, 14, 6, 24, 18 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-fill-mask #task_ids-open-domain-qa #task_ids-dialogue-modeling #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1910.03262 #region-us \n### Dataset Summary\n\n\nConvQuestions is the first realistic benchmark for conversational question answering over\nknowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata.\nThey are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk,\nwith conversations from five domains: Books, Movies, Soccer, Music, and TV Series.\nThe questions feature a variety of complex question phenomena like comparisons, aggregations,\ncompositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable\nfair comparison across diverse methods. The data gathering setup was kept as natural as\npossible, with the annotators selecting entities of their choice from each of the five domains,\nand formulating the entire conversation in one session. All questions in a conversation are\nfrom the same Turker, who also provided gold answers to the questions. For suitability to knowledge\ngraphs, questions were constrained to be objective or factoid in nature, but no other restrictive\nguidelines were set. A notable property of ConvQuestions is that several questions are not\nanswerable by Wikidata alone (as of September 2019), but the required facts can, for example,\nbe found in the open Web or in Wikipedia. For details, please refer to the CIKM 2019 full paper\n(URL### Supported Tasks and Leaderboards### Languages\n\n\nen\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows." ]
0d9e9952f1ef6e5415492d3d84b5873259137e3c
# Dataset Card for "coqa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://stanfordnlp.github.io/coqa/](https://stanfordnlp.github.io/coqa/) - **Repository:** https://github.com/stanfordnlp/coqa-baselines - **Paper:** [CoQA: A Conversational Question Answering Challenge](https://arxiv.org/abs/1808.07042) - **Point of Contact:** [Google Group](https://groups.google.com/forum/#!forum/coqa), [Siva Reddy](mailto:siva.reddy@mila.quebec), [Danqi Chen](mailto:danqic@cs.princeton.edu) - **Size of downloaded dataset files:** 58.09 MB - **Size of the generated dataset:** 19.24 MB - **Total amount of disk used:** 77.33 MB ### Dataset Summary CoQA is a large-scale dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 58.09 MB - **Size of the generated dataset:** 19.24 MB - **Total amount of disk used:** 77.33 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "answers": "{\"answer_end\": [179, 494, 511, 545, 879, 1127, 1128, 94, 150, 412, 1009, 1046, 643, -1, 764, 724, 125, 1384, 881, 910], \"answer_...", "questions": "[\"When was the Vat formally opened?\", \"what is the library for?\", \"for what subjects?\", \"and?\", \"what was started in 2014?\", \"ho...", "source": "wikipedia", "story": "\"The Vatican Apostolic Library (), more commonly called the Vatican Library or simply the Vat, is the library of the Holy See, l..." } ``` ### Data Fields The data fields are the same among all splits. #### default - `source`: a `string` feature. - `story`: a `string` feature. - `questions`: a `list` of `string` features. - `answers`: a dictionary feature containing: - `input_text`: a `string` feature. - `answer_start`: a `int32` feature. - `answer_end`: a `int32` feature. 
### Data Splits | name |train|validation| |-------|----:|---------:| |default| 7199| 500| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information CoQA contains passages from seven domains. We make five of these public under the following licenses: - Literature and Wikipedia passages are shared under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license. - Children's stories are collected from [MCTest](https://www.microsoft.com/en-us/research/publication/mctest-challenge-dataset-open-domain-machine-comprehension-text/) which comes with [MSR-LA](https://github.com/mcobzarenco/mctest/blob/master/data/MCTest/LICENSE.pdf) license. - Middle/High school exam passages are collected from [RACE](https://arxiv.org/abs/1704.04683) which comes with its [own](http://www.cs.cmu.edu/~glai1/data/race/) license. - News passages are collected from the [DeepMind CNN dataset](https://arxiv.org/abs/1506.03340) which comes with [Apache](https://github.com/deepmind/rc-data/blob/master/LICENSE) license. ### Citation Information ``` @article{reddy-etal-2019-coqa, title = "{C}o{QA}: A Conversational Question Answering Challenge", author = "Reddy, Siva and Chen, Danqi and Manning, Christopher D.", journal = "Transactions of the Association for Computational Linguistics", volume = "7", year = "2019", address = "Cambridge, MA", publisher = "MIT Press", url = "https://aclanthology.org/Q19-1016", doi = "10.1162/tacl_a_00266", pages = "249--266", } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@ojasaar](https://github.com/ojasaar), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
stanfordnlp/coqa
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|race", "source_datasets:extended|cnn_dailymail", "source_datasets:extended|wikipedia", "source_datasets:extended|other", "language:en", "license:other", "conversational-qa", "arxiv:1808.07042", "arxiv:1704.04683", "arxiv:1506.03340", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|race", "extended|cnn_dailymail", "extended|wikipedia", "extended|other"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "paperswithcode_id": "coqa", "pretty_name": "CoQA: Conversational Question Answering Challenge", "tags": ["conversational-qa"], "dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "story", "dtype": "string"}, {"name": "questions", "sequence": "string"}, {"name": "answers", "sequence": [{"name": "input_text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}, {"name": "answer_end", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 17953365, "num_examples": 7199}, {"name": "validation", "num_bytes": 1223427, "num_examples": 500}], "download_size": 12187487, "dataset_size": 19176792}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-04T07:47:32+00:00
[ "1808.07042", "1704.04683", "1506.03340" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|race #source_datasets-extended|cnn_dailymail #source_datasets-extended|wikipedia #source_datasets-extended|other #language-English #license-other #conversational-qa #arxiv-1808.07042 #arxiv-1704.04683 #arxiv-1506.03340 #region-us
Dataset Card for "coqa" ======================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: CoQA: A Conversational Question Answering Challenge * Point of Contact: Google Group, Siva Reddy, Danqi Chen * Size of downloaded dataset files: 58.09 MB * Size of the generated dataset: 19.24 MB * Total amount of disk used: 77.33 MB ### Dataset Summary CoQA is a large-scale dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 58.09 MB * Size of the generated dataset: 19.24 MB * Total amount of disk used: 77.33 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'source': a 'string' feature. * 'story': a 'string' feature. * 'questions': a 'list' of 'string' features. * 'answers': a dictionary feature containing: + 'input\_text': a 'string' feature. + 'answer\_start': a 'int32' feature. + 'answer\_end': a 'int32' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information CoQA contains passages from seven domains. We make five of these public under the following licenses: * Literature and Wikipedia passages are shared under CC BY-SA 4.0 license. * Children's stories are collected from MCTest which comes with MSR-LA license. * Middle/High school exam passages are collected from RACE which comes with its own license. * News passages are collected from the DeepMind CNN dataset which comes with Apache license. ### Contributions Thanks to @patrickvonplaten, @lewtun, @thomwolf, @mariamabarham, @ojasaar, @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nCoQA is a large-scale dataset for building Conversational Question Answering systems.\n\n\nOur dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 58.09 MB\n* Size of the generated dataset: 19.24 MB\n* Total amount of disk used: 77.33 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'source': a 'string' feature.\n* 'story': a 'string' feature.\n* 'questions': a 'list' of 'string' features.\n* 'answers': a dictionary feature containing:\n\t+ 'input\\_text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.\n\t+ 'answer\\_end': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCoQA contains passages from seven domains. We make five of these public under the following licenses:\n\n\n* Literature and Wikipedia passages are shared under CC BY-SA 4.0 license.\n* Children's stories are collected from MCTest which comes with MSR-LA license.\n* Middle/High school exam passages are collected from RACE which comes with its own license.\n* News passages are collected from the DeepMind CNN dataset which comes with Apache license.", "### Contributions\n\n\nThanks to @patrickvonplaten, @lewtun, @thomwolf, @mariamabarham, @ojasaar, @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|race #source_datasets-extended|cnn_dailymail #source_datasets-extended|wikipedia #source_datasets-extended|other #language-English #license-other #conversational-qa #arxiv-1808.07042 #arxiv-1704.04683 #arxiv-1506.03340 #region-us \n", "### Dataset Summary\n\n\nCoQA is a large-scale dataset for building Conversational Question Answering systems.\n\n\nOur dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 58.09 MB\n* Size of the generated dataset: 19.24 MB\n* Total amount of disk used: 77.33 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'source': a 'string' feature.\n* 'story': a 'string' feature.\n* 'questions': a 'list' of 'string' features.\n* 'answers': a dictionary feature containing:\n\t+ 'input\\_text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.\n\t+ 'answer\\_end': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCoQA contains passages from seven domains. We make five of these public under the following licenses:\n\n\n* Literature and Wikipedia passages are shared under CC BY-SA 4.0 license.\n* Children's stories are collected from MCTest which comes with MSR-LA license.\n* Middle/High school exam passages are collected from RACE which comes with its own license.\n* News passages are collected from the DeepMind CNN dataset which comes with Apache license.", "### Contributions\n\n\nThanks to @patrickvonplaten, @lewtun, @thomwolf, @mariamabarham, @ojasaar, @lhoestq for adding this dataset." ]
[ 161, 81, 10, 11, 6, 51, 17, 102, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 108, 44 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|race #source_datasets-extended|cnn_dailymail #source_datasets-extended|wikipedia #source_datasets-extended|other #language-English #license-other #conversational-qa #arxiv-1808.07042 #arxiv-1704.04683 #arxiv-1506.03340 #region-us \n### Dataset Summary\n\n\nCoQA is a large-scale dataset for building Conversational Question Answering systems.\n\n\nOur dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 58.09 MB\n* Size of the generated dataset: 19.24 MB\n* Total amount of disk used: 77.33 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'source': a 'string' feature.\n* 'story': a 'string' feature.\n* 'questions': a 'list' of 'string' features.\n* 'answers': a dictionary feature containing:\n\t+ 'input\\_text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.\n\t+ 'answer\\_end': a 'int32' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?" ]
50508938c74b7faee130f9b164b1d4d55d4e77e0
# Dataset Card for CORD-19

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://www.semanticscholar.org/cord19](https://www.semanticscholar.org/cord19)
- **Repository:** [https://github.com/allenai/cord19](https://github.com/allenai/cord19)
- **Paper:** [CORD-19: The COVID-19 Open Research Dataset](https://www.aclweb.org/anthology/2020.nlpcovid19-acl.1/)
- **Leaderboard:** [Kaggle challenge](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge)

### Dataset Summary

CORD-19 is a corpus of academic papers about COVID-19 and related coronavirus research. It's curated and maintained by the Semantic Scholar team at the Allen Institute for AI to support text mining and NLP research.

### Supported Tasks and Leaderboards

See tasks defined in the related [Kaggle challenge](https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge/tasks)

### Languages

The dataset is in English (en).

## Dataset Structure

### Data Instances

The following code block presents an overview of a sample in JSON-like syntax (abbreviated since some fields are very long):

```
{
    "abstract": "OBJECTIVE: This retrospective chart review describes the epidemiology and clinical features of 40 patients with culture-proven Mycoplasma pneumoniae infections at King Abdulaziz University Hospital, Jeddah, Saudi Arabia. METHODS: Patients with positive M. pneumoniae cultures from respiratory specimens from January 1997 through December 1998 were identified through the Microbiology records. Charts of patients were reviewed. RESULTS: 40 patients were identified [...]",
    "authors": "Madani, Tariq A; Al-Ghamdi, Aisha A",
    "cord_uid": "ug7v899j",
    "doc_embeddings": [
        -2.939983606338501,
        -6.312200546264648,
        -1.0459030866622925,
        [...] 766 values in total [...]
        -4.107113361358643,
        -3.8174145221710205,
        1.8976187705993652,
        5.811529159545898,
        -2.9323840141296387
    ],
    "doi": "10.1186/1471-2334-1-6",
    "journal": "BMC Infect Dis",
    "publish_time": "2001-07-04",
    "sha": "d1aafb70c066a2068b02786f8929fd9c900897fb",
    "source_x": "PMC",
    "title": "Clinical features of culture-proven Mycoplasma pneumoniae infections at King Abdulaziz University Hospital, Jeddah, Saudi Arabia",
    "url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC35282/"
}
```

### Data Fields

Currently only the following fields are integrated: `cord_uid`, `sha`, `source_x`, `title`, `doi`, `abstract`, `publish_time`, `authors`, `journal`. With the `fulltext` configuration, the sections transcribed in `pdf_json_files` are converted into the `fulltext` feature.
- `cord_uid`: A `str`-valued field that assigns a unique identifier to each CORD-19 paper. This is not necessarily unique per row, which is explained in the FAQs.
- `sha`: A `List[str]`-valued field that is the SHA1 of all PDFs associated with the CORD-19 paper. Most papers will have either zero or one value here (since we either have a PDF or we don't), but some papers will have multiple. For example, the main paper might have supplemental information saved in a separate PDF. Or we might have two separate PDF copies of the same paper. If multiple PDFs exist, their SHA1 will be semicolon-separated (e.g. `'4eb6e165ee705e2ae2a24ed2d4e67da42831ff4a; d4f0247db5e916c20eae3f6d772e8572eb828236'`)
- `source_x`: A `List[str]`-valued field that is the names of sources from which we received this paper. Also semicolon-separated. For example, `'ArXiv; Elsevier; PMC; WHO'`. There should always be at least one source listed.
- `title`: A `str`-valued field for the paper title
- `doi`: A `str`-valued field for the paper DOI
- `pmcid`: A `str`-valued field for the paper's ID on PubMed Central. Should begin with `PMC` followed by an integer.
- `pubmed_id`: An `int`-valued field for the paper's ID on PubMed.
- `license`: A `str`-valued field with the most permissive license we've found associated with this paper. Possible values include: `'cc0', 'hybrid-oa', 'els-covid', 'no-cc', 'cc-by-nc-sa', 'cc-by', 'gold-oa', 'biorxiv', 'green-oa', 'bronze-oa', 'cc-by-nc', 'medrxiv', 'cc-by-nd', 'arxiv', 'unk', 'cc-by-sa', 'cc-by-nc-nd'`
- `abstract`: A `str`-valued field for the paper's abstract
- `publish_time`: A `str`-valued field for the published date of the paper. This is in `yyyy-mm-dd` format. Not always accurate as some publishers will denote unknown dates with future dates like `yyyy-12-31`
- `authors`: A `List[str]`-valued field for the authors of the paper. Each author name is in `Last, First Middle` format and semicolon-separated.
- `journal`: A `str`-valued field for the paper journal. Strings are not normalized (e.g. `BMJ` and `British Medical Journal` can both exist). Empty string if unknown.
- `mag_id`: Deprecated, but originally an `int`-valued field for the paper as represented in the Microsoft Academic Graph.
- `who_covidence_id`: A `str`-valued field for the ID assigned by the WHO for this paper. Format looks like `#72306`.
- `arxiv_id`: A `str`-valued field for the arXiv ID of this paper.
- `pdf_json_files`: A `List[str]`-valued field containing paths from the root of the current data dump version to the parses of the paper PDFs into JSON format. Multiple paths are semicolon-separated. Example: `document_parses/pdf_json/4eb6e165ee705e2ae2a24ed2d4e67da42831ff4a.json; document_parses/pdf_json/d4f0247db5e916c20eae3f6d772e8572eb828236.json`
- `pmc_json_files`: A `List[str]`-valued field. Same as above, but corresponding to the full text XML files downloaded from PMC, parsed into the same JSON format as above.
- `url`: A `List[str]`-valued field containing all URLs associated with this paper. Semicolon-separated.
- `s2_id`: A `str`-valued field containing the Semantic Scholar ID for this paper. Can be used with the Semantic Scholar API (e.g.
`s2_id=9445722` corresponds to `http://api.semanticscholar.org/corpusid:9445722`) Extra fields based on selected configuration during loading: - `fulltext`: A `str`-valued field containing the concatenation of all text sections from json (itself extracted from pdf) - `doc_embeddings`: A `sequence` of float-valued elements containing document embeddings as a vector of floats (parsed from string of values separated by ','). Details on the system used to extract the embeddings are available in: [SPECTER: Document-level Representation Learning using Citation-informed Transformers](https://arxiv.org/abs/2004.07180). TL;DR: it's relying on a BERT model pre-trained on document-level relatedness using the citation graph. The system can be queried through REST (see [public API documentation](https://github.com/allenai/paper-embedding-public-apis)). ### Data Splits No annotation provided in this dataset so all instances are provided in training split. The sizes of each configuration are: | | train | |------------|-------:| | metadata | 368618 | | fulltext | 368618 | | embeddings | 368618 | ## Dataset Creation ### Curation Rationale See [official readme](https://github.com/allenai/cord19/blob/master/README.md) ### Source Data See [official readme](https://github.com/allenai/cord19/blob/master/README.md) #### Initial Data Collection and Normalization See [official readme](https://github.com/allenai/cord19/blob/master/README.md) #### Who are the source language producers? See [official readme](https://github.com/allenai/cord19/blob/master/README.md) ### Annotations No annotations #### Annotation process N/A #### Who are the annotators? N/A ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @article{Wang2020CORD19TC, title={CORD-19: The Covid-19 Open Research Dataset}, author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier}, journal={ArXiv}, year={2020} } ``` ### Contributions Thanks to [@ggdupont](https://github.com/ggdupont) for adding this dataset.
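To make the configurations above concrete, here is an illustrative loading sketch. The config names (`metadata`, `fulltext`, `embeddings`) and the `allenai/cord19` id are taken from this card and repository; because the dataset is loaded through a script, the `trust_remote_code=True` flag (or an older `datasets` release) may be required, and each configuration downloads the same ~6 GB archive.

```
from datasets import load_dataset

# "metadata" yields only the bibliographic fields described in the card.
meta = load_dataset("allenai/cord19", "metadata", split="train", trust_remote_code=True)
print(meta[0]["title"], meta[0]["publish_time"])

# "fulltext" adds the concatenated body text parsed from the PDF JSON files;
# "embeddings" adds `doc_embeddings`, the SPECTER document vector as a list of floats.
emb = load_dataset("allenai/cord19", "embeddings", split="train", trust_remote_code=True)
print(len(emb[0]["doc_embeddings"]))
```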
allenai/cord19
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-nd-4.0", "license:cc-by-sa-4.0", "license:other", "arxiv:2004.07180", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nd-4.0", "cc-by-sa-4.0", "other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "paperswithcode_id": "cord-19", "pretty_name": "CORD-19", "dataset_info": [{"config_name": "metadata", "features": [{"name": "cord_uid", "dtype": "string"}, {"name": "sha", "dtype": "string"}, {"name": "source_x", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "publish_time", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "journal", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 496247275, "num_examples": 368618}], "download_size": 6142360818, "dataset_size": 496247275}, {"config_name": "fulltext", "features": [{"name": "cord_uid", "dtype": "string"}, {"name": "sha", "dtype": "string"}, {"name": "source_x", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "publish_time", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "journal", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "fulltext", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3718245736, "num_examples": 368618}], "download_size": 6142360818, "dataset_size": 3718245736}, {"config_name": "embeddings", "features": [{"name": "cord_uid", "dtype": "string"}, {"name": "sha", "dtype": "string"}, {"name": "source_x", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "publish_time", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "journal", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "doc_embeddings", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 2759561943, "num_examples": 368618}], "download_size": 6142360818, "dataset_size": 2759561943}]}
2022-11-03T16:31:53+00:00
[ "2004.07180" ]
[ "en" ]
TAGS #task_categories-other #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-nd-4.0 #license-cc-by-sa-4.0 #license-other #arxiv-2004.07180 #region-us
Dataset Card for CORD-19 ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: CORD-19: The COVID-19 Open Research Dataset * Leaderboard: Kaggle challenge ### Dataset Summary CORD-19 is a corpus of academic papers about COVID-19 and related coronavirus research. It's curated and maintained by the Semantic Scholar team at the Allen Institute for AI to support text mining and NLP research. ### Supported Tasks and Leaderboards See tasks defined in the related Kaggle challenge ### Languages The dataset is in english (en). Dataset Structure ----------------- ### Data Instances The following code block present an overview of a sample in json-like syntax (abbreviated since some fields are very long): ### Data Fields Currently only the following fields are integrated: 'cord\_uid', 'sha','source\_x', 'title', 'doi', 'abstract', 'publish\_time', 'authors', 'journal'. With 'fulltext' configuration, the sections transcribed in 'pdf\_json\_files' are converted in 'fulltext' feature. * 'cord\_uid': A 'str'-valued field that assigns a unique identifier to each CORD-19 paper. This is not necessariy unique per row, which is explained in the FAQs. * 'sha': A 'List[str]'-valued field that is the SHA1 of all PDFs associated with the CORD-19 paper. Most papers will have either zero or one value here (since we either have a PDF or we don't), but some papers will have multiple. For example, the main paper might have supplemental information saved in a separate PDF. Or we might have two separate PDF copies of the same paper. If multiple PDFs exist, their SHA1 will be semicolon-separated (e.g. ''4eb6e165ee705e2ae2a24ed2d4e67da42831ff4a; d4f0247db5e916c20eae3f6d772e8572eb828236'') * 'source\_x': A 'List[str]'-valued field that is the names of sources from which we received this paper. Also semicolon-separated. For example, ''ArXiv; Elsevier; PMC; WHO''. There should always be at least one source listed. * 'title': A 'str'-valued field for the paper title * 'doi': A 'str'-valued field for the paper DOI * 'pmcid': A 'str'-valued field for the paper's ID on PubMed Central. Should begin with 'PMC' followed by an integer. * 'pubmed\_id': An 'int'-valued field for the paper's ID on PubMed. * 'license': A 'str'-valued field with the most permissive license we've found associated with this paper. Possible values include: ''cc0', 'hybrid-oa', 'els-covid', 'no-cc', 'cc-by-nc-sa', 'cc-by', 'gold-oa', 'biorxiv', 'green-oa', 'bronze-oa', 'cc-by-nc', 'medrxiv', 'cc-by-nd', 'arxiv', 'unk', 'cc-by-sa', 'cc-by-nc-nd'' * 'abstract': A 'str'-valued field for the paper's abstract * 'publish\_time': A 'str'-valued field for the published date of the paper. This is in 'yyyy-mm-dd' format. Not always accurate as some publishers will denote unknown dates with future dates like 'yyyy-12-31' * 'authors': A 'List[str]'-valued field for the authors of the paper. Each author name is in 'Last, First Middle' format and semicolon-separated. 
* 'journal': A 'str'-valued field for the paper journal. Strings are not normalized (e.g. 'BMJ' and 'British Medical Journal' can both exist). Empty string if unknown. * 'mag\_id': Deprecated, but originally an 'int'-valued field for the paper as represented in the Microsoft Academic Graph. * 'who\_covidence\_id': A 'str'-valued field for the ID assigned by the WHO for this paper. Format looks like '#72306'. * 'arxiv\_id': A 'str'-valued field for the arXiv ID of this paper. * 'pdf\_json\_files': A 'List[str]'-valued field containing paths from the root of the current data dump version to the parses of the paper PDFs into JSON format. Multiple paths are semicolon-separated. Example: 'document\_parses/pdf\_json/URL; document\_parses/pdf\_json/URL' * 'pmc\_json\_files': A 'List[str]'-valued field. Same as above, but corresponding to the full text XML files downloaded from PMC, parsed into the same JSON format as above. * 'url': A 'List[str]'-valued field containing all URLs associated with this paper. Semicolon-separated. * 's2\_id': A 'str'-valued field containing the Semantic Scholar ID for this paper. Can be used with the Semantic Scholar API (e.g. 's2\_id=9445722' corresponds to 'URL Extra fields based on selected configuration during loading: * 'fulltext': A 'str'-valued field containing the concatenation of all text sections from json (itself extracted from pdf) * 'doc\_embeddings': A 'sequence' of float-valued elements containing document embeddings as a vector of floats (parsed from string of values separated by ','). Details on the system used to extract the embeddings are available in: SPECTER: Document-level Representation Learning using Citation-informed Transformers. TL;DR: it's relying on a BERT model pre-trained on document-level relatedness using the citation graph. The system can be queried through REST (see public API documentation). ### Data Splits No annotation provided in this dataset so all instances are provided in training split. The sizes of each configuration are: Dataset Creation ---------------- ### Curation Rationale See official readme ### Source Data See official readme #### Initial Data Collection and Normalization See official readme #### Who are the source language producers? See official readme ### Annotations No annotations #### Annotation process N/A #### Who are the annotators? N/A ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @ggdupont for adding this dataset.
[ "### Dataset Summary\n\n\nCORD-19 is a corpus of academic papers about COVID-19 and related coronavirus research. It's curated and maintained by the Semantic Scholar team at the Allen Institute for AI to support text mining and NLP research.", "### Supported Tasks and Leaderboards\n\n\nSee tasks defined in the related Kaggle challenge", "### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe following code block present an overview of a sample in json-like syntax (abbreviated since some fields are very long):", "### Data Fields\n\n\nCurrently only the following fields are integrated: 'cord\\_uid', 'sha','source\\_x', 'title', 'doi', 'abstract', 'publish\\_time', 'authors', 'journal'. With 'fulltext' configuration, the sections transcribed in 'pdf\\_json\\_files' are converted in 'fulltext' feature.\n\n\n* 'cord\\_uid': A 'str'-valued field that assigns a unique identifier to each CORD-19 paper. This is not necessariy unique per row, which is explained in the FAQs.\n* 'sha': A 'List[str]'-valued field that is the SHA1 of all PDFs associated with the CORD-19 paper. Most papers will have either zero or one value here (since we either have a PDF or we don't), but some papers will have multiple. For example, the main paper might have supplemental information saved in a separate PDF. Or we might have two separate PDF copies of the same paper. If multiple PDFs exist, their SHA1 will be semicolon-separated (e.g. ''4eb6e165ee705e2ae2a24ed2d4e67da42831ff4a; d4f0247db5e916c20eae3f6d772e8572eb828236'')\n* 'source\\_x': A 'List[str]'-valued field that is the names of sources from which we received this paper. Also semicolon-separated. For example, ''ArXiv; Elsevier; PMC; WHO''. There should always be at least one source listed.\n* 'title': A 'str'-valued field for the paper title\n* 'doi': A 'str'-valued field for the paper DOI\n* 'pmcid': A 'str'-valued field for the paper's ID on PubMed Central. Should begin with 'PMC' followed by an integer.\n* 'pubmed\\_id': An 'int'-valued field for the paper's ID on PubMed.\n* 'license': A 'str'-valued field with the most permissive license we've found associated with this paper. Possible values include: ''cc0', 'hybrid-oa', 'els-covid', 'no-cc', 'cc-by-nc-sa', 'cc-by', 'gold-oa', 'biorxiv', 'green-oa', 'bronze-oa', 'cc-by-nc', 'medrxiv', 'cc-by-nd', 'arxiv', 'unk', 'cc-by-sa', 'cc-by-nc-nd''\n* 'abstract': A 'str'-valued field for the paper's abstract\n* 'publish\\_time': A 'str'-valued field for the published date of the paper. This is in 'yyyy-mm-dd' format. Not always accurate as some publishers will denote unknown dates with future dates like 'yyyy-12-31'\n* 'authors': A 'List[str]'-valued field for the authors of the paper. Each author name is in 'Last, First Middle' format and semicolon-separated.\n* 'journal': A 'str'-valued field for the paper journal. Strings are not normalized (e.g. 'BMJ' and 'British Medical Journal' can both exist). Empty string if unknown.\n* 'mag\\_id': Deprecated, but originally an 'int'-valued field for the paper as represented in the Microsoft Academic Graph.\n* 'who\\_covidence\\_id': A 'str'-valued field for the ID assigned by the WHO for this paper. Format looks like '#72306'.\n* 'arxiv\\_id': A 'str'-valued field for the arXiv ID of this paper.\n* 'pdf\\_json\\_files': A 'List[str]'-valued field containing paths from the root of the current data dump version to the parses of the paper PDFs into JSON format. Multiple paths are semicolon-separated. 
Example: 'document\\_parses/pdf\\_json/URL; document\\_parses/pdf\\_json/URL'\n* 'pmc\\_json\\_files': A 'List[str]'-valued field. Same as above, but corresponding to the full text XML files downloaded from PMC, parsed into the same JSON format as above.\n* 'url': A 'List[str]'-valued field containing all URLs associated with this paper. Semicolon-separated.\n* 's2\\_id': A 'str'-valued field containing the Semantic Scholar ID for this paper. Can be used with the Semantic Scholar API (e.g. 's2\\_id=9445722' corresponds to 'URL\n\n\nExtra fields based on selected configuration during loading:\n\n\n* 'fulltext': A 'str'-valued field containing the concatenation of all text sections from json (itself extracted from pdf)\n* 'doc\\_embeddings': A 'sequence' of float-valued elements containing document embeddings as a vector of floats (parsed from string of values separated by ','). Details on the system used to extract the embeddings are available in: SPECTER: Document-level Representation Learning using Citation-informed Transformers. TL;DR: it's relying on a BERT model pre-trained on document-level relatedness using the citation graph. The system can be queried through REST (see public API documentation).", "### Data Splits\n\n\nNo annotation provided in this dataset so all instances are provided in training split.\n\n\nThe sizes of each configuration are:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nSee official readme", "### Source Data\n\n\nSee official readme", "#### Initial Data Collection and Normalization\n\n\nSee official readme", "#### Who are the source language producers?\n\n\nSee official readme", "### Annotations\n\n\nNo annotations", "#### Annotation process\n\n\nN/A", "#### Who are the annotators?\n\n\nN/A", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ggdupont for adding this dataset." ]
[ "TAGS\n#task_categories-other #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-nd-4.0 #license-cc-by-sa-4.0 #license-other #arxiv-2004.07180 #region-us \n", "### Dataset Summary\n\n\nCORD-19 is a corpus of academic papers about COVID-19 and related coronavirus research. It's curated and maintained by the Semantic Scholar team at the Allen Institute for AI to support text mining and NLP research.", "### Supported Tasks and Leaderboards\n\n\nSee tasks defined in the related Kaggle challenge", "### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe following code block present an overview of a sample in json-like syntax (abbreviated since some fields are very long):", "### Data Fields\n\n\nCurrently only the following fields are integrated: 'cord\\_uid', 'sha','source\\_x', 'title', 'doi', 'abstract', 'publish\\_time', 'authors', 'journal'. With 'fulltext' configuration, the sections transcribed in 'pdf\\_json\\_files' are converted in 'fulltext' feature.\n\n\n* 'cord\\_uid': A 'str'-valued field that assigns a unique identifier to each CORD-19 paper. This is not necessariy unique per row, which is explained in the FAQs.\n* 'sha': A 'List[str]'-valued field that is the SHA1 of all PDFs associated with the CORD-19 paper. Most papers will have either zero or one value here (since we either have a PDF or we don't), but some papers will have multiple. For example, the main paper might have supplemental information saved in a separate PDF. Or we might have two separate PDF copies of the same paper. If multiple PDFs exist, their SHA1 will be semicolon-separated (e.g. ''4eb6e165ee705e2ae2a24ed2d4e67da42831ff4a; d4f0247db5e916c20eae3f6d772e8572eb828236'')\n* 'source\\_x': A 'List[str]'-valued field that is the names of sources from which we received this paper. Also semicolon-separated. For example, ''ArXiv; Elsevier; PMC; WHO''. There should always be at least one source listed.\n* 'title': A 'str'-valued field for the paper title\n* 'doi': A 'str'-valued field for the paper DOI\n* 'pmcid': A 'str'-valued field for the paper's ID on PubMed Central. Should begin with 'PMC' followed by an integer.\n* 'pubmed\\_id': An 'int'-valued field for the paper's ID on PubMed.\n* 'license': A 'str'-valued field with the most permissive license we've found associated with this paper. Possible values include: ''cc0', 'hybrid-oa', 'els-covid', 'no-cc', 'cc-by-nc-sa', 'cc-by', 'gold-oa', 'biorxiv', 'green-oa', 'bronze-oa', 'cc-by-nc', 'medrxiv', 'cc-by-nd', 'arxiv', 'unk', 'cc-by-sa', 'cc-by-nc-nd''\n* 'abstract': A 'str'-valued field for the paper's abstract\n* 'publish\\_time': A 'str'-valued field for the published date of the paper. This is in 'yyyy-mm-dd' format. Not always accurate as some publishers will denote unknown dates with future dates like 'yyyy-12-31'\n* 'authors': A 'List[str]'-valued field for the authors of the paper. Each author name is in 'Last, First Middle' format and semicolon-separated.\n* 'journal': A 'str'-valued field for the paper journal. Strings are not normalized (e.g. 'BMJ' and 'British Medical Journal' can both exist). Empty string if unknown.\n* 'mag\\_id': Deprecated, but originally an 'int'-valued field for the paper as represented in the Microsoft Academic Graph.\n* 'who\\_covidence\\_id': A 'str'-valued field for the ID assigned by the WHO for this paper. 
Format looks like '#72306'.\n* 'arxiv\\_id': A 'str'-valued field for the arXiv ID of this paper.\n* 'pdf\\_json\\_files': A 'List[str]'-valued field containing paths from the root of the current data dump version to the parses of the paper PDFs into JSON format. Multiple paths are semicolon-separated. Example: 'document\\_parses/pdf\\_json/URL; document\\_parses/pdf\\_json/URL'\n* 'pmc\\_json\\_files': A 'List[str]'-valued field. Same as above, but corresponding to the full text XML files downloaded from PMC, parsed into the same JSON format as above.\n* 'url': A 'List[str]'-valued field containing all URLs associated with this paper. Semicolon-separated.\n* 's2\\_id': A 'str'-valued field containing the Semantic Scholar ID for this paper. Can be used with the Semantic Scholar API (e.g. 's2\\_id=9445722' corresponds to 'URL\n\n\nExtra fields based on selected configuration during loading:\n\n\n* 'fulltext': A 'str'-valued field containing the concatenation of all text sections from json (itself extracted from pdf)\n* 'doc\\_embeddings': A 'sequence' of float-valued elements containing document embeddings as a vector of floats (parsed from string of values separated by ','). Details on the system used to extract the embeddings are available in: SPECTER: Document-level Representation Learning using Citation-informed Transformers. TL;DR: it's relying on a BERT model pre-trained on document-level relatedness using the citation graph. The system can be queried through REST (see public API documentation).", "### Data Splits\n\n\nNo annotation provided in this dataset so all instances are provided in training split.\n\n\nThe sizes of each configuration are:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nSee official readme", "### Source Data\n\n\nSee official readme", "#### Initial Data Collection and Normalization\n\n\nSee official readme", "#### Who are the source language producers?\n\n\nSee official readme", "### Annotations\n\n\nNo annotations", "#### Annotation process\n\n\nN/A", "#### Who are the annotators?\n\n\nN/A", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ggdupont for adding this dataset." ]
[ 103, 57, 21, 20, 37, 1373, 37, 11, 8, 14, 14, 9, 8, 12, 18, 7, 8, 14, 6, 6, 17 ]
[ "passage: TAGS\n#task_categories-other #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-nd-4.0 #license-cc-by-sa-4.0 #license-other #arxiv-2004.07180 #region-us \n### Dataset Summary\n\n\nCORD-19 is a corpus of academic papers about COVID-19 and related coronavirus research. It's curated and maintained by the Semantic Scholar team at the Allen Institute for AI to support text mining and NLP research.### Supported Tasks and Leaderboards\n\n\nSee tasks defined in the related Kaggle challenge### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThe following code block present an overview of a sample in json-like syntax (abbreviated since some fields are very long):" ]
162b426644a6826f921b08ead0a65d4c08494f90
# Dataset Card for "cornell_movie_dialog" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html](http://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 9.92 MB - **Size of the generated dataset:** 19.55 MB - **Total amount of disk used:** 29.46 MB ### Dataset Summary This corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts: - 220,579 conversational exchanges between 10,292 pairs of movie characters - involves 9,035 characters from 617 movies - in total 304,713 utterances - movie metadata included: - genres - release year - IMDB rating - number of IMDB votes - IMDB rating - character metadata included: - gender (for 3,774 characters) - position on movie credits (3,321 characters) ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 9.92 MB - **Size of the generated dataset:** 19.55 MB - **Total amount of disk used:** 29.46 MB An example of 'train' looks as follows. ``` { "characterID1": "u0 ", "characterID2": " u2 ", "characterName1": " m0 ", "characterName2": " m0 ", "movieGenres": ["comedy", "romance"], "movieID": " m0 ", "movieIMDBRating": " 6.90 ", "movieNoIMDBVotes": " 62847 ", "movieTitle": " f ", "movieYear": " 1999 ", "utterance": { "LineID": ["L1"], "text": ["L1 "] } } ``` ### Data Fields The data fields are the same among all splits. #### default - `movieID`: a `string` feature. - `movieTitle`: a `string` feature. - `movieYear`: a `string` feature. - `movieIMDBRating`: a `string` feature. - `movieNoIMDBVotes`: a `string` feature. - `movieGenres`: a `list` of `string` features. - `characterID1`: a `string` feature. 
- `characterID2`: a `string` feature. - `characterName1`: a `string` feature. - `characterName2`: a `string` feature. - `utterance`: a dictionary feature containing: - `text`: a `string` feature. - `LineID`: a `string` feature. ### Data Splits | name |train| |-------|----:| |default|83097| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{Danescu-Niculescu-Mizil+Lee:11a, author={Cristian Danescu-Niculescu-Mizil and Lillian Lee}, title={Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs.}, booktitle={Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011}, year={2011} } ``` ### Contributions Thanks to [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
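As an illustrative sketch of the `utterance` structure described in the Data Fields section above (assuming the Hugging Face `datasets` library; `cornell_movie_dialog` is this repository's id, and because the dataset is script-based the `trust_remote_code=True` flag or an older `datasets` release may be required):

```
from datasets import load_dataset

dialogs = load_dataset("cornell_movie_dialog", split="train", trust_remote_code=True)

row = dialogs[0]
print(row["movieTitle"], row["movieYear"], row["movieGenres"])

# `utterance` is a dictionary of aligned lists: each LineID pairs with one text.
for line_id, text in zip(row["utterance"]["LineID"], row["utterance"]["text"]):
    print(line_id.strip(), text.strip())
```

The `.strip()` calls are only there because, as the example instance above shows, the raw string fields keep the padding whitespace from the source files.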
cornell_movie_dialog
[ "language:en", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "paperswithcode_id": "cornell-movie-dialogs-corpus", "pretty_name": "Cornell Movie-Dialogs Corpus", "dataset_info": {"features": [{"name": "movieID", "dtype": "string"}, {"name": "movieTitle", "dtype": "string"}, {"name": "movieYear", "dtype": "string"}, {"name": "movieIMDBRating", "dtype": "string"}, {"name": "movieNoIMDBVotes", "dtype": "string"}, {"name": "movieGenres", "sequence": "string"}, {"name": "characterID1", "dtype": "string"}, {"name": "characterID2", "dtype": "string"}, {"name": "characterName1", "dtype": "string"}, {"name": "characterName2", "dtype": "string"}, {"name": "utterance", "sequence": [{"name": "text", "dtype": "string"}, {"name": "LineID", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 19548840, "num_examples": 83097}], "download_size": 9916637, "dataset_size": 19548840}}
2024-01-18T09:43:11+00:00
[]
[ "en" ]
TAGS #language-English #region-us
Dataset Card for "cornell\_movie\_dialog" ========================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 9.92 MB * Size of the generated dataset: 19.55 MB * Total amount of disk used: 29.46 MB ### Dataset Summary This corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts: * 220,579 conversational exchanges between 10,292 pairs of movie characters * involves 9,035 characters from 617 movies * in total 304,713 utterances * movie metadata included: + genres + release year + IMDB rating + number of IMDB votes + IMDB rating * character metadata included: + gender (for 3,774 characters) + position on movie credits (3,321 characters) ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 9.92 MB * Size of the generated dataset: 19.55 MB * Total amount of disk used: 29.46 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'movieID': a 'string' feature. * 'movieTitle': a 'string' feature. * 'movieYear': a 'string' feature. * 'movieIMDBRating': a 'string' feature. * 'movieNoIMDBVotes': a 'string' feature. * 'movieGenres': a 'list' of 'string' features. * 'characterID1': a 'string' feature. * 'characterID2': a 'string' feature. * 'characterName1': a 'string' feature. * 'characterName2': a 'string' feature. * 'utterance': a dictionary feature containing: + 'text': a 'string' feature. + 'LineID': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @mariamabarham, @patrickvonplaten, @thomwolf for adding this dataset.
[ "### Dataset Summary\n\n\nThis corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts:\n\n\n* 220,579 conversational exchanges between 10,292 pairs of movie characters\n* involves 9,035 characters from 617 movies\n* in total 304,713 utterances\n* movie metadata included:\n\t+ genres\n\t+ release year\n\t+ IMDB rating\n\t+ number of IMDB votes\n\t+ IMDB rating\n* character metadata included:\n\t+ gender (for 3,774 characters)\n\t+ position on movie credits (3,321 characters)", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 9.92 MB\n* Size of the generated dataset: 19.55 MB\n* Total amount of disk used: 29.46 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'movieID': a 'string' feature.\n* 'movieTitle': a 'string' feature.\n* 'movieYear': a 'string' feature.\n* 'movieIMDBRating': a 'string' feature.\n* 'movieNoIMDBVotes': a 'string' feature.\n* 'movieGenres': a 'list' of 'string' features.\n* 'characterID1': a 'string' feature.\n* 'characterID2': a 'string' feature.\n* 'characterName1': a 'string' feature.\n* 'characterName2': a 'string' feature.\n* 'utterance': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'LineID': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @mariamabarham, @patrickvonplaten, @thomwolf for adding this dataset." ]
[ "TAGS\n#language-English #region-us \n", "### Dataset Summary\n\n\nThis corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts:\n\n\n* 220,579 conversational exchanges between 10,292 pairs of movie characters\n* involves 9,035 characters from 617 movies\n* in total 304,713 utterances\n* movie metadata included:\n\t+ genres\n\t+ release year\n\t+ IMDB rating\n\t+ number of IMDB votes\n\t+ IMDB rating\n* character metadata included:\n\t+ gender (for 3,774 characters)\n\t+ position on movie credits (3,321 characters)", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 9.92 MB\n* Size of the generated dataset: 19.55 MB\n* Total amount of disk used: 29.46 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'movieID': a 'string' feature.\n* 'movieTitle': a 'string' feature.\n* 'movieYear': a 'string' feature.\n* 'movieIMDBRating': a 'string' feature.\n* 'movieNoIMDBVotes': a 'string' feature.\n* 'movieGenres': a 'list' of 'string' features.\n* 'characterID1': a 'string' feature.\n* 'characterID2': a 'string' feature.\n* 'characterName1': a 'string' feature.\n* 'characterName2': a 'string' feature.\n* 'utterance': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'LineID': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @mariamabarham, @patrickvonplaten, @thomwolf for adding this dataset." ]
[ 10, 122, 10, 11, 6, 49, 17, 187, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 30 ]
[ "passage: TAGS\n#language-English #region-us \n### Dataset Summary\n\n\nThis corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts:\n\n\n* 220,579 conversational exchanges between 10,292 pairs of movie characters\n* involves 9,035 characters from 617 movies\n* in total 304,713 utterances\n* movie metadata included:\n\t+ genres\n\t+ release year\n\t+ IMDB rating\n\t+ number of IMDB votes\n\t+ IMDB rating\n* character metadata included:\n\t+ gender (for 3,774 characters)\n\t+ position on movie credits (3,321 characters)### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 9.92 MB\n* Size of the generated dataset: 19.55 MB\n* Total amount of disk used: 29.46 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'movieID': a 'string' feature.\n* 'movieTitle': a 'string' feature.\n* 'movieYear': a 'string' feature.\n* 'movieIMDBRating': a 'string' feature.\n* 'movieNoIMDBVotes': a 'string' feature.\n* 'movieGenres': a 'list' of 'string' features.\n* 'characterID1': a 'string' feature.\n* 'characterID2': a 'string' feature.\n* 'characterName1': a 'string' feature.\n* 'characterName2': a 'string' feature.\n* 'utterance': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'LineID': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases" ]
f01396594b768e0026841176bf770afe58b82fba
# Dataset Card for "cos_e" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/salesforce/cos-e - **Paper:** [Explain Yourself! Leveraging Language Models for Commonsense Reasoning](https://arxiv.org/abs/1906.02361) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 10.83 MB - **Size of the generated dataset:** 5.39 MB - **Total amount of disk used:** 16.22 MB ### Dataset Summary Common Sense Explanations (CoS-E) allows for training language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### v1.0 - **Size of downloaded dataset files:** 4.30 MB - **Size of the generated dataset:** 2.34 MB - **Total amount of disk used:** 6.64 MB An example of 'train' looks as follows. ``` { "abstractive_explanation": "this is open-ended", "answer": "b", "choices": ["a", "b", "c"], "extractive_explanation": "this is selected train", "id": "42", "question": "question goes here." } ``` #### v1.11 - **Size of downloaded dataset files:** 6.53 MB - **Size of the generated dataset:** 3.05 MB - **Total amount of disk used:** 9.58 MB An example of 'train' looks as follows. ``` { "abstractive_explanation": "this is open-ended", "answer": "b", "choices": ["a", "b", "c"], "extractive_explanation": "this is selected train", "id": "42", "question": "question goes here." } ``` ### Data Fields The data fields are the same among all splits. #### v1.0 - `id`: a `string` feature. - `question`: a `string` feature. - `choices`: a `list` of `string` features. - `answer`: a `string` feature. - `abstractive_explanation`: a `string` feature. - `extractive_explanation`: a `string` feature. #### v1.11 - `id`: a `string` feature. - `question`: a `string` feature. - `choices`: a `list` of `string` features. - `answer`: a `string` feature. - `abstractive_explanation`: a `string` feature. - `extractive_explanation`: a `string` feature. 
### Data Splits |name |train|validation| |-----|----:|---------:| |v1.0 | 7610| 950| |v1.11| 9741| 1221| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information Unknown. ### Citation Information ``` @inproceedings{rajani2019explain, title = "Explain Yourself! Leveraging Language models for Commonsense Reasoning", author = "Rajani, Nazneen Fatema and McCann, Bryan and Xiong, Caiming and Socher, Richard", year="2019", booktitle = "Proceedings of the 2019 Conference of the Association for Computational Linguistics (ACL2019)", url ="https://arxiv.org/abs/1906.02361" } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
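As a hedged illustration (not part of the original card), the two configurations listed above can be loaded by name with the `datasets` library; the snippet below assumes the dataset id `cos_e` and the config names `v1.0`/`v1.11` shown in this card.

```
# Minimal sketch: load one CoS-E configuration and read its fields.
from datasets import load_dataset

cos_e = load_dataset("cos_e", "v1.11")             # splits: "train" and "validation"

ex = cos_e["train"][0]
print(ex["question"])
print(ex["choices"])                               # list of answer candidates
print("gold answer:", ex["answer"])
print("free-text explanation:", ex["abstractive_explanation"])
print("highlighted evidence:", ex["extractive_explanation"])
```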
cos_e
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|commonsense_qa", "language:en", "license:unknown", "arxiv:1906.02361", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|commonsense_qa"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "paperswithcode_id": "cos-e", "pretty_name": "Commonsense Explanations", "dataset_info": [{"config_name": "v1.0", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "string"}, {"name": "abstractive_explanation", "dtype": "string"}, {"name": "extractive_explanation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2067971, "num_examples": 7610}, {"name": "validation", "num_bytes": 260669, "num_examples": 950}], "download_size": 1588340, "dataset_size": 2328640}, {"config_name": "v1.11", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "string"}, {"name": "abstractive_explanation", "dtype": "string"}, {"name": "extractive_explanation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2702777, "num_examples": 9741}, {"name": "validation", "num_bytes": 329897, "num_examples": 1221}], "download_size": 1947552, "dataset_size": 3032674}], "configs": [{"config_name": "v1.0", "data_files": [{"split": "train", "path": "v1.0/train-*"}, {"split": "validation", "path": "v1.0/validation-*"}]}, {"config_name": "v1.11", "data_files": [{"split": "train", "path": "v1.11/train-*"}, {"split": "validation", "path": "v1.11/validation-*"}]}]}
2024-01-04T07:50:49+00:00
[ "1906.02361" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|commonsense_qa #language-English #license-unknown #arxiv-1906.02361 #region-us
Dataset Card for "cos\_e" ========================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: URL * Paper: Explain Yourself! Leveraging Language Models for Commonsense Reasoning * Point of Contact: * Size of downloaded dataset files: 10.83 MB * Size of the generated dataset: 5.39 MB * Total amount of disk used: 16.22 MB ### Dataset Summary Common Sense Explanations (CoS-E) allows for training language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### v1.0 * Size of downloaded dataset files: 4.30 MB * Size of the generated dataset: 2.34 MB * Total amount of disk used: 6.64 MB An example of 'train' looks as follows. #### v1.11 * Size of downloaded dataset files: 6.53 MB * Size of the generated dataset: 3.05 MB * Total amount of disk used: 9.58 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### v1.0 * 'id': a 'string' feature. * 'question': a 'string' feature. * 'choices': a 'list' of 'string' features. * 'answer': a 'string' feature. * 'abstractive\_explanation': a 'string' feature. * 'extractive\_explanation': a 'string' feature. #### v1.11 * 'id': a 'string' feature. * 'question': a 'string' feature. * 'choices': a 'list' of 'string' features. * 'answer': a 'string' feature. * 'abstractive\_explanation': a 'string' feature. * 'extractive\_explanation': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Unknown. ### Contributions Thanks to @lewtun, @thomwolf, @mariamabarham, @patrickvonplaten, @albertvillanova, @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nCommon Sense Explanations (CoS-E) allows for training language models to\nautomatically generate explanations that can be used during training and\ninference in a novel Commonsense Auto-Generated Explanation (CAGE) framework.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### v1.0\n\n\n* Size of downloaded dataset files: 4.30 MB\n* Size of the generated dataset: 2.34 MB\n* Total amount of disk used: 6.64 MB\n\n\nAn example of 'train' looks as follows.", "#### v1.11\n\n\n* Size of downloaded dataset files: 6.53 MB\n* Size of the generated dataset: 3.05 MB\n* Total amount of disk used: 9.58 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### v1.0\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'choices': a 'list' of 'string' features.\n* 'answer': a 'string' feature.\n* 'abstractive\\_explanation': a 'string' feature.\n* 'extractive\\_explanation': a 'string' feature.", "#### v1.11\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'choices': a 'list' of 'string' features.\n* 'answer': a 'string' feature.\n* 'abstractive\\_explanation': a 'string' feature.\n* 'extractive\\_explanation': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nUnknown.", "### Contributions\n\n\nThanks to @lewtun, @thomwolf, @mariamabarham, @patrickvonplaten, @albertvillanova, @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|commonsense_qa #language-English #license-unknown #arxiv-1906.02361 #region-us \n", "### Dataset Summary\n\n\nCommon Sense Explanations (CoS-E) allows for training language models to\nautomatically generate explanations that can be used during training and\ninference in a novel Commonsense Auto-Generated Explanation (CAGE) framework.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### v1.0\n\n\n* Size of downloaded dataset files: 4.30 MB\n* Size of the generated dataset: 2.34 MB\n* Total amount of disk used: 6.64 MB\n\n\nAn example of 'train' looks as follows.", "#### v1.11\n\n\n* Size of downloaded dataset files: 6.53 MB\n* Size of the generated dataset: 3.05 MB\n* Total amount of disk used: 9.58 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### v1.0\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'choices': a 'list' of 'string' features.\n* 'answer': a 'string' feature.\n* 'abstractive\\_explanation': a 'string' feature.\n* 'extractive\\_explanation': a 'string' feature.", "#### v1.11\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'choices': a 'list' of 'string' features.\n* 'answer': a 'string' feature.\n* 'abstractive\\_explanation': a 'string' feature.\n* 'extractive\\_explanation': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nUnknown.", "### Contributions\n\n\nThanks to @lewtun, @thomwolf, @mariamabarham, @patrickvonplaten, @albertvillanova, @lhoestq for adding this dataset." ]
[ 109, 55, 10, 11, 6, 50, 51, 17, 92, 93, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 10, 45 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|commonsense_qa #language-English #license-unknown #arxiv-1906.02361 #region-us \n### Dataset Summary\n\n\nCommon Sense Explanations (CoS-E) allows for training language models to\nautomatically generate explanations that can be used during training and\ninference in a novel Commonsense Auto-Generated Explanation (CAGE) framework.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### v1.0\n\n\n* Size of downloaded dataset files: 4.30 MB\n* Size of the generated dataset: 2.34 MB\n* Total amount of disk used: 6.64 MB\n\n\nAn example of 'train' looks as follows.#### v1.11\n\n\n* Size of downloaded dataset files: 6.53 MB\n* Size of the generated dataset: 3.05 MB\n* Total amount of disk used: 9.58 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### v1.0\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'choices': a 'list' of 'string' features.\n* 'answer': a 'string' feature.\n* 'abstractive\\_explanation': a 'string' feature.\n* 'extractive\\_explanation': a 'string' feature.#### v1.11\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'choices': a 'list' of 'string' features.\n* 'answer': a 'string' feature.\n* 'abstractive\\_explanation': a 'string' feature.\n* 'extractive\\_explanation': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------" ]
28d9d5e2aae025e73e11177891a88dba51190013
# Dataset Card for "cosmos_qa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://wilburone.github.io/cosmos/](https://wilburone.github.io/cosmos/) - **Repository:** https://github.com/wilburOne/cosmosqa/ - **Paper:** [Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning](https://arxiv.org/abs/1909.00277) - **Point of Contact:** [Lifu Huang](mailto:warrior.fu@gmail.com) - **Size of downloaded dataset files:** 24.40 MB - **Size of the generated dataset:** 24.51 MB - **Total amount of disk used:** 48.91 MB ### Dataset Summary Cosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions concerning on the likely causes or effects of events that require reasoning beyond the exact text spans in the context ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 24.40 MB - **Size of the generated dataset:** 24.51 MB - **Total amount of disk used:** 48.91 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "answer0": "If he gets married in the church he wo nt have to get a divorce .", "answer1": "He wants to get married to a different person .", "answer2": "He wants to know if he does nt like this girl can he divorce her ?", "answer3": "None of the above choices .", "context": "\"Do i need to go for a legal divorce ? I wanted to marry a woman but she is not in the same religion , so i am not concern of th...", "id": "3BFF0DJK8XA7YNK4QYIGCOG1A95STE##3180JW2OT5AF02OISBX66RFOCTG5J7##A2LTOS0AZ3B28A##Blog_56156##q1_a1##378G7J1SJNCDAAIN46FM2P7T6KZEW2", "label": 1, "question": "Why is this person asking about divorce ?" } ``` ### Data Fields The data fields are the same among all splits. #### default - `id`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answer0`: a `string` feature. - `answer1`: a `string` feature. - `answer2`: a `string` feature. - `answer3`: a `string` feature. - `label`: a `int32` feature. 
### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|25262| 2985|6963| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information As reported via email by Yejin Choi, the dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license. ### Citation Information ``` @inproceedings{huang-etal-2019-cosmos, title = "Cosmos {QA}: Machine Reading Comprehension with Contextual Commonsense Reasoning", author = "Huang, Lifu and Le Bras, Ronan and Bhagavatula, Chandra and Choi, Yejin", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D19-1243", doi = "10.18653/v1/D19-1243", pages = "2391--2401", } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
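As a hedged illustration (not part of the original card), the integer `label` can be resolved to one of the four answer fields as sketched below; note that labels in the `test` split may be placeholders, and depending on your `datasets` version you may additionally need `trust_remote_code=True`.

```
# Minimal sketch: read a Cosmos QA example and map its label to the answer text.
from datasets import load_dataset

cosmos = load_dataset("cosmos_qa")                 # splits: "train", "validation", "test"

ex = cosmos["validation"][0]
answers = [ex[f"answer{i}"] for i in range(4)]     # answer0 ... answer3
print(ex["context"])
print(ex["question"])
print("choices:", answers)
print("gold choice:", answers[ex["label"]])        # `label` indexes into answer0-3
```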
cosmos_qa
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1909.00277", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["multiple-choice"], "task_ids": ["multiple-choice-qa"], "paperswithcode_id": "cosmosqa", "pretty_name": "CosmosQA", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer0", "dtype": "string"}, {"name": "answer1", "dtype": "string"}, {"name": "answer2", "dtype": "string"}, {"name": "answer3", "dtype": "string"}, {"name": "label", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 17159918, "num_examples": 25262}, {"name": "test", "num_bytes": 5121479, "num_examples": 6963}, {"name": "validation", "num_bytes": 2186987, "num_examples": 2985}], "download_size": 24399475, "dataset_size": 24468384}}
2024-01-18T09:43:51+00:00
[ "1909.00277" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1909.00277 #region-us
Dataset Card for "cosmos\_qa" ============================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning * Point of Contact: Lifu Huang * Size of downloaded dataset files: 24.40 MB * Size of the generated dataset: 24.51 MB * Total amount of disk used: 48.91 MB ### Dataset Summary Cosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions concerning on the likely causes or effects of events that require reasoning beyond the exact text spans in the context ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 24.40 MB * Size of the generated dataset: 24.51 MB * Total amount of disk used: 48.91 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'id': a 'string' feature. * 'context': a 'string' feature. * 'question': a 'string' feature. * 'answer0': a 'string' feature. * 'answer1': a 'string' feature. * 'answer2': a 'string' feature. * 'answer3': a 'string' feature. * 'label': a 'int32' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information As reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0 license. ### Contributions Thanks to @patrickvonplaten, @lewtun, @albertvillanova, @thomwolf for adding this dataset.
[ "### Dataset Summary\n\n\nCosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions concerning on the likely causes or effects of events that require reasoning beyond the exact text spans in the context", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 24.40 MB\n* Size of the generated dataset: 24.51 MB\n* Total amount of disk used: 48.91 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'id': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answer0': a 'string' feature.\n* 'answer1': a 'string' feature.\n* 'answer2': a 'string' feature.\n* 'answer3': a 'string' feature.\n* 'label': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nAs reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0 license.", "### Contributions\n\n\nThanks to @patrickvonplaten, @lewtun, @albertvillanova, @thomwolf for adding this dataset." ]
[ "TAGS\n#task_categories-multiple-choice #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1909.00277 #region-us \n", "### Dataset Summary\n\n\nCosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions concerning on the likely causes or effects of events that require reasoning beyond the exact text spans in the context", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 24.40 MB\n* Size of the generated dataset: 24.51 MB\n* Total amount of disk used: 48.91 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'id': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answer0': a 'string' feature.\n* 'answer1': a 'string' feature.\n* 'answer2': a 'string' feature.\n* 'answer3': a 'string' feature.\n* 'label': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nAs reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0 license.", "### Contributions\n\n\nThanks to @patrickvonplaten, @lewtun, @albertvillanova, @thomwolf for adding this dataset." ]
[ 101, 88, 10, 11, 6, 51, 17, 102, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 28, 34 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1909.00277 #region-us \n### Dataset Summary\n\n\nCosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions concerning on the likely causes or effects of events that require reasoning beyond the exact text spans in the context### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 24.40 MB\n* Size of the generated dataset: 24.51 MB\n* Total amount of disk used: 48.91 MB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'id': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answer0': a 'string' feature.\n* 'answer1': a 'string' feature.\n* 'answer2': a 'string' feature.\n* 'answer3': a 'string' feature.\n* 'label': a 'int32' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators" ]
496ad9cf3f09218a2049bd8abb8f19d9e4812077
# Dataset Card for COUNTER ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://ucrel.lancs.ac.uk/textreuse/counter.php - **Repository:** [More Information Needed] - **Paper:** https://link.springer.com/article/10.1007%2Fs10579-016-9367-2 - **Leaderboard:** [More Information Needed] - **Point of Contact:** [UCREL](ucrel@lancaster.ac.uk) ### Dataset Summary The COrpus ofUrdu News TExt Reuse (COUNTER) corpus contains 1200 documents with realexamples of text reuse from the field of journalism. It has been manually annotatedat document level with three levels of reuse: wholly derived, partially derived andnon derived ### Supported Tasks and Leaderboards other:text-reuse ### Languages ur ## Dataset Structure ### Data Instances Here is one example from the dataset: ``` {"derived": { "body" :"میر پور(وقت نیوز) بنگلہ دیش نے 5 میچوں کی سیریز کےآ خری میچ میں بھی فتح حاصل کر کے سیریز میں وائٹ واش کر دیا،زمبابوے ایک میچ بھی نہ جیت سکا۔آخری میچ میں زمبابوے کے 129 رنز کا ہدف بنگال ٹائیگرز نے 24.3 اوورز میں 5 وکٹوں کے نقصان پر حاصل کر لیا۔بنگلہ دیش کے شیر بنگلہ سٹیڈیم میر پور میں کھیلے گئے آخری ایک روزہ میچ میں زمبابوے کے کپتان چکمبورا نے ٹاس جیت کے بینٹگ کا فیصلہ کیا جو ان کی ٹیم کیلئے ڈراؤنا خواب ثابت ہوا اور پوری ٹیم 30 اوورز میں 128 رنز بنا کر پویلین لوٹ گئی زمبابوے کی پہلی وکٹ 16 رنز پر گری جب سکندر رضا صرف 9 رنز بنا کر مشرقی مرتضی کی بال پر آؤٹ ہوئے اس کے بعد مساکد ازااور سباندا کی پارٹنرشپنے ٹیم کا سکور95 رنز تک پہنچا دیا ۔مساکدازا 52 رنز بنا کر جبیر الحسن کا شکار بنے جبکہ سباندا نے 37 رنز کی اننگز کھیلی اس کے بعد کئی بھی زمبابوے کا کھلاڑی جم کر نہ کھیل سکا۔بنگال ٹائیگرز کی جانب سے عمدہ باؤلنگ کے نتیجے میں کپتان چکمبورا سمیت 8 کھلاڑی ڈبل فیگر کراس نہ کر سکے ۔بنگلہ دیش کی جانب سے ایک روزہ میچوں میں ڈیبیو کرنے والے تیج السلام نے اپنے پہلے ہی میچ میں ہیٹرک کی اسلام نے 7 اوورز میں صرف 14 رنز دئے اور چار کھلاڑیوں کع آؤٹ کیا جبکہ شکیب الحسن نے 30 رنز دیکر 3 اور جبیر الحسن نے41 رنز دیکر2 کھلاڑیوں کو پویلین کی راہ دکھائی ۔ 128 رنز کے جواب میں بنگال ٹائیگرز نے بیٹنگ شروع کی مشکلات کا سامنا رہا ان کے بھی ابتدائی 3 کھلاڑی 47 رنز پر پویلین لوٹ گئے۔ تمیم اقبال 10، انعام الحق8 رنز بنا کر آؤٹ ہوئے،آل راؤنڈر شکیب الحسن بغیر کوئی رنز بنائیپویلین لوٹ گئے وکٹ کیپر مشفق الرحیم صرف 11 رنز بنا کر چتارہ کا شکار بن گئے۔محمد اللہ نے51 رنز کی میچ وننگ اننگز کھیلی جبکہ صابر رحمٰن13 رنز بنا کر ناٹ آؤٹ رہے۔ زمبابوے کی جانب سے چتارہ نے 3 اور پنیا نگارا نے 2 کھلاڑیوں کو آؤٹ کیا ۔فتح کے ساتھ بنگلہ دیش نے سیریز میں وائٹ واش کر دیا۔زمبابوے کی ٹیم کوئی میچ نہ جیت سکی،تیج السلام کو میچ کا بہترین ایوارڈ دیا گیا جبکہ سیریز کا بہترین کھلاڑی مشفق 
الرحیم کو قرار دیا گیا۔", "classification": 1, # partially_derived "domain": 1, # sports "filename": "0001p.xml", "headline": "بنگلہ دیش کا زمبابوے کا ون ڈے سیریز میں 5-0 سے وائٹ واش", "newsdate": "02.12.14", "newspaper": "daily_waqt", "number_of_words_with_swr": 265, "total_number_of_sentences": 13, "total_number_of_words": 393}, "source": { "body": "ڈھاکہ ۔ یکم دسمبر (اے پی پی) بنگلہ دیش نے زمبابوے کو ٹیسٹ کے بعد ون ڈے سیریز میں بھی وائٹ واش کر دیا۔ سیریز کے پانچویں اور آخری ون ڈے میچ میں بنگال ٹائیگرز نے زمبابوے کو 5 وکٹوں سے شکست دے دی، مہمان ٹیم پہلے بیٹنگ کرتے ہوئے 128 رنز پر ڈھیر ہوگئی۔ تیج الاسلام نے کیریئر کے پہلے ون ڈے میچ میں ہیٹ ٹرک کرکے نئی تاریخ رقم کر دی، انہوں نے 4 کھلاڑیوں کو آؤٹ کیا۔ جواب میں بنگلہ دیش نے ہدف 24.3 اوورز میں 5 وکٹوں کے نقصان پر حاصل کر لیا۔ محمد اللہ نے 51 رنز کی ناقابل شکست اننگز کھیلی۔ تفصیلات کے مطابق پیر کو شیر بنگلہ نیشنل سٹیڈیم، میرپور میں پانچویں اور آخری ون ڈے میچ میں زمبابوے کے کپتان ایلٹن چگمبورا نے ٹاس جیت کر پہلے بیٹنگ کا فیصلہ کیا جو غلط ثابت ہوا۔ زمبابوے کی پوری ٹیم ڈیبیو ون ڈے کھیلنے والے نوجوان لیفٹ آرم سپنر تیج الاسلام اور شکیب الحسن کی تباہ کن باؤلنگ کے باعث 30 اوورز میں 128 رنز پر ڈھیر ہوگئی۔ ہیملٹن ماساکڈزا 52 اور ووسی سبانڈا 37 رنز کے ساتھ نمایاں رہے، ان کے علاوہ کوئی بھی بلے باز دوہرا ہندسہ عبور نہ کر سکا۔ اپنا پہلا ون ڈے کھیلنے والے تیج الاسلام نے 11 رنز کے عوض 4 وکٹیں حاصل کیں جس میں شاندار ہیٹ ٹرک بھی شامل ہے، اس طرح وہ ڈیبیو میں ہیٹ ٹرک کرنے والے دنیا کے پہلے باؤلر بن گئے ہیں۔ شکیب الحسن نے تین اور زبیر حسین نے دو وکٹیں حاصل کیں۔ جواب میں بنگلہ دیش نے ہدف 24.3 اوورز میں 5 وکٹوں کے نقصان پر حاصل کر لیا۔ محمد اللہ نے 51 رنز کی ناقابل شکست اننگز کھیل کر ٹیم کی فتح میں اہم کردار ادا کیا۔ زمبابوے کی جانب سے ٹینڈائی چتارا نے تین اور تناشے پینگارا نے دو وکٹیں حاصل کیں۔", "classification": 1, # partially_derived "domain": 1, # sports "filename": "0001.xml", "headline": "بنگال ٹائیگرز نے کمزور زمبابوے کو ٹیسٹ کے بعد ون ڈے سیریز میں بھی وائٹ واش کر دیا، پانچویں اور آخری ون ڈے میچ میں بنگلہ دیش 5 وکٹوں سے فتح یاب، تیج الاسلام نے ڈیبیو ون ڈے میں ہیٹ ٹرک کرکے نئی تاریخ رقم کر دی" "newsdate": "01.12.14", "newspaper": "APP", "number_of_words_with_swr": 245, "total_number_of_sentences": 15, "total_number_of_words": 352}} ``` ### Data Fields ```source```: The source document ```derived```: The derived document For each pair of source and derived documents. we have the following fields: ```filename (str)```: Name of the file in dataset ```headline(str)```: Headline of the news item ```body(str)```: Main text of the news item ```total_number_of_words(int)```: Number of words in document ```total_number_of_sentences(int)```: Number of sentences in document ```number_of_words_with_swr(int)```: Number of words after stop word removal ```newspaper(str)```: The newspaper in which the news item was published ```newsdate(str)```: The date on which the news item is published DD.MM.YY ```domain(int)```: The category of news item from this list: "business", "sports", "national", "foreign", "showbiz". ```classification (int)```: Three classes of reuse from this list: Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND) ### Data Splits One split train with 600 pairs of documents. The corpus is composed of two main document types: (1) source documents and (2) derived documents. There are total 1200 documents in the corpus: 600 are newsagency articles (source documents) and 600 are newspapers stories (derived documents). The corpus contains in total 275,387 words (tokens8), 21,426 unique words and 10,841 sentences. 
The average length of a source document is 227 words, while for derived documents it is 254 words. ## Dataset Creation ### Curation Rationale Our main intention was to develop a standard benchmark resource for the evaluation of existing systems available for text reuse detection in general and specifically for the Urdu language. To generate a corpus with realistic examples, we opted for the field of journalism. In journalism, the same news story is published in different newspapers in different forms. It is a standard practice followed by all the newspapers (reporters and editors) to reuse (verbatim or modified) a news story released by a news agency. ### Source Data #### Initial Data Collection and Normalization The COUNTER corpus consists of news articles (source documents) released by five news agencies in Pakistan, i.e. Associated Press of Pakistan (APP), International News Network (INN), Independent News Pakistan (INP), News Network International (NNI) and South Asian News Agency (SANA). The corresponding news stories (derived documents) were extracted from nine daily published, large-circulation national newspapers of the All Pakistan Newspapers Society (APNS), which are subscribed to these news agencies. These include Nawa-e-Waqt, Daily Dunya, Express, Jang, Daily Waqt, Daily Insaf, Daily Aaj, Daily Islam and Daily Pakistan. All of them are part of the mainstream national press, long-established dailies with total circulation figures of over four million. News agency texts (source documents) were provided (in electronic form) by the news agencies on a daily basis when they released the news. Newspaper stories (derived documents) were collected by three volunteers over a period of six months (from July to December 2014). National, Foreign, Business, Sports and Showbiz were the domains targeted for data collection. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process The corpus has been annotated at the document level with three classes of reuse, i.e. Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND). The derived collection contains documents with various degrees of text reuse. Some of the newspaper stories (derived documents) are rewritten (either verbatim or paraphrased) from the news agency's text (source document), while others have been written by the journalists independently on their own. In the former case, source-derived document pairs are tagged as either Wholly Derived (WD) or Partially Derived (PD), depending on the volume of text reused from the news agency's text for creating the newspaper article; in the latter case, they are tagged as Non Derived (ND), as the journalists have not reused anything from the news agency's text but have developed and documented the story based on their own observations and findings. The annotations were carried out in three phases: (1) training, (2) annotation, (3) conflict resolution. During the training phase, annotators A and B manually annotated 60 document pairs, following a preliminary version of the annotation guidelines. A detailed meeting was carried out afterwards, discussing the problems and disagreements. It was observed that the highest number of disagreements was between PD and ND cases, as both annotators found it difficult to distinguish between these two classes; the difficulty lies in judging the threshold at which a text is so heavily paraphrased, or has so much new information added, that it becomes independently written (ND).
Following the discussion, the annotation guidelines were slightly revised, and the first 60 annotation results were saved. In the annotation phase, the remaining 540 document pairs were manually examined by the two annotators (A and B). Both were asked to judge and classify (at document level) whether a document (newspaper story), depending on the volume of text rewritten from the source (news agency article), falls into one of the following categories: Wholly Derived (WD): The news agency text is the only source for the reused newspaper text, which means it is a verbatim copy of the source. In this case, most of the reused text is a word-for-word copy of the source text. Partially Derived (PD): The newspaper text has either been derived from more than one news agency, or most of the text has been paraphrased by the editor when rewriting from the news agency source text. In this case, most parts of the derived document contain paraphrased text or new facts and figures added from the journalist's own findings. Non Derived (ND): The news agency text has not been used in the production of the newspaper text (though words may still co-occur in both documents); it has completely different facts and figures or is heavily paraphrased from the news agency's copy. In this case, the derived document is independently written and has a lot more new text. #### Who are the annotators? The annotations were performed by three annotators (A, B and C), who were native Urdu speakers and experts in paraphrasing mechanisms. All three were graduates, experienced in text annotation and with an advanced level of Urdu. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations The dataset is provided for research purposes only. Please check the dataset license for additional information. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License [(CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). ### Citation Information ``` @Article{Sharjeel2016, author="Sharjeel, Muhammad and Nawab, Rao Muhammad Adeel and Rayson, Paul", title="COUNTER: corpus of Urdu news text reuse", journal="Language Resources and Evaluation", year="2016", pages="1--27", issn="1574-0218", doi="10.1007/s10579-016-9367-2", url="http://dx.doi.org/10.1007/s10579-016-9367-2" } ``` ### Contributions Thanks to [@arkhalid](https://github.com/arkhalid) for adding this dataset.
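As a hedged illustration (not part of the original card), the nested `source`/`derived` records and their `ClassLabel` fields can be inspected as sketched below; the loading call is an assumption and, depending on your `datasets` version, may additionally need `trust_remote_code=True`.

```
# Minimal sketch: load COUNTER and decode the reuse class of a source/derived pair.
from datasets import load_dataset

counter = load_dataset("counter", split="train")   # 600 source/derived document pairs

pair = counter[0]
print(pair["source"]["headline"])
print(pair["derived"]["headline"])

# `classification` and `domain` are ClassLabel features stored as integers;
# the feature object converts them back to their string names.
cls_feature = counter.features["derived"]["classification"]
print(cls_feature.int2str(pair["derived"]["classification"]))  # e.g. "partially_derived"
```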
counter
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:semantic-similarity-scoring", "task_ids:topic-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:ur", "license:cc-by-nc-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ur"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring", "semantic-similarity-scoring", "topic-classification"], "paperswithcode_id": "counter", "pretty_name": "COUNTER", "dataset_info": {"features": [{"name": "source", "struct": [{"name": "filename", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "total_number_of_words", "dtype": "int64"}, {"name": "total_number_of_sentences", "dtype": "int64"}, {"name": "number_of_words_with_swr", "dtype": "int64"}, {"name": "newspaper", "dtype": "string"}, {"name": "newsdate", "dtype": "string"}, {"name": "domain", "dtype": {"class_label": {"names": {"0": "business", "1": "sports", "2": "national", "3": "foreign", "4": "showbiz"}}}}, {"name": "classification", "dtype": {"class_label": {"names": {"0": "wholly_derived", "1": "partially_derived", "2": "not_derived"}}}}]}, {"name": "derived", "struct": [{"name": "filename", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "total_number_of_words", "dtype": "int64"}, {"name": "total_number_of_sentences", "dtype": "int64"}, {"name": "number_of_words_with_swr", "dtype": "int64"}, {"name": "newspaper", "dtype": "string"}, {"name": "newsdate", "dtype": "string"}, {"name": "domain", "dtype": {"class_label": {"names": {"0": "business", "1": "sports", "2": "national", "3": "foreign", "4": "showbiz"}}}}, {"name": "classification", "dtype": {"class_label": {"names": {"0": "wholly_derived", "1": "partially_derived", "2": "not_derived"}}}}]}], "splits": [{"name": "train", "num_bytes": 2598872, "num_examples": 600}], "download_size": 1356306, "dataset_size": 2598872}}
2024-01-18T09:44:30+00:00
[]
[ "ur" ]
TAGS #task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #task_ids-topic-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Urdu #license-cc-by-nc-sa-4.0 #region-us
# Dataset Card for COUNTER ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: URL - Leaderboard: - Point of Contact: UCREL ### Dataset Summary The COrpus ofUrdu News TExt Reuse (COUNTER) corpus contains 1200 documents with realexamples of text reuse from the field of journalism. It has been manually annotatedat document level with three levels of reuse: wholly derived, partially derived andnon derived ### Supported Tasks and Leaderboards other:text-reuse ### Languages ur ## Dataset Structure ### Data Instances Here is one example from the dataset: ### Data Fields : The source document : The derived document For each pair of source and derived documents. we have the following fields: : Name of the file in dataset : Headline of the news item : Main text of the news item : Number of words in document : Number of sentences in document : Number of words after stop word removal : The newspaper in which the news item was published : The date on which the news item is published DD.MM.YY : The category of news item from this list: "business", "sports", "national", "foreign", "showbiz". : Three classes of reuse from this list: Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND) ### Data Splits One split train with 600 pairs of documents. The corpus is composed of two main document types: (1) source documents and (2) derived documents. There are total 1200 documents in the corpus: 600 are newsagency articles (source documents) and 600 are newspapers stories (derived documents). The corpus contains in total 275,387 words (tokens8), 21,426 unique words and 10,841 sentences. The average length of a source document is 227 words while for derived documents it is 254 words. ## Dataset Creation ### Curation Rationale Our main intention was to develop a standard benchmark resource for the evaluation of existing systems available for text reuse detection in general and specifically for Urdu language. To generate a corpus with realistic examples, we opted for the field of journalism. In journalism, the same news story is published in different newspapers in different forms. It is a standard practice followed by all the newspapers (reporters and editors) to reuse (verbatim or modified) a news story released by the news agency. ### Source Data #### Initial Data Collection and Normalization The COUNTER corpus consists of news articles (source documents) released by five news agencies in Pakistan i.e. Associated Press of Pakistan (APP), InternationalNews Network (INN), Independent News Pakistan (INP), News Network International (NNI) and South Asian News Agency (SANA). The corresponding news stories (derived documents) were extracted from nine daily published and large circulation national news papers of the All Pakistan Newspapers Society (APNS), who are subscribed to these news agencies. These include Nawa-e-Waqt, Daily Dunya, Express, Jang, Daily Waqt, Daily Insaf, Daily Aaj, Daily Islam and DailyPakistan. 
All of them are part of the mainstream national press, long established dailies with total circulation figures of over four million.7News agency texts (source documents) were provided (in electronic form) by the news agencies on a daily basis when they released the news. Newspaper stories (derived documents) were collected by three volunteers over a period of six months (from July to December 2014).National, Foreign, Business, Sports and Showbiz were the domains targeted for data collection. #### Who are the source language producers? ### Annotations #### Annotation process The corpus has been annotated at the document level with three classes of reuse i.e.Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND). The derived collection contains documents with various degrees of text reuse. Some of the newspaper stories (derived documents)are rewritten (either verbatim or paraphrased) from the new agencys text (source document) while others have been written by the journalists independently on their own. For the former case, source-derived document pairs are either tagged as Wholly Derived (WD) or Partially Derived (PD) depending on the volume of text reused from the news agencys text for creating the newspaper article while for the latter case, they are tagged as Non Derived (ND) as the journalists have not reused anything from the news agencys text but based on their own observations and findings, developed and documented the story. The annotations were carried out in three phases: (1) training phase, (2) annotations, (3)conflict resolving. During the training phase, annotators A and B manually annotated 60 document pairs, following a preliminary version of the annotation guidelines. A detailed meeting was carried out afterwards, discussing the problems and disagreements. It was observed that the highest number of disagreements were between PD and ND cases, as both found it difficult to distinguish between these two classes. The reason being that adjusting the threshold where a text is heavily paraphrased or new information added to it that it becomes independently written(ND). Following the discussion, the annotation guidelines were slightly revised, and the first 60 annotations results were saved. In the annotation phase, the remaining540 document pairs were manually examined by the two annotators (A and B). Both were asked to judge, and classify (at document level) whether a document(newspaper story) depending on the volume of text rewritten from the source (news agency article) falls into one of the following categories:remaining540 document pairs were manually examined by the two annotators (A and B). Both were asked to judge, and classify (at document level) whether a document(newspaper story) depending on the volume of text rewritten from the source (news agency article) falls into one of the following categories: Wholly Derived (WD)The News agency text is the only source for the reused newspaper text, which means it is a verbatim copy of the source. In this case, most of the reused text is word-to-word copy of the source text.Partially Derived (PD)The Newspaper text has been either derived from more than one news agency or most of the text is paraphrased by the editor when rewriting from news agency text source. In this case, most parts of the derived document contain paraphrased text or new facts and figures added by the journalists own findings. 
Non Derived (ND)The News agency text has not been used in the production of the newspaper text (though words may still co-occur in both documents), it has completely different facts and figures or is heavily paraphrased from the newsagencys copy. In this case, the derived document is independently written and has a lot more new text. #### Who are the annotators? The annotations were performed by three annotators (A, B and C), who were native Urdu language speakers and experts of paraphrasing mechanisms. All three were graduates, experienced in text annotations and having an advanced Urdu level. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators ### Licensing Information This dataset is licensed under the Creative Common Attribution-NonCommercial-ShareAlike 4.0 International License. (CC BY-NC-SA 4.0). ### Contributions Thanks to @arkhalid for adding this dataset.
[ "# Dataset Card for COUNTER", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Leaderboard: \n- Point of Contact: UCREL", "### Dataset Summary\n\nThe COrpus ofUrdu News TExt Reuse (COUNTER) corpus contains 1200 documents with realexamples of text reuse from the field of journalism. It has been manually annotatedat document level with three levels of reuse: wholly derived, partially derived andnon derived", "### Supported Tasks and Leaderboards\n\nother:text-reuse", "### Languages\n\nur", "## Dataset Structure", "### Data Instances\n\nHere is one example from the dataset:", "### Data Fields\n\n: The source document\n\n: The derived document\nFor each pair of source and derived documents. we have the following fields:\n\n: Name of the file in dataset\n\n: Headline of the news item\n\n: Main text of the news item\n\n: Number of words in document\n\n: Number of sentences in document\n\n: Number of words after stop word removal\n\n: The newspaper in which the news item was published\n\n: The date on which the news item is published DD.MM.YY\n\n: The category of news item from this list: \"business\", \"sports\", \"national\", \"foreign\", \"showbiz\".\n\n: Three classes of reuse from this list: Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND)", "### Data Splits\n\nOne split train with 600 pairs of documents.\n\nThe corpus is composed of two main document types: (1) source documents and (2) derived documents. There are total 1200 documents in the corpus: 600 are newsagency articles (source documents) and 600 are newspapers stories (derived documents). The corpus contains in total 275,387 words (tokens8), 21,426 unique words and 10,841 sentences. The average length of a source document is 227 words while for derived documents it is 254 words.", "## Dataset Creation", "### Curation Rationale\n\nOur main intention was to develop a standard benchmark resource for the evaluation of existing systems available for text reuse detection in general and specifically for Urdu language. To generate a corpus with realistic examples, we opted for the field of journalism. In journalism, the same news story is published in different newspapers in different forms. It is a standard practice followed by all the newspapers (reporters and editors) to reuse (verbatim or modified) a news story released by the news agency.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe COUNTER corpus consists of news articles (source documents) released by five news agencies in Pakistan i.e. Associated Press of Pakistan (APP), InternationalNews Network (INN), Independent News Pakistan (INP), News Network International (NNI) and South Asian News Agency (SANA). 
The corresponding news stories (derived documents) were extracted from nine daily published and large circulation national news papers of the All Pakistan Newspapers Society (APNS), who are subscribed to these news agencies.\nThese include Nawa-e-Waqt, Daily Dunya, Express, Jang, Daily Waqt, Daily Insaf, Daily Aaj, Daily Islam and DailyPakistan. All of them are part of the mainstream national press, long established dailies with total circulation figures of over four million.7News agency texts (source documents) were provided (in electronic form) by the news agencies on a daily basis when they released the news. Newspaper stories (derived documents) were collected by three volunteers over a period of six months (from July to December 2014).National, Foreign, Business, Sports and Showbiz were the domains targeted for data collection.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\nThe corpus has been annotated at the document level with three classes of reuse i.e.Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND).\nThe derived collection contains documents with various degrees of text reuse. Some of the newspaper stories (derived documents)are rewritten (either verbatim or paraphrased) from the new agency\u0019s text (source document) while others have been written by the journalists independently on their own. For the former case, source-derived document pairs are either tagged as Wholly Derived (WD) or Partially Derived (PD) depending on the volume of text reused from the news agency\u0019s text for creating the newspaper article while for the latter case, they are tagged as Non Derived (ND) as the journalists have not reused anything from the news agency\u0019s text but based on their own observations and findings, developed and documented the story.\n\nThe annotations were carried out in three phases: (1) training phase, (2) annotations, (3)conflict resolving. During the training phase, annotators A and B manually annotated 60 document pairs, following a preliminary version of the annotation guidelines. A detailed meeting was carried out afterwards, discussing the problems and disagreements. It was observed that the highest number of disagreements were between PD and ND cases, as both found it difficult to distinguish between these two classes. The reason being that adjusting the threshold where a text is heavily paraphrased or new information added to it that it becomes independently written(ND). Following the discussion, the annotation guidelines were slightly revised, and the first 60 annotations results were saved. In the annotation phase, the remaining540 document pairs were manually examined by the two annotators (A and B). Both were asked to judge, and classify (at document level) whether a document(newspaper story) depending on the volume of text rewritten from the source (news agency article) falls into one of the following categories:remaining540 document pairs were manually examined by the two annotators (A and B). Both were asked to judge, and classify (at document level) whether a document(newspaper story) depending on the volume of text rewritten from the source (news agency article) falls into one of the following categories:\nWholly Derived (WD)The News agency text is the only source for the reused newspaper text, which means it is a verbatim copy of the source. 
In this case, most of the reused text is word-to-word copy of the source text.Partially Derived (PD)The Newspaper text has been either derived from more than one news agency or most of the text is paraphrased by the editor when rewriting from news agency text source. In this case, most parts of the derived document contain paraphrased text or new facts and figures added by the journalist\u0019s own findings. Non Derived (ND)The News agency text has not been used in the production of the newspaper text (though words may still co-occur in both documents), it has completely different facts and figures or is heavily paraphrased from the newsagency\u0019s copy. In this case, the derived document is independently written and has a lot more new text.", "#### Who are the annotators?\n\nThe annotations were performed by three annotators (A, B and C), who were native Urdu language speakers and experts of paraphrasing mechanisms. All three were graduates, experienced in text annotations and having an advanced Urdu level.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nDataset provided for research purposes only. Please check dataset license for additional information.", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThis dataset is licensed under the Creative Common Attribution-NonCommercial-ShareAlike 4.0 International License.\n(CC BY-NC-SA 4.0).", "### Contributions\n\nThanks to @arkhalid for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #task_ids-topic-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Urdu #license-cc-by-nc-sa-4.0 #region-us \n", "# Dataset Card for COUNTER", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Leaderboard: \n- Point of Contact: UCREL", "### Dataset Summary\n\nThe COrpus ofUrdu News TExt Reuse (COUNTER) corpus contains 1200 documents with realexamples of text reuse from the field of journalism. It has been manually annotatedat document level with three levels of reuse: wholly derived, partially derived andnon derived", "### Supported Tasks and Leaderboards\n\nother:text-reuse", "### Languages\n\nur", "## Dataset Structure", "### Data Instances\n\nHere is one example from the dataset:", "### Data Fields\n\n: The source document\n\n: The derived document\nFor each pair of source and derived documents. we have the following fields:\n\n: Name of the file in dataset\n\n: Headline of the news item\n\n: Main text of the news item\n\n: Number of words in document\n\n: Number of sentences in document\n\n: Number of words after stop word removal\n\n: The newspaper in which the news item was published\n\n: The date on which the news item is published DD.MM.YY\n\n: The category of news item from this list: \"business\", \"sports\", \"national\", \"foreign\", \"showbiz\".\n\n: Three classes of reuse from this list: Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND)", "### Data Splits\n\nOne split train with 600 pairs of documents.\n\nThe corpus is composed of two main document types: (1) source documents and (2) derived documents. There are total 1200 documents in the corpus: 600 are newsagency articles (source documents) and 600 are newspapers stories (derived documents). The corpus contains in total 275,387 words (tokens8), 21,426 unique words and 10,841 sentences. The average length of a source document is 227 words while for derived documents it is 254 words.", "## Dataset Creation", "### Curation Rationale\n\nOur main intention was to develop a standard benchmark resource for the evaluation of existing systems available for text reuse detection in general and specifically for Urdu language. To generate a corpus with realistic examples, we opted for the field of journalism. In journalism, the same news story is published in different newspapers in different forms. It is a standard practice followed by all the newspapers (reporters and editors) to reuse (verbatim or modified) a news story released by the news agency.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe COUNTER corpus consists of news articles (source documents) released by five news agencies in Pakistan i.e. 
Associated Press of Pakistan (APP), InternationalNews Network (INN), Independent News Pakistan (INP), News Network International (NNI) and South Asian News Agency (SANA). The corresponding news stories (derived documents) were extracted from nine daily published and large circulation national news papers of the All Pakistan Newspapers Society (APNS), who are subscribed to these news agencies.\nThese include Nawa-e-Waqt, Daily Dunya, Express, Jang, Daily Waqt, Daily Insaf, Daily Aaj, Daily Islam and DailyPakistan. All of them are part of the mainstream national press, long established dailies with total circulation figures of over four million.7News agency texts (source documents) were provided (in electronic form) by the news agencies on a daily basis when they released the news. Newspaper stories (derived documents) were collected by three volunteers over a period of six months (from July to December 2014).National, Foreign, Business, Sports and Showbiz were the domains targeted for data collection.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\nThe corpus has been annotated at the document level with three classes of reuse i.e.Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND).\nThe derived collection contains documents with various degrees of text reuse. Some of the newspaper stories (derived documents)are rewritten (either verbatim or paraphrased) from the new agency\u0019s text (source document) while others have been written by the journalists independently on their own. For the former case, source-derived document pairs are either tagged as Wholly Derived (WD) or Partially Derived (PD) depending on the volume of text reused from the news agency\u0019s text for creating the newspaper article while for the latter case, they are tagged as Non Derived (ND) as the journalists have not reused anything from the news agency\u0019s text but based on their own observations and findings, developed and documented the story.\n\nThe annotations were carried out in three phases: (1) training phase, (2) annotations, (3)conflict resolving. During the training phase, annotators A and B manually annotated 60 document pairs, following a preliminary version of the annotation guidelines. A detailed meeting was carried out afterwards, discussing the problems and disagreements. It was observed that the highest number of disagreements were between PD and ND cases, as both found it difficult to distinguish between these two classes. The reason being that adjusting the threshold where a text is heavily paraphrased or new information added to it that it becomes independently written(ND). Following the discussion, the annotation guidelines were slightly revised, and the first 60 annotations results were saved. In the annotation phase, the remaining540 document pairs were manually examined by the two annotators (A and B). Both were asked to judge, and classify (at document level) whether a document(newspaper story) depending on the volume of text rewritten from the source (news agency article) falls into one of the following categories:remaining540 document pairs were manually examined by the two annotators (A and B). 
Both were asked to judge, and classify (at document level) whether a document(newspaper story) depending on the volume of text rewritten from the source (news agency article) falls into one of the following categories:\nWholly Derived (WD)The News agency text is the only source for the reused newspaper text, which means it is a verbatim copy of the source. In this case, most of the reused text is word-to-word copy of the source text.Partially Derived (PD)The Newspaper text has been either derived from more than one news agency or most of the text is paraphrased by the editor when rewriting from news agency text source. In this case, most parts of the derived document contain paraphrased text or new facts and figures added by the journalist\u0019s own findings. Non Derived (ND)The News agency text has not been used in the production of the newspaper text (though words may still co-occur in both documents), it has completely different facts and figures or is heavily paraphrased from the newsagency\u0019s copy. In this case, the derived document is independently written and has a lot more new text.", "#### Who are the annotators?\n\nThe annotations were performed by three annotators (A, B and C), who were native Urdu language speakers and experts of paraphrasing mechanisms. All three were graduates, experienced in text annotations and having an advanced Urdu level.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nDataset provided for research purposes only. Please check dataset license for additional information.", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThis dataset is licensed under the Creative Common Attribution-NonCommercial-ShareAlike 4.0 International License.\n(CC BY-NC-SA 4.0).", "### Contributions\n\nThanks to @arkhalid for adding this dataset." ]
[ 120, 8, 120, 28, 72, 16, 5, 6, 15, 161, 113, 5, 113, 4, 256, 10, 5, 755, 64, 8, 8, 7, 8, 25, 5, 6, 37, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #task_ids-topic-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Urdu #license-cc-by-nc-sa-4.0 #region-us \n# Dataset Card for COUNTER## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Leaderboard: \n- Point of Contact: UCREL### Dataset Summary\n\nThe COrpus ofUrdu News TExt Reuse (COUNTER) corpus contains 1200 documents with realexamples of text reuse from the field of journalism. It has been manually annotatedat document level with three levels of reuse: wholly derived, partially derived andnon derived### Supported Tasks and Leaderboards\n\nother:text-reuse### Languages\n\nur## Dataset Structure### Data Instances\n\nHere is one example from the dataset:", "passage: ### Data Fields\n\n: The source document\n\n: The derived document\nFor each pair of source and derived documents. we have the following fields:\n\n: Name of the file in dataset\n\n: Headline of the news item\n\n: Main text of the news item\n\n: Number of words in document\n\n: Number of sentences in document\n\n: Number of words after stop word removal\n\n: The newspaper in which the news item was published\n\n: The date on which the news item is published DD.MM.YY\n\n: The category of news item from this list: \"business\", \"sports\", \"national\", \"foreign\", \"showbiz\".\n\n: Three classes of reuse from this list: Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND)### Data Splits\n\nOne split train with 600 pairs of documents.\n\nThe corpus is composed of two main document types: (1) source documents and (2) derived documents. There are total 1200 documents in the corpus: 600 are newsagency articles (source documents) and 600 are newspapers stories (derived documents). The corpus contains in total 275,387 words (tokens8), 21,426 unique words and 10,841 sentences. The average length of a source document is 227 words while for derived documents it is 254 words.## Dataset Creation### Curation Rationale\n\nOur main intention was to develop a standard benchmark resource for the evaluation of existing systems available for text reuse detection in general and specifically for Urdu language. To generate a corpus with realistic examples, we opted for the field of journalism. In journalism, the same news story is published in different newspapers in different forms. It is a standard practice followed by all the newspapers (reporters and editors) to reuse (verbatim or modified) a news story released by the news agency.### Source Data#### Initial Data Collection and Normalization\n\nThe COUNTER corpus consists of news articles (source documents) released by five news agencies in Pakistan i.e. 
Associated Press of Pakistan (APP), InternationalNews Network (INN), Independent News Pakistan (INP), News Network International (NNI) and South Asian News Agency (SANA). The corresponding news stories (derived documents) were extracted from nine daily published and large circulation national news papers of the All Pakistan Newspapers Society (APNS), who are subscribed to these news agencies.\nThese include Nawa-e-Waqt, Daily Dunya, Express, Jang, Daily Waqt, Daily Insaf, Daily Aaj, Daily Islam and DailyPakistan. All of them are part of the mainstream national press, long established dailies with total circulation figures of over four million.7News agency texts (source documents) were provided (in electronic form) by the news agencies on a daily basis when they released the news. Newspaper stories (derived documents) were collected by three volunteers over a period of six months (from July to December 2014).National, Foreign, Business, Sports and Showbiz were the domains targeted for data collection.#### Who are the source language producers?### Annotations" ]
57df4e6454af2e89c9b71f8a9d9a0fcc21ba0a60
# Dataset Card for [covid_qa_castorini]

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://covidqa.ai
- **Repository:** https://github.com/castorini/pygaggle
- **Paper:** https://arxiv.org/abs/2004.11339
- **Point of Contact:** [Castorini research group @UWaterloo](https://github.com/castorini/)

### Dataset Summary

CovidQA is a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle's COVID-19 Open Research Dataset Challenge. The dataset comprises 156 question-article pairs with 27 questions (topics) and 85 unique articles.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

**What do the instances that comprise the dataset represent?**
Each instance represents a question, a context (document passage from the CORD-19 dataset) and an answer.

**How many instances are there in total?**
156 question-article pairs, grouped into 27 question topics.

**What data does each instance consist of?**
Each instance is a query (natural language question and keyword-based), a set of answers, and a document id with its title associated with each answer.

[More Information Needed]

### Data Fields

The data was annotated in SQuAD-style fashion, where each row contains:

* **question_query**: Natural language question query
* **keyword_query**: Keyword-based query
* **category_name**: Category to which the queries belong
* **answers**: List of answers
  * **id**: The document ID the answer is found in
  * **title**: Title of the document of the answer
  * **exact_answer**: Text (string) of the exact answer

### Data Splits

**data/kaggle-lit-review-0.2.json**: 156 question-article pairs with 27 questions (topics) and 85 unique articles from CORD-19.

[More Information Needed]

## Dataset Creation

The dataset aims to help guide research until more substantial evaluation resources become available. Being a smaller dataset, it can be helpful for evaluating the zero-shot or transfer capabilities of existing models on topics specifically related to COVID-19.

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

[More Information Needed]

### Annotations

Five of the co-authors participated in this annotation effort, applying the aforementioned approach, with one lead annotator responsible for approving topics and answering technical questions from the other annotators.
Two annotators are undergraduate students majoring in computer science, one is a science alumna, another is a computer science professor, and the lead annotator is a graduate student in computer science; all are affiliated with the University of Waterloo.

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

The dataset was intended as a stopgap measure for guiding research until more substantial evaluation resources become available.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

While this dataset, comprising 124 question-article pairs as of the version 0.1 release, does not have sufficient examples for supervised machine learning, it can be helpful for evaluating the zero-shot or transfer capabilities of existing models on topics specifically related to COVID-19.

## Additional Information

The authors listed on the homepage are maintaining/supporting the dataset.

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is licensed under the [Apache License 2.0](https://github.com/castorini/pygaggle/blob/master/LICENSE).

### Citation Information

```
@article{tang2020rapidly,
  title={Rapidly Bootstrapping a Question Answering Dataset for COVID-19},
  author={Tang, Raphael and Nogueira, Rodrigo and Zhang, Edwin and Gupta, Nikhil and Cam, Phuong and Cho, Kyunghyun and Lin, Jimmy},
  journal={arXiv preprint arXiv:2004.11339},
  year={2020}
}
```

### Contributions

Thanks to [@olinguyen](https://github.com/olinguyen) for adding this dataset.
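### Example usage

The snippet below is an illustrative sketch rather than part of the original card; it assumes access through the Hugging Face `datasets` library under the `covid_qa_castorini` identifier and the field names listed in the dataset metadata. If the default configuration differs, the config name may need to be passed explicitly (e.g. `load_dataset("covid_qa_castorini", "covid_qa_castorini", ...)`).

```python
from datasets import load_dataset

# One example per question topic (27 in total); each bundles a natural-language
# query, a keyword query, and the article-level answers for that topic.
ds = load_dataset("covid_qa_castorini", split="train")

topic = ds[0]
print(topic["category_name"], "|", topic["question_query"], "|", topic["keyword_query"])

# "answers" is a sequence feature, so it comes back as parallel lists.
answers = topic["answers"]
for doc_id, title, exact in zip(answers["id"], answers["title"], answers["exact_answer"]):
    print(doc_id, title, "->", exact)
```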
covid_qa_castorini
[ "task_categories:question-answering", "task_ids:open-domain-qa", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:2004.11339", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa", "extractive-qa"], "paperswithcode_id": "covidqa", "pretty_name": "CovidQaCastorini", "dataset_info": [{"config_name": "covid_qa_deepset", "features": [{"name": "document_id", "dtype": "int32"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "is_impossible", "dtype": "bool"}, {"name": "id", "dtype": "int32"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 65151262, "num_examples": 2019}], "download_size": 4418117, "dataset_size": 65151262}, {"config_name": "covidqa", "features": [{"name": "category_name", "dtype": "string"}, {"name": "question_query", "dtype": "string"}, {"name": "keyword_query", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "exact_answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 33757, "num_examples": 27}], "download_size": 51438, "dataset_size": 33757}, {"config_name": "covid_qa_castorini", "features": [{"name": "category_name", "dtype": "string"}, {"name": "question_query", "dtype": "string"}, {"name": "keyword_query", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "exact_answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 33757, "num_examples": 27}], "download_size": 51438, "dataset_size": 33757}]}
2024-01-18T09:45:02+00:00
[ "2004.11339" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-apache-2.0 #arxiv-2004.11339 #region-us
# Dataset Card for [covid_qa_castorini] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Point of Contact: Castorini research group @UWaterloo ### Dataset Summary CovidQA is a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle’s COVID-19 Open Research Dataset Challenge. The dataset comprises 156 question-article pairs with 27 questions (topics) and 85 unique articles. ### Supported Tasks and Leaderboards ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances What do the instances that comprise the dataset represent? Each represents a question, a context (document passage from the CORD19 dataset) and an answer. How many instances are there in total? What data does each instance consist of? Each instance is a query (natural language question and keyword-based), a set of answers, and a document id with its title associated with each answer. ### Data Fields The data was annotated in SQuAD style fashion, where each row contains: * question_query: Natural language question query * keyword_query: Keyword-based query * category_name: Category in which the queries are part of * answers: List of answers * id: The document ID the answer is found on * title: Title of the document of the answer * exact_answer: Text (string) of the exact answer ### Data Splits data/kaggle-lit-review-0.2.json: 156 question-article pairs with 27 questions (topics) and 85 unique articles from CORD-19. ## Dataset Creation The dataset aims to help for guiding research until more substantial evaluation resources become available. Being a smaller dataset, it can be helpful for evaluating the zero-shot or transfer capabilities of existing models on topics specifically related to COVID-19. ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations Five of the co-authors participated in this annotation effort, applying the aforementioned approach, with one lead annotator responsible for approving topics and answering technical questions from the other annotators. Two annotators are undergraduate students majoring in computer science, one is a science alumna, another is a computer science professor, and the lead annotator is a graduate student in computer science—all affiliated with the University of Waterloo. #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset The dataset was intended as a stopgap measure for guiding research until more substantial evaluation resources become available. ### Discussion of Biases ### Other Known Limitations While this dataset, comprising 124 question–article pairs as of the present version 0.1 release, does not have sufficient examples for supervised machine learning, it can be helpful for evaluating the zero-shot or transfer capabilities of existing models on topics specifically related to COVID-19. 
## Additional Information The listed authors in the homepage are maintaining/supporting the dataset. ### Dataset Curators ### Licensing Information The dataset is licensed under the Apache License 2.0. ### Contributions Thanks to @olinguyen for adding this dataset.
[ "# Dataset Card for [covid_qa_castorini]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: Castorini research group @UWaterloo", "### Dataset Summary\n\nCovidQA is a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered \nfrom Kaggle’s COVID-19 Open Research Dataset Challenge.\nThe dataset comprises 156 question-article pairs with 27 questions (topics) and 85 unique articles.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe text in the dataset is in English.", "## Dataset Structure", "### Data Instances\n\nWhat do the instances that comprise the dataset represent?\nEach represents a question, a context (document passage from the CORD19 dataset) and an answer.\n\nHow many instances are there in total?\n\nWhat data does each instance consist of?\nEach instance is a query (natural language question and keyword-based), a set of answers, and a document id with its title associated with each answer.", "### Data Fields\n\nThe data was annotated in SQuAD style fashion, where each row contains:\n\n* question_query: Natural language question query\n* keyword_query: Keyword-based query\n* category_name: Category in which the queries are part of\n* answers: List of answers\n * id: The document ID the answer is found on\n * title: Title of the document of the answer\n * exact_answer: Text (string) of the exact answer", "### Data Splits\n\ndata/kaggle-lit-review-0.2.json: 156 question-article pairs with 27 questions (topics) and 85 unique articles from\nCORD-19.", "## Dataset Creation\n\nThe dataset aims to help for guiding research until more substantial evaluation resources become available. Being a smaller dataset,\nit can be helpful for evaluating the zero-shot or transfer capabilities of existing models on topics specifically related to COVID-19.", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations\n\nFive of the co-authors participated in this annotation effort, applying the aforementioned approach, with one lead \nannotator responsible for approving topics and answering technical questions from the other annotators. 
Two annotators are\nundergraduate students majoring in computer science, one is a science alumna, another is a computer science professor, \nand the lead annotator is a graduate student in computer science—all affiliated with the University of Waterloo.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset was intended as a stopgap measure for guiding research until more substantial evaluation resources become available.", "### Discussion of Biases", "### Other Known Limitations\n\nWhile this dataset, comprising 124 question–article pairs as of the present version 0.1 release, does not have sufficient\nexamples for supervised machine learning, it can be helpful for evaluating the zero-shot or transfer capabilities\nof existing models on topics specifically related to COVID-19.", "## Additional Information\n\nThe listed authors in the homepage are maintaining/supporting the dataset.", "### Dataset Curators", "### Licensing Information\n\nThe dataset is licensed under the Apache License 2.0.", "### Contributions\n\nThanks to @olinguyen for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-apache-2.0 #arxiv-2004.11339 #region-us \n", "# Dataset Card for [covid_qa_castorini]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: Castorini research group @UWaterloo", "### Dataset Summary\n\nCovidQA is a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered \nfrom Kaggle’s COVID-19 Open Research Dataset Challenge.\nThe dataset comprises 156 question-article pairs with 27 questions (topics) and 85 unique articles.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe text in the dataset is in English.", "## Dataset Structure", "### Data Instances\n\nWhat do the instances that comprise the dataset represent?\nEach represents a question, a context (document passage from the CORD19 dataset) and an answer.\n\nHow many instances are there in total?\n\nWhat data does each instance consist of?\nEach instance is a query (natural language question and keyword-based), a set of answers, and a document id with its title associated with each answer.", "### Data Fields\n\nThe data was annotated in SQuAD style fashion, where each row contains:\n\n* question_query: Natural language question query\n* keyword_query: Keyword-based query\n* category_name: Category in which the queries are part of\n* answers: List of answers\n * id: The document ID the answer is found on\n * title: Title of the document of the answer\n * exact_answer: Text (string) of the exact answer", "### Data Splits\n\ndata/kaggle-lit-review-0.2.json: 156 question-article pairs with 27 questions (topics) and 85 unique articles from\nCORD-19.", "## Dataset Creation\n\nThe dataset aims to help for guiding research until more substantial evaluation resources become available. Being a smaller dataset,\nit can be helpful for evaluating the zero-shot or transfer capabilities of existing models on topics specifically related to COVID-19.", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations\n\nFive of the co-authors participated in this annotation effort, applying the aforementioned approach, with one lead \nannotator responsible for approving topics and answering technical questions from the other annotators. 
Two annotators are\nundergraduate students majoring in computer science, one is a science alumna, another is a computer science professor, \nand the lead annotator is a graduate student in computer science—all affiliated with the University of Waterloo.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset was intended as a stopgap measure for guiding research until more substantial evaluation resources become available.", "### Discussion of Biases", "### Other Known Limitations\n\nWhile this dataset, comprising 124 question–article pairs as of the present version 0.1 release, does not have sufficient\nexamples for supervised machine learning, it can be helpful for evaluating the zero-shot or transfer capabilities\nof existing models on topics specifically related to COVID-19.", "## Additional Information\n\nThe listed authors in the homepage are maintaining/supporting the dataset.", "### Dataset Curators", "### Licensing Information\n\nThe dataset is licensed under the Apache License 2.0.", "### Contributions\n\nThanks to @olinguyen for adding this dataset." ]
[ 111, 15, 120, 33, 68, 10, 14, 6, 92, 104, 41, 58, 7, 4, 10, 10, 109, 5, 9, 8, 8, 30, 8, 70, 23, 6, 19, 17 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-apache-2.0 #arxiv-2004.11339 #region-us \n# Dataset Card for [covid_qa_castorini]## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: Castorini research group @UWaterloo### Dataset Summary\n\nCovidQA is a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered \nfrom Kaggle’s COVID-19 Open Research Dataset Challenge.\nThe dataset comprises 156 question-article pairs with 27 questions (topics) and 85 unique articles.### Supported Tasks and Leaderboards### Languages\n\nThe text in the dataset is in English.## Dataset Structure### Data Instances\n\nWhat do the instances that comprise the dataset represent?\nEach represents a question, a context (document passage from the CORD19 dataset) and an answer.\n\nHow many instances are there in total?\n\nWhat data does each instance consist of?\nEach instance is a query (natural language question and keyword-based), a set of answers, and a document id with its title associated with each answer." ]
e1e05fb4a3a3f581bf0811e854305299e835523f
# Dataset Card for COVID-QA

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** https://github.com/deepset-ai/COVID-QA
- **Paper:** https://openreview.net/forum?id=JENSKEEzsoU
- **Point of Contact:** [deepset AI](https://github.com/deepset-ai)

### Dataset Summary

COVID-QA is a Question Answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19. A total of 147 scientific articles from the CORD-19 dataset were annotated by 15 experts.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

**What do the instances that comprise the dataset represent?**
Each instance represents a question, a context (document passage from the CORD-19 dataset) and an answer.

**How many instances are there in total?**
2,019 instances.

**What data does each instance consist of?**
Each instance is a question, a set of answers, and an id associated with each answer.

[More Information Needed]

### Data Fields

The data was annotated in SQuAD-style fashion, where each row contains:

* **question**: Query question
* **context**: Context text to obtain the answer from
* **document_id**: The document ID of the context text
* **answer**: Dictionary containing the answer string and the start index

### Data Splits

**data/COVID-QA.json**: 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

The initial data collected comes from 147 scientific articles from the CORD-19 dataset. Questions and answers were then annotated afterwards.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

While annotators were volunteers, they were required to have at least a Master's degree in biomedical sciences. The annotation team was led by a medical doctor (G.A.R.) who vetted the volunteers' credentials and manually verified each question/answer pair produced. We used an existing, web-based annotation tool that had been created by deepset and is available as part of their neural search framework [haystack](https://github.com/deepset-ai/haystack).

#### Who are the annotators?

The annotators are 15 volunteer biomedical experts on scientific articles related to COVID-19.
### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset The dataset aims to help build question answering models serving clinical and scientific researchers, public health authorities, and frontline workers. These QA systems can help them find answers and patterns in research papers by locating relevant answers to common questions from scientific articles. ### Discussion of Biases [More Information Needed] ### Other Known Limitations ## Additional Information The listed authors on the homepage are maintaining/supporting the dataset. ### Dataset Curators [More Information Needed] ### Licensing Information The COVID-QA dataset is licensed under the [Apache License 2.0](https://github.com/deepset-ai/COVID-QA/blob/master/LICENSE) ### Citation Information ``` @inproceedings{moller2020covid, title={COVID-QA: A Question Answering Dataset for COVID-19}, author={M{\"o}ller, Timo and Reina, Anthony and Jayakumar, Raghavan and Pietsch, Malte}, booktitle={Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020}, year={2020} } ``` ### Contributions Thanks to [@olinguyen](https://github.com/olinguyen) for adding this dataset.
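To make the SQuAD-style layout described above concrete, here is a minimal loading sketch. It assumes the dataset can be loaded through the Hugging Face `datasets` library under the identifier `covid_qa_deepset` given in the metadata below; treat it as an illustration rather than an official recipe.

```python
from datasets import load_dataset

# Illustrative sketch: load the single "train" split of COVID-QA
# (assumes the dataset is obtainable from the Hugging Face Hub as "covid_qa_deepset").
dataset = load_dataset("covid_qa_deepset", split="train")

example = dataset[0]
print(example["question"])
print(example["document_id"])

# "answers" holds parallel lists of answer strings and character start offsets into "context".
for text, start in zip(example["answers"]["text"], example["answers"]["answer_start"]):
    span = example["context"][start:start + len(text)]
    print(text == span, text[:60])
```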
covid_qa_deepset
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa", "extractive-qa"], "pretty_name": "COVID-QA", "dataset_info": {"features": [{"name": "document_id", "dtype": "int32"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "is_impossible", "dtype": "bool"}, {"name": "id", "dtype": "int32"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "config_name": "covid_qa_deepset", "splits": [{"name": "train", "num_bytes": 65151262, "num_examples": 2019}], "download_size": 4418117, "dataset_size": 65151262}}
2024-01-18T09:45:30+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-closed-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-apache-2.0 #region-us
# Dataset Card for COVID-QA ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: URL - Paper: URL - Point of Contact: deepset AI ### Dataset Summary COVID-QA is a Question Answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19. A total of 147 scientific articles from the CORD-19 dataset were annotated by 15 experts. ### Supported Tasks and Leaderboards ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances What do the instances that comprise the dataset represent? Each represents a question, a context (document passage from the CORD19 dataset) and an answer. How many instances are there in total? 2019 instances What data does each instance consist of? Each instance is a question, a set of answers, and an id associated with each answer. ### Data Fields The data was annotated in SQuAD style fashion, where each row contains: * question: Query question * context: Context text to obtain the answer from * document_id The document ID of the context text * answer: Dictionary containing the answer string and the start index ### Data Splits data/URL: 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The inital data collected comes from 147 scientific articles from the CORD-19 dataset. Question and answers were then annotated afterwards. #### Who are the source language producers? ### Annotations #### Annotation process While annotators were volunteers, they were required to have at least a Master’s degree in biomedical sciences. The annotation team was led by a medical doctor (G.A.R.) who vetted the volunteer’s credentials and manually verified each question/answer pair produced. We used an existing, web-based annotation tool that had been created by deepset and is available at their Neural Search framework haystack. #### Who are the annotators? The annotators are 15 volunteer biomedical experts on scientific articles related to COVID-19. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset The dataset aims to help build question answering models serving clinical and scientific researchers, public health authorities, and frontline workers. These QA systems can help them find answers and patterns in research papers by locating relevant answers to common questions from scientific articles. ### Discussion of Biases ### Other Known Limitations ## Additional Information The listed authors in the homepage are maintaining/supporting the dataset. ### Dataset Curators ### Licensing Information The Proto_qa dataset is licensed under the Apache License 2.0 ### Contributions Thanks to @olinguyen for adding this dataset.
[ "# Dataset Card for COVID-QA", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: deepset AI", "### Dataset Summary\n\nCOVID-QA is a Question Answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.\nA total of 147 scientific articles from the CORD-19 dataset were annotated by 15 experts.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe text in the dataset is in English.", "## Dataset Structure", "### Data Instances\n\nWhat do the instances that comprise the dataset represent?\nEach represents a question, a context (document passage from the CORD19 dataset) and an answer.\n\nHow many instances are there in total?\n2019 instances\n\nWhat data does each instance consist of?\nEach instance is a question, a set of answers, and an id associated with each answer.", "### Data Fields\n\nThe data was annotated in SQuAD style fashion, where each row contains:\n\n* question: Query question\n* context: Context text to obtain the answer from\n* document_id The document ID of the context text\n* answer: Dictionary containing the answer string and the start index", "### Data Splits\n\ndata/URL: 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe inital data collected comes from 147 scientific articles from the CORD-19 dataset. Question and answers were then\nannotated afterwards.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\nWhile annotators were volunteers, they were required to have at least a Master’s degree in biomedical sciences.\nThe annotation team was led by a medical doctor (G.A.R.) who vetted the volunteer’s credentials and\nmanually verified each question/answer pair produced. 
We used an existing, web-based annotation tool that had been\ncreated by deepset and is available at their Neural Search framework haystack.", "#### Who are the annotators?\n\nThe annotators are 15 volunteer biomedical experts on scientific articles related to COVID-19.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset aims to help build question answering models serving clinical and scientific researchers, public health authorities, and frontline workers.\nThese QA systems can help them find answers and patterns in research papers by locating relevant answers to common questions from scientific articles.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information\n\nThe listed authors in the homepage are maintaining/supporting the dataset.", "### Dataset Curators", "### Licensing Information\n\nThe Proto_qa dataset is licensed under the Apache License 2.0", "### Contributions\n\nThanks to @olinguyen for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-closed-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-apache-2.0 #region-us \n", "# Dataset Card for COVID-QA", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: deepset AI", "### Dataset Summary\n\nCOVID-QA is a Question Answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.\nA total of 147 scientific articles from the CORD-19 dataset were annotated by 15 experts.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe text in the dataset is in English.", "## Dataset Structure", "### Data Instances\n\nWhat do the instances that comprise the dataset represent?\nEach represents a question, a context (document passage from the CORD19 dataset) and an answer.\n\nHow many instances are there in total?\n2019 instances\n\nWhat data does each instance consist of?\nEach instance is a question, a set of answers, and an id associated with each answer.", "### Data Fields\n\nThe data was annotated in SQuAD style fashion, where each row contains:\n\n* question: Query question\n* context: Context text to obtain the answer from\n* document_id The document ID of the context text\n* answer: Dictionary containing the answer string and the start index", "### Data Splits\n\ndata/URL: 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe inital data collected comes from 147 scientific articles from the CORD-19 dataset. Question and answers were then\nannotated afterwards.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\nWhile annotators were volunteers, they were required to have at least a Master’s degree in biomedical sciences.\nThe annotation team was led by a medical doctor (G.A.R.) who vetted the volunteer’s credentials and\nmanually verified each question/answer pair produced. 
We used an existing, web-based annotation tool that had been\ncreated by deepset and is available at their Neural Search framework haystack.", "#### Who are the annotators?\n\nThe annotators are 15 volunteer biomedical experts on scientific articles related to COVID-19.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset aims to help build question answering models serving clinical and scientific researchers, public health authorities, and frontline workers.\nThese QA systems can help them find answers and patterns in research papers by locating relevant answers to common questions from scientific articles.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information\n\nThe listed authors in the homepage are maintaining/supporting the dataset.", "### Dataset Curators", "### Licensing Information\n\nThe Proto_qa dataset is licensed under the Apache License 2.0", "### Contributions\n\nThanks to @olinguyen for adding this dataset." ]
[ 103, 9, 120, 22, 67, 10, 14, 6, 82, 66, 35, 5, 7, 4, 42, 10, 5, 104, 29, 8, 8, 66, 8, 7, 23, 6, 21, 17 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-closed-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-apache-2.0 #region-us \n# Dataset Card for COVID-QA## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: deepset AI### Dataset Summary\n\nCOVID-QA is a Question Answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.\nA total of 147 scientific articles from the CORD-19 dataset were annotated by 15 experts.### Supported Tasks and Leaderboards### Languages\n\nThe text in the dataset is in English.## Dataset Structure### Data Instances\n\nWhat do the instances that comprise the dataset represent?\nEach represents a question, a context (document passage from the CORD19 dataset) and an answer.\n\nHow many instances are there in total?\n2019 instances\n\nWhat data does each instance consist of?\nEach instance is a question, a set of answers, and an id associated with each answer.### Data Fields\n\nThe data was annotated in SQuAD style fashion, where each row contains:\n\n* question: Query question\n* context: Context text to obtain the answer from\n* document_id The document ID of the context text\n* answer: Dictionary containing the answer string and the start index" ]
e9411605649d8c887fd3f10e1425fdf4ef01fc2c
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/UCSD-AI4H/COVID-Dialogue - **Repository:** The data is also present in the same [GIT](https://github.com/UCSD-AI4H/COVID-Dialogue) repository - **Paper:** https://pengtaoxie.github.io/coviddiag.pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary COVID-Dialogue-Dataset-English is an English medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 603 consultations. COVID-Dialogue-Dataset-Chinese is a Chinese medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 1393 consultations. The dataset is present as a single text file. COVID-Dialogue-Dataset-Chinese.txt for Chinese and COVID-Dialogue-Dataset-English.txt for English. ### Supported Tasks and Leaderboards Used for QA tasks. There is also a COVID-19 dialogue generation model available for the Chinese Data. The pre-print and more information is available in [this arxiv pre-print](https://arxiv.org/abs/2005.05442). ### Languages Monolingual. The datasets are in English (EN) and Chinese (ZH) ## Dataset Structure ### Data Instances An example of dialogue is: ``` { 'dialogue_id': 602, 'dialogue_url': 'https://www.healthtap.com/member/fg?page=/search/covid', 'dialogue_turns': [{'speaker': 'Patient', 'utterance': 'Can coronavirus symptoms be mild for some people versus severe? For example, could it just involve being very fatigued, low grade fever for a few days and not the extreme symptoms? Or is it always a full blown cold and struggle to breathe?Can coronavirus symptoms be mild for some people versus severe? For example, could it just involve being very fatigued, low grade fever for a few days and not the extreme symptoms? Or is it always a full blown cold and struggle to breathe?'}, {'speaker': 'Doctor', 'utterance': 'In brief: Symptoms vary. Some may have no symptoms at all. Some can be life threatening. Would you like to video or text chat with me?'}] } ``` The dataset is built from [icliniq.com](https://www.icliniq.com/), [healthcaremagic.com](https://www.healthcaremagic.com/), [healthtap.com](https://www.healthtap.com/) and all copyrights of the data belong to these websites. 
_(for English)_ The dataset is built from [Haodf.com](https://www.haodf.com/) and all copyrights of the data belong to [Haodf.com](https://www.haodf.com/). _(for Chinese)_ ### Data Fields Each consultation consists of the below: - ID - URL - Description of patient’s medical condition - Dialogue - Diagnosis and suggestions (Optional, mostly for Chinese) For generating the QA only the below fields have been considered: - ID: Consultation Identifier (restarts for each file) - URL: The URL link of the extracted conversation - Dialogue: The conversation between the doctor and the patient. These are arranged as below in the prepared dataset. Each item will be represented with these parameters. - "file_name": string - signifies the file from which the conversation was extracted - "dialogue_id": int32 - the dialogue id - "dialogue_url": string - url of the conversation - "dialogue_turns": datasets.Sequence - sequence of dialogues between the patient and the doctor. Consists of ClassLabel(names=["病人", "医生"]) and "utterance" (string) for each turn (ClassLabel(names=["Patient", "Doctor"]) for English). ### Data Splits There are no data splits on the original data. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @article{ju2020CovidDialog, title={CovidDialog: Medical Dialogue Datasets about COVID-19}, author={Ju, Zeqian and Chakravorty, Subrato and He, Xuehai and Chen, Shu and Yang, Xingyi and Xie, Pengtao}, journal={ https://github.com/UCSD-AI4H/COVID-Dialogue}, year={2020} } ### Contributions Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset.
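As a concrete illustration of the turn structure described above, the following is a rough loading sketch using the Hugging Face `datasets` library. The config names `en`/`zh` follow the metadata below, but Hub availability and any manual-download requirements are assumptions, not something stated in the card.

```python
from datasets import load_dataset

# Rough sketch only: load the English configuration of the dialogue dataset
# (config names "en" / "zh" are taken from the metadata; availability is assumed).
dialogues = load_dataset("covid_qa_ucsd", "en", split="train")

example = dialogues[0]
turns = example["dialogue_turns"]

# "speaker" is stored as a ClassLabel; label order (0 = Patient, 1 = Doctor)
# follows the Data Fields description above.
speaker_names = ["Patient", "Doctor"]

for speaker_id, utterance in zip(turns["speaker"], turns["utterance"]):
    print(f"{speaker_names[speaker_id]}: {utterance[:80]}")
```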
covid_qa_ucsd
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:found", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:en", "language:zh", "license:unknown", "arxiv:2005.05442", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["expert-generated", "found"], "language": ["en", "zh"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K", "n<1K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "CovidQaUcsd", "config_names": ["en", "zh"], "dataset_info": [{"config_name": "en", "features": [{"name": "dialogue_id", "dtype": "int32"}, {"name": "dialogue_url", "dtype": "string"}, {"name": "dialogue_turns", "sequence": [{"name": "speaker", "dtype": {"class_label": {"names": {"0": "Patient", "1": "Doctor"}}}}, {"name": "utterance", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 484944, "num_examples": 572}], "download_size": 0, "dataset_size": 484944}, {"config_name": "zh", "features": [{"name": "dialogue_id", "dtype": "int32"}, {"name": "dialogue_url", "dtype": "string"}, {"name": "dialogue_turns", "sequence": [{"name": "speaker", "dtype": {"class_label": {"names": {"0": "\u75c5\u4eba", "1": "\u533b\u751f"}}}}, {"name": "utterance", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1352377, "num_examples": 1088}], "download_size": 0, "dataset_size": 1352377}]}
2024-01-18T09:46:01+00:00
[ "2005.05442" ]
[ "en", "zh" ]
TAGS #task_categories-question-answering #task_ids-closed-domain-qa #annotations_creators-found #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-English #language-Chinese #license-unknown #arxiv-2005.05442 #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: The data is also present in the same GIT repository - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary COVID-Dialogue-Dataset-English is an English medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 603 consultations. COVID-Dialogue-Dataset-Chinese is a Chinese medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 1393 consultations. The dataset is present as a single text file. URL for Chinese and URL for English. ### Supported Tasks and Leaderboards Used for QA tasks. There is also a COVID-19 dialogue generation model available for the Chinese Data. The pre-print and more information is available in this arxiv pre-print. ### Languages Monolingual. The datasets are in English (EN) and Chinese (ZH) ## Dataset Structure ### Data Instances An example of dialogue is: The dataset is built from URL, URL, URL and all copyrights of the data belong to these websites. _(for English)_ The dataset is built from URL and all copyrights of the data belong to URL. _(for Chinese)_ ### Data Fields Each consultation consists of the below: - ID - URL - Description of patient’s medical condition - Dialogue - Diagnosis and suggestions (Optional, mostly for Chinese) For generating the QA only the below fields have been considered: - ID : Consultatation Identifier (restarts for each file) - URL: The url link of the extracted conversation - Dialogue : The conversation between the doctor and the patient. These are arranged as below in the prepared dataset. Each item will be represented with these parameters. - "file_name": string - signifies the file from which the conversation was extracted - "dialogue_id": int32 - the dialogue id - "dialogue_url": string - url of the conversation - "dialogue_turns": datasets.Sequence - sequence of dialogues between patient and the doctor.Consists ClassLabel(names=["病人", "医生"]), and "utterance"(string) for each turn. (ClassLable(names=["Patient", "Doctor"]) for english) ### Data Splits There are no data splits on the original data ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information @article{ju2020CovidDialog, title={CovidDialog: Medical Dialogue Datasets about COVID-19}, author={Ju, Zeqian and Chakravorty, Subrato and He, Xuehai and Chen, Shu and Yang, Xingyi and Xie, Pengtao}, journal={ URL year={2020} } ### Contributions Thanks to @vrindaprabhu for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: The data is also present in the same GIT repository\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nCOVID-Dialogue-Dataset-English is an English medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 603 consultations.\n\nCOVID-Dialogue-Dataset-Chinese is a Chinese medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 1393 consultations.\n\nThe dataset is present as a single text file. URL for Chinese and URL for English.", "### Supported Tasks and Leaderboards\n\nUsed for QA tasks. There is also a COVID-19 dialogue generation model available for the Chinese Data. The pre-print and more information is available in this arxiv pre-print.", "### Languages\n\nMonolingual. The datasets are in English (EN) and Chinese (ZH)", "## Dataset Structure", "### Data Instances\n\nAn example of dialogue is:\n\n\n\nThe dataset is built from URL, URL, URL and all copyrights of the data belong to these websites. _(for English)_\n\nThe dataset is built from URL and all copyrights of the data belong to URL. _(for Chinese)_", "### Data Fields\n\nEach consultation consists of the below:\n- ID\n- URL\n- Description of patient’s medical condition\n- Dialogue\n- Diagnosis and suggestions (Optional, mostly for Chinese)\n\nFor generating the QA only the below fields have been considered:\n- ID : Consultatation Identifier (restarts for each file)\n- URL: The url link of the extracted conversation\n- Dialogue : The conversation between the doctor and the patient.\n\nThese are arranged as below in the prepared dataset. Each item will be represented with these parameters.\n\n- \"file_name\": string - signifies the file from which the conversation was extracted\n- \"dialogue_id\": int32 - the dialogue id\n- \"dialogue_url\": string - url of the conversation\n- \"dialogue_turns\": datasets.Sequence - sequence of dialogues between patient and the doctor.Consists ClassLabel(names=[\"病人\", \"医生\"]), and \"utterance\"(string) for each turn. 
(ClassLable(names=[\"Patient\", \"Doctor\"]) for english)", "### Data Splits\n\nThere are no data splits on the original data", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@article{ju2020CovidDialog,\n title={CovidDialog: Medical Dialogue Datasets about COVID-19},\n author={Ju, Zeqian and Chakravorty, Subrato and He, Xuehai and Chen, Shu and Yang, Xingyi and Xie, Pengtao},\n journal={ URL \n year={2020}\n}", "### Contributions\n\nThanks to @vrindaprabhu for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-closed-domain-qa #annotations_creators-found #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-English #language-Chinese #license-unknown #arxiv-2005.05442 #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: The data is also present in the same GIT repository\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nCOVID-Dialogue-Dataset-English is an English medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 603 consultations.\n\nCOVID-Dialogue-Dataset-Chinese is a Chinese medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 1393 consultations.\n\nThe dataset is present as a single text file. URL for Chinese and URL for English.", "### Supported Tasks and Leaderboards\n\nUsed for QA tasks. There is also a COVID-19 dialogue generation model available for the Chinese Data. The pre-print and more information is available in this arxiv pre-print.", "### Languages\n\nMonolingual. The datasets are in English (EN) and Chinese (ZH)", "## Dataset Structure", "### Data Instances\n\nAn example of dialogue is:\n\n\n\nThe dataset is built from URL, URL, URL and all copyrights of the data belong to these websites. _(for English)_\n\nThe dataset is built from URL and all copyrights of the data belong to URL. _(for Chinese)_", "### Data Fields\n\nEach consultation consists of the below:\n- ID\n- URL\n- Description of patient’s medical condition\n- Dialogue\n- Diagnosis and suggestions (Optional, mostly for Chinese)\n\nFor generating the QA only the below fields have been considered:\n- ID : Consultatation Identifier (restarts for each file)\n- URL: The url link of the extracted conversation\n- Dialogue : The conversation between the doctor and the patient.\n\nThese are arranged as below in the prepared dataset. Each item will be represented with these parameters.\n\n- \"file_name\": string - signifies the file from which the conversation was extracted\n- \"dialogue_id\": int32 - the dialogue id\n- \"dialogue_url\": string - url of the conversation\n- \"dialogue_turns\": datasets.Sequence - sequence of dialogues between patient and the doctor.Consists ClassLabel(names=[\"病人\", \"医生\"]), and \"utterance\"(string) for each turn. 
(ClassLable(names=[\"Patient\", \"Doctor\"]) for english)", "### Data Splits\n\nThere are no data splits on the original data", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@article{ju2020CovidDialog,\n title={CovidDialog: Medical Dialogue Datasets about COVID-19},\n author={Ju, Zeqian and Chakravorty, Subrato and He, Xuehai and Chen, Shu and Yang, Xingyi and Xie, Pengtao},\n journal={ URL \n year={2020}\n}", "### Contributions\n\nThanks to @vrindaprabhu for adding this dataset." ]
[ 122, 10, 120, 39, 154, 50, 23, 6, 67, 257, 15, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 87, 18 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-closed-domain-qa #annotations_creators-found #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-English #language-Chinese #license-unknown #arxiv-2005.05442 #region-us \n# Dataset Card for [Dataset Name]## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: The data is also present in the same GIT repository\n- Paper: URL\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nCOVID-Dialogue-Dataset-English is an English medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 603 consultations.\n\nCOVID-Dialogue-Dataset-Chinese is a Chinese medical dialogue dataset about COVID-19 and other types of pneumonia. Patients who are concerned that they may be infected by COVID-19 or other pneumonia consult doctors and doctors provide advice. There are 1393 consultations.\n\nThe dataset is present as a single text file. URL for Chinese and URL for English.### Supported Tasks and Leaderboards\n\nUsed for QA tasks. There is also a COVID-19 dialogue generation model available for the Chinese Data. The pre-print and more information is available in this arxiv pre-print.", "passage: ### Languages\n\nMonolingual. The datasets are in English (EN) and Chinese (ZH)## Dataset Structure### Data Instances\n\nAn example of dialogue is:\n\n\n\nThe dataset is built from URL, URL, URL and all copyrights of the data belong to these websites. _(for English)_\n\nThe dataset is built from URL and all copyrights of the data belong to URL. _(for Chinese)_### Data Fields\n\nEach consultation consists of the below:\n- ID\n- URL\n- Description of patient’s medical condition\n- Dialogue\n- Diagnosis and suggestions (Optional, mostly for Chinese)\n\nFor generating the QA only the below fields have been considered:\n- ID : Consultatation Identifier (restarts for each file)\n- URL: The url link of the extracted conversation\n- Dialogue : The conversation between the doctor and the patient.\n\nThese are arranged as below in the prepared dataset. Each item will be represented with these parameters.\n\n- \"file_name\": string - signifies the file from which the conversation was extracted\n- \"dialogue_id\": int32 - the dialogue id\n- \"dialogue_url\": string - url of the conversation\n- \"dialogue_turns\": datasets.Sequence - sequence of dialogues between patient and the doctor.Consists ClassLabel(names=[\"病人\", \"医生\"]), and \"utterance\"(string) for each turn. 
(ClassLable(names=[\"Patient\", \"Doctor\"]) for english)### Data Splits\n\nThere are no data splits on the original data## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators" ]
bf8fe190b00b3142fc729259529b3e795185f62f
# Dataset Card for COVID-19 日本語Twitterデータセット (COVID-19 Japanese Twitter Dataset) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [COVID-19 日本語Twitterデータセット homepage](http://www.db.info.gifu-u.ac.jp/data/Data_5f02db873363f976fce930d1) - **Repository:** [N/A] - **Paper:** [N/A] - **Leaderboard:** [N/A] - **Point of Contact:** Check the homepage. ### Dataset Summary 53,640 Japanese tweets annotated with whether or not each tweet is related to COVID-19. The annotation is by majority decision among 5 - 10 crowd workers. Target tweets include "COVID" or "コロナ". The period of the tweets is from around January 2020 to around June 2020. The original tweets are not included. Please use the Twitter API to retrieve them, for example. ### Supported Tasks and Leaderboards Text classification: whether the tweet is related to COVID-19, and whether it is fact or opinion. ### Languages The text that can be retrieved using the IDs in this dataset is Japanese, posted on Twitter. ## Dataset Structure ### Data Instances A CSV file in which the first column is the Twitter ID and the second column is the assessment option ID. ### Data Fields - `tweet_id`: Twitter ID. - `assessment_option_id`: The selection result. It has the following meanings: - 63: a general fact: generally published information, such as news. - 64: a personal fact: personal news. For example, a person heard that their next-door neighbor, XX, has been infected with COVID-19, and this has not been reported in the news. - 65: an opinion/feeling - 66: difficult to determine if it is related to COVID-19 (the tweet is definitely not "67: unrelated", but 63, 64, or 65 cannot be determined) - 67: unrelated - 68: it is a fact, but it is difficult to determine whether it is a general fact, a personal fact, or an impression (it may also be irrelevant to COVID-19, since it is indistinguishable between 63 - 65 and 67). ### Data Splits No articles have been published for this dataset, and it appears that the author of the dataset intends to publish one (it is not certain that splitting information will be included). Therefore, at this time, information on data splits is not provided. ## Dataset Creation ### Curation Rationale [More Information Needed] (the paper has not yet been published). ### Source Data #### Initial Data Collection and Normalization 53,640 Japanese tweets annotated with whether or not each tweet is related to COVID-19. Target tweets include "COVID" or "コロナ". The period of the tweets is from around January 2020 to around June 2020. #### Who are the source language producers? The language producers are users of Twitter.
### Annotations #### Annotation process The annotation is by majority decision among 5 - 10 crowd workers. #### Who are the annotators? Crowd workers. ### Personal and Sensitive Information The dataset does not contain the original tweets. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset is hosted by Suzuki Laboratory, Gifu University, Japan. ### Licensing Information CC-BY-ND 4.0 ### Citation Information A related paper has not yet been published. The author shows how to cite it as 「鈴木 優: COVID-19 日本語 Twitter データセット ( http://www.db.info.gifu-u.ac.jp/data/Data_5f02db873363f976fce930d1 ) 」. ### Contributions Thanks to [@forest1988](https://github.com/forest1988) for adding this dataset.
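A short, hypothetical loading sketch follows, showing how the `assessment_option_id` labels map back to the option IDs described under Data Fields. It assumes the dataset can be loaded via the Hugging Face `datasets` library as `covid_tweets_japanese`; the tweet texts themselves still have to be hydrated separately through the Twitter API.

```python
from datasets import load_dataset

# Illustrative sketch: load the annotation table
# (assumes the dataset is published on the Hugging Face Hub as "covid_tweets_japanese").
tweets = load_dataset("covid_tweets_japanese", split="train")

# assessment_option_id is a ClassLabel whose names are the option IDs "63"-"68"
# explained in the Data Fields section above.
option_names = tweets.features["assessment_option_id"].names

row = tweets[0]
print(row["tweet_id"], option_names[row["assessment_option_id"]])
```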
covid_tweets_japanese
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ja", "license:cc-by-nd-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["ja"], "license": ["cc-by-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "pretty_name": "COVID-19 \u65e5\u672c\u8a9eTwitter\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8 (COVID-19 Japanese Twitter Dataset)", "dataset_info": {"features": [{"name": "tweet_id", "dtype": "string"}, {"name": "assessment_option_id", "dtype": {"class_label": {"names": {"0": "63", "1": "64", "2": "65", "3": "66", "4": "67", "5": "68"}}}}], "splits": [{"name": "train", "num_bytes": 1662833, "num_examples": 53639}], "download_size": 406005, "dataset_size": 1662833}}
2024-01-18T09:46:42+00:00
[]
[ "ja" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Japanese #license-cc-by-nd-4.0 #region-us
# Dataset Card for COVID-19 日本語Twitterデータセット (COVID-19 Japanese Twitter Dataset) ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: COVID-19 日本語Twitterデータセット homepage - Repository: [N/A] - Paper: [N/A] - Leaderboard: [N/A] - Point of Contact: Check the homepage. ### Dataset Summary 53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. The annotation is by majority decision by 5 - 10 crowd workers. Target tweets include "COVID" or "コロナ". The period of the tweets is from around January 2020 to around June 2020. The original tweets are not contained. Please use Twitter API to get them, for example. ### Supported Tasks and Leaderboards Text-classification, Whether the tweet is related to COVID-19, and whether it is fact or opinion. ### Languages The text can be gotten using the IDs in this dataset is Japanese, posted on Twitter. ## Dataset Structure ### Data Instances CSV file with the 1st column is Twitter ID and the 2nd column is assessment option ID. ### Data Fields - 'tweet_id': Twitter ID. - 'assessment_option_id': The selection result. It has the following meanings: - 63: a general fact: generally published information, such as news. - 64: a personal fact: personal news. For example, a person heard that the next-door neighbor, XX, has infected COVID-19, which has not been in a news. - 65: an opinion/feeling - 66: difficult to determine if they are related to COVID-19 (it is definitely the tweet is not "67: unrelated", but 63, 64, 65 cannot be determined) - 67: unrelated - 68: it is a fact, but difficult to determine whether general facts, personal facts, or impressions (it may be irrelevant to COVID-19 since it is indistinguishable between 63 - 65 and 67). ### Data Splits No articles have been published for this dataset, and it appears that the author of the dataset is willing to publish an article (it is not certain that the splitting information will be included). Therefore, at this time, information on data splits is not provided. ## Dataset Creation ### Curation Rationale because the paper is not yet published. ### Source Data #### Initial Data Collection and Normalization 53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. Target tweets include "COVID" or "コロナ". The period of the tweets is from around January 2020 to around June 2020. #### Who are the source language producers? The language producers are users of Twitter. ### Annotations #### Annotation process The annotation is by majority decision by 5 - 10 crowd workers. #### Who are the annotators? Crowd workers. ### Personal and Sensitive Information The author does not contain original tweets. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The dataset is hosted by Suzuki Laboratory, Gifu University, Japan. ### Licensing Information CC-BY-ND 4.0 A related paper has not yet published. The author shows how to cite as「鈴木 優: COVID-19 日本語 Twitter データセット ( URL ) 」. 
### Contributions Thanks to @forest1988 for adding this dataset.
[ "# Dataset Card for COVID-19 日本語Twitterデータセット (COVID-19 Japanese Twitter Dataset)", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: COVID-19 日本語Twitterデータセット homepage\n- Repository: [N/A]\n- Paper: [N/A]\n- Leaderboard: [N/A]\n- Point of Contact: Check the homepage.", "### Dataset Summary\n\n53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. The annotation is by majority decision by 5 - 10 crowd workers. Target tweets include \"COVID\" or \"コロナ\". The period of the tweets is from around January 2020 to around June 2020. The original tweets are not contained. Please use Twitter API to get them, for example.", "### Supported Tasks and Leaderboards\n\nText-classification, Whether the tweet is related to COVID-19, and whether it is fact or opinion.", "### Languages\n\nThe text can be gotten using the IDs in this dataset is Japanese, posted on Twitter.", "## Dataset Structure", "### Data Instances\n\nCSV file with the 1st column is Twitter ID and the 2nd column is assessment option ID.", "### Data Fields\n\n- 'tweet_id': Twitter ID.\n- 'assessment_option_id': The selection result. It has the following meanings:\n - 63: a general fact: generally published information, such as news.\n - 64: a personal fact: personal news. For example, a person heard that the next-door neighbor, XX, has infected COVID-19, which has not been in a news.\n - 65: an opinion/feeling\n - 66: difficult to determine if they are related to COVID-19 (it is definitely the tweet is not \"67: unrelated\", but 63, 64, 65 cannot be determined)\n - 67: unrelated\n - 68: it is a fact, but difficult to determine whether general facts, personal facts, or impressions (it may be irrelevant to COVID-19 since it is indistinguishable between 63 - 65 and 67).", "### Data Splits\n\nNo articles have been published for this dataset, and it appears that the author of the dataset is willing to publish an article (it is not certain that the splitting information will be included). Therefore, at this time, information on data splits is not provided.", "## Dataset Creation", "### Curation Rationale\n\n because the paper is not yet published.", "### Source Data", "#### Initial Data Collection and Normalization\n\n53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. Target tweets include \"COVID\" or \"コロナ\". 
The period of the tweets is from around January 2020 to around June 2020.", "#### Who are the source language producers?\n\nThe language producers are users of Twitter.", "### Annotations", "#### Annotation process\n\nThe annotation is by majority decision by 5 - 10 crowd workers.", "#### Who are the annotators?\n\nCrowd workers.", "### Personal and Sensitive Information\n\nThe author does not contain original tweets.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset is hosted by Suzuki Laboratory, Gifu University, Japan.", "### Licensing Information\n\nCC-BY-ND 4.0\n\n\n\nA related paper has not yet published.\nThe author shows how to cite as「鈴木 優: COVID-19 日本語 Twitter データセット ( URL ) 」.", "### Contributions\n\nThanks to @forest1988 for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Japanese #license-cc-by-nd-4.0 #region-us \n", "# Dataset Card for COVID-19 日本語Twitterデータセット (COVID-19 Japanese Twitter Dataset)", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: COVID-19 日本語Twitterデータセット homepage\n- Repository: [N/A]\n- Paper: [N/A]\n- Leaderboard: [N/A]\n- Point of Contact: Check the homepage.", "### Dataset Summary\n\n53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. The annotation is by majority decision by 5 - 10 crowd workers. Target tweets include \"COVID\" or \"コロナ\". The period of the tweets is from around January 2020 to around June 2020. The original tweets are not contained. Please use Twitter API to get them, for example.", "### Supported Tasks and Leaderboards\n\nText-classification, Whether the tweet is related to COVID-19, and whether it is fact or opinion.", "### Languages\n\nThe text can be gotten using the IDs in this dataset is Japanese, posted on Twitter.", "## Dataset Structure", "### Data Instances\n\nCSV file with the 1st column is Twitter ID and the 2nd column is assessment option ID.", "### Data Fields\n\n- 'tweet_id': Twitter ID.\n- 'assessment_option_id': The selection result. It has the following meanings:\n - 63: a general fact: generally published information, such as news.\n - 64: a personal fact: personal news. For example, a person heard that the next-door neighbor, XX, has infected COVID-19, which has not been in a news.\n - 65: an opinion/feeling\n - 66: difficult to determine if they are related to COVID-19 (it is definitely the tweet is not \"67: unrelated\", but 63, 64, 65 cannot be determined)\n - 67: unrelated\n - 68: it is a fact, but difficult to determine whether general facts, personal facts, or impressions (it may be irrelevant to COVID-19 since it is indistinguishable between 63 - 65 and 67).", "### Data Splits\n\nNo articles have been published for this dataset, and it appears that the author of the dataset is willing to publish an article (it is not certain that the splitting information will be included). Therefore, at this time, information on data splits is not provided.", "## Dataset Creation", "### Curation Rationale\n\n because the paper is not yet published.", "### Source Data", "#### Initial Data Collection and Normalization\n\n53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. Target tweets include \"COVID\" or \"コロナ\". 
The period of the tweets is from around January 2020 to around June 2020.", "#### Who are the source language producers?\n\nThe language producers are users of Twitter.", "### Annotations", "#### Annotation process\n\nThe annotation is by majority decision by 5 - 10 crowd workers.", "#### Who are the annotators?\n\nCrowd workers.", "### Personal and Sensitive Information\n\nThe author does not contain original tweets.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset is hosted by Suzuki Laboratory, Gifu University, Japan.", "### Licensing Information\n\nCC-BY-ND 4.0\n\n\n\nA related paper has not yet published.\nThe author shows how to cite as「鈴木 優: COVID-19 日本語 Twitter データセット ( URL ) 」.", "### Contributions\n\nThanks to @forest1988 for adding this dataset." ]
[ 93, 22, 120, 54, 90, 33, 25, 6, 31, 193, 60, 5, 15, 4, 60, 19, 5, 19, 13, 17, 8, 7, 8, 7, 5, 24, 48, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Japanese #license-cc-by-nd-4.0 #region-us \n# Dataset Card for COVID-19 日本語Twitterデータセット (COVID-19 Japanese Twitter Dataset)## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: COVID-19 日本語Twitterデータセット homepage\n- Repository: [N/A]\n- Paper: [N/A]\n- Leaderboard: [N/A]\n- Point of Contact: Check the homepage.### Dataset Summary\n\n53,640 Japanese tweets with annotation if a tweet is related to COVID-19 or not. The annotation is by majority decision by 5 - 10 crowd workers. Target tweets include \"COVID\" or \"コロナ\". The period of the tweets is from around January 2020 to around June 2020. The original tweets are not contained. Please use Twitter API to get them, for example.### Supported Tasks and Leaderboards\n\nText-classification, Whether the tweet is related to COVID-19, and whether it is fact or opinion.### Languages\n\nThe text can be gotten using the IDs in this dataset is Japanese, posted on Twitter.## Dataset Structure### Data Instances\n\nCSV file with the 1st column is Twitter ID and the 2nd column is assessment option ID." ]
369b47c4c20aff1193b8edeeedc37d14ae28226b
# Dataset Card for covost2

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/facebookresearch/covost
- **Repository:** https://github.com/facebookresearch/covost
- **Paper:** https://arxiv.org/abs/2007.10310
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Changhan Wang (changhan@fb.com), Juan Miguel Pino (juancarabina@fb.com), Jiatao Gu (jgu@fb.com)

### Dataset Summary

CoVoST 2 is a large-scale multilingual speech translation corpus covering translations from 21 languages into English and from English into 15 languages. The dataset is created using Mozilla's open-source Common Voice database of crowdsourced voice recordings. There are 2,900 hours of speech represented in the corpus.

### Supported Tasks and Leaderboards

`speech-translation`: The dataset can be used for Speech-to-text translation (ST). The model is presented with an audio file in one language and asked to transcribe the audio file to written text in another language. The most common evaluation metric is the BLEU score. Examples can be found at https://github.com/pytorch/fairseq/blob/master/examples/speech_to_text/docs/covost_example.md.

### Languages

The dataset contains the audio, transcriptions, and translations in the following languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian, Chinese, Welsh, Catalan, Slovenian, Estonian, Indonesian, Arabic, Tamil, Portuguese, Latvian, and Japanese.

## Dataset Structure

### Data Instances

A typical data point comprises the path to the audio file, usually called `file`, its transcription, called `sentence`, and the translation in the target language, called `translation`.

```
{'client_id': 'd277a1f3904ae00b09b73122b87674e7c2c78e08120721f37b5577013ead08d1ea0c053ca5b5c2fb948df2c81f27179aef2c741057a17249205d251a8fe0e658',
 'file': '/home/suraj/projects/fairseq_s2t/covst/dataset/en/clips/common_voice_en_18540003.mp3',
 'audio': {'path': '/home/suraj/projects/fairseq_s2t/covst/dataset/en/clips/common_voice_en_18540003.mp3',
           'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
           'sampling_rate': 48000},
 'id': 'common_voice_en_18540003',
 'sentence': 'When water is scarce, avoid wasting it.',
 'translation': 'Wenn Wasser knapp ist, verschwenden Sie es nicht.'}
```

### Data Fields

- file: A path to the downloaded audio file in .mp3 format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (see the short loading sketch after this card).
- sentence: The transcription of the audio file in the source language.
- translation: The translation of the transcription in the target language.
- id: unique id of the data sample.

### Data Splits

| config   | train  | validation | test  |
|----------|--------|------------|-------|
| en_de    | 289430 | 15531      | 15531 |
| en_tr    | 289430 | 15531      | 15531 |
| en_fa    | 289430 | 15531      | 15531 |
| en_sv-SE | 289430 | 15531      | 15531 |
| en_mn    | 289430 | 15531      | 15531 |
| en_zh-CN | 289430 | 15531      | 15531 |
| en_cy    | 289430 | 15531      | 15531 |
| en_ca    | 289430 | 15531      | 15531 |
| en_sl    | 289430 | 15531      | 15531 |
| en_et    | 289430 | 15531      | 15531 |
| en_id    | 289430 | 15531      | 15531 |
| en_ar    | 289430 | 15531      | 15531 |
| en_ta    | 289430 | 15531      | 15531 |
| en_lv    | 289430 | 15531      | 15531 |
| en_ja    | 289430 | 15531      | 15531 |
| fr_en    | 207374 | 14760      | 14760 |
| de_en    | 127834 | 13511      | 13511 |
| es_en    | 79015  | 13221      | 13221 |
| ca_en    | 95854  | 12730      | 12730 |
| it_en    | 31698  | 8940       | 8951  |
| ru_en    | 12112  | 6110       | 6300  |
| zh-CN_en | 7085   | 4843       | 4898  |
| pt_en    | 9158   | 3318       | 4023  |
| fa_en    | 53949  | 3445       | 3445  |
| et_en    | 1782   | 1576       | 1571  |
| mn_en    | 2067   | 1761       | 1759  |
| nl_en    | 7108   | 1699       | 1699  |
| tr_en    | 3966   | 1624       | 1629  |
| ar_en    | 2283   | 1758       | 1695  |
| sv-SE_en | 2160   | 1349       | 1595  |
| lv_en    | 2337   | 1125       | 1629  |
| sl_en    | 1843   | 509        | 360   |
| ta_en    | 1358   | 384        | 786   |
| ja_en    | 1119   | 635        | 684   |
| id_en    | 1243   | 792        | 844   |
| cy_en    | 1241   | 690        | 690   |

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

The dataset consists of recordings from people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[CC BY-NC 4.0](https://github.com/facebookresearch/covost/blob/main/LICENSE)

### Citation Information

```
@misc{wang2020covost,
    title={CoVoST 2: A Massively Multilingual Speech-to-Text Translation Corpus},
    author={Changhan Wang and Anne Wu and Juan Pino},
    year={2020},
    eprint={2007.10310},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
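To make the audio-access pattern described under Data Fields concrete, here is a minimal loading sketch using the Hugging Face `datasets` library. It is an illustration, not an official recipe: the `en_de` config name comes from the splits table above, while the `data_dir` value is a placeholder, since CoVoST 2 builds on Common Voice audio that may need to be downloaded separately.

```python
from datasets import load_dataset

# Placeholder path: point data_dir at a local Common Voice download for the
# source language if the loader asks for one.
ds = load_dataset("covost2", "en_de",
                  data_dir="path/to/common_voice/en",
                  split="validation")

sample = ds[0]                # query the sample index first ...
audio = sample["audio"]       # ... then access "audio": only this file is decoded/resampled
print(audio["sampling_rate"], audio["array"].shape)
print(sample["sentence"])     # source-language transcription
print(sample["translation"])  # target-language translation
```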
covost2
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:extended|other-common-voice", "language:ar", "language:ca", "language:cy", "language:de", "language:es", "language:et", "language:fa", "language:fr", "language:id", "language:it", "language:ja", "language:lv", "language:mn", "language:nl", "language:pt", "language:ru", "language:sl", "language:sv", "language:ta", "language:tr", "language:zh", "license:cc-by-nc-4.0", "arxiv:2007.10310", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["ar", "ca", "cy", "de", "es", "et", "fa", "fr", "id", "it", "ja", "lv", "mn", "nl", "pt", "ru", "sl", "sv", "ta", "tr", "zh"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|other-common-voice"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "CoVoST 2", "language_bcp47": ["sv-SE", "zh-CN"], "dataset_info": [{"config_name": "en_de", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 110716293, "num_examples": 289430}, {"name": "validation", "num_bytes": 5971731, "num_examples": 15531}, {"name": "test", "num_bytes": 5689684, "num_examples": 15531}], "download_size": 25779505, "dataset_size": 122377708}, {"config_name": "en_tr", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 109474265, "num_examples": 289430}, {"name": "validation", "num_bytes": 5914622, "num_examples": 15531}, {"name": "test", "num_bytes": 5619271, "num_examples": 15531}], "download_size": 23659131, "dataset_size": 121008158}, {"config_name": "en_fa", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 119490720, "num_examples": 289430}, {"name": "validation", "num_bytes": 6423535, "num_examples": 15531}, {"name": "test", "num_bytes": 6103617, "num_examples": 15531}], "download_size": 26148420, "dataset_size": 132017872}, {"config_name": "en_sv-SE", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 108557530, "num_examples": 289430}, {"name": "validation", "num_bytes": 5845918, "num_examples": 15531}, {"name": "test", "num_bytes": 5580039, "num_examples": 15531}], "download_size": 23671482, "dataset_size": 119983487}, {"config_name": "en_mn", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 123950136, "num_examples": 289430}, {"name": "validation", "num_bytes": 6693044, "num_examples": 15531}, {"name": "test", "num_bytes": 6293633, "num_examples": 15531}], "download_size": 27527436, "dataset_size": 136936813}, {"config_name": "en_zh-CN", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 106490939, "num_examples": 289430}, {"name": "validation", "num_bytes": 5735331, "num_examples": 15531}, {"name": "test", "num_bytes": 5487808, "num_examples": 15531}], "download_size": 24280932, 
"dataset_size": 117714078}, {"config_name": "en_cy", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 109317182, "num_examples": 289430}, {"name": "validation", "num_bytes": 5894579, "num_examples": 15531}, {"name": "test", "num_bytes": 5626428, "num_examples": 15531}], "download_size": 24224499, "dataset_size": 120838189}, {"config_name": "en_ca", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 109922455, "num_examples": 289430}, {"name": "validation", "num_bytes": 5924345, "num_examples": 15531}, {"name": "test", "num_bytes": 5623227, "num_examples": 15531}], "download_size": 24167201, "dataset_size": 121470027}, {"config_name": "en_sl", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 107987860, "num_examples": 289430}, {"name": "validation", "num_bytes": 5838299, "num_examples": 15531}, {"name": "test", "num_bytes": 5537805, "num_examples": 15531}], "download_size": 23421999, "dataset_size": 119363964}, {"config_name": "en_et", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 107707024, "num_examples": 289430}, {"name": "validation", "num_bytes": 5810185, "num_examples": 15531}, {"name": "test", "num_bytes": 5543309, "num_examples": 15531}], "download_size": 23223843, "dataset_size": 119060518}, {"config_name": "en_id", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 109456930, "num_examples": 289430}, {"name": "validation", "num_bytes": 5896953, "num_examples": 15531}, {"name": "test", "num_bytes": 5634939, "num_examples": 15531}], "download_size": 22904065, "dataset_size": 120988822}, {"config_name": "en_ar", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 116732296, "num_examples": 289430}, {"name": "validation", "num_bytes": 6280190, "num_examples": 15531}, {"name": "test", "num_bytes": 5947069, "num_examples": 15531}], "download_size": 25301304, "dataset_size": 128959555}, {"config_name": "en_ta", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 146318684, "num_examples": 289430}, {"name": "validation", "num_bytes": 7944020, "num_examples": 15531}, {"name": "test", "num_bytes": 7411400, "num_examples": 15531}], "download_size": 30037790, "dataset_size": 
161674104}, {"config_name": "en_lv", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 109532576, "num_examples": 289430}, {"name": "validation", "num_bytes": 5905197, "num_examples": 15531}, {"name": "test", "num_bytes": 5625189, "num_examples": 15531}], "download_size": 24573927, "dataset_size": 121062962}, {"config_name": "en_ja", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 114741253, "num_examples": 289430}, {"name": "validation", "num_bytes": 6161930, "num_examples": 15531}, {"name": "test", "num_bytes": 5883608, "num_examples": 15531}], "download_size": 26664247, "dataset_size": 126786791}, {"config_name": "fr_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 75792665, "num_examples": 207374}, {"name": "validation", "num_bytes": 5487082, "num_examples": 14760}, {"name": "test", "num_bytes": 5525498, "num_examples": 14760}], "download_size": 7282129, "dataset_size": 86805245}, {"config_name": "de_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 47678171, "num_examples": 127834}, {"name": "validation", "num_bytes": 5106253, "num_examples": 13511}, {"name": "test", "num_bytes": 5066500, "num_examples": 13511}], "download_size": 9926797, "dataset_size": 57850924}, {"config_name": "es_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29152515, "num_examples": 79015}, {"name": "validation", "num_bytes": 4974593, "num_examples": 13221}, {"name": "test", "num_bytes": 4983920, "num_examples": 13221}], "download_size": 3202080, "dataset_size": 39111028}, {"config_name": "ca_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35902579, "num_examples": 95854}, {"name": "validation", "num_bytes": 4798435, "num_examples": 12730}, {"name": "test", "num_bytes": 4804941, "num_examples": 12730}], "download_size": 5021926, "dataset_size": 45505955}, {"config_name": "it_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11952709, "num_examples": 31698}, {"name": "validation", "num_bytes": 3393315, "num_examples": 8940}, {"name": "test", "num_bytes": 3412207, "num_examples": 8951}], "download_size": 1691247, "dataset_size": 18758231}, {"config_name": "ru_en", 
"features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5610194, "num_examples": 12112}, {"name": "validation", "num_bytes": 2819414, "num_examples": 6110}, {"name": "test", "num_bytes": 2923961, "num_examples": 6300}], "download_size": 1443078, "dataset_size": 11353569}, {"config_name": "zh-CN_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2791288, "num_examples": 7085}, {"name": "validation", "num_bytes": 1918796, "num_examples": 4843}, {"name": "test", "num_bytes": 1908633, "num_examples": 4898}], "download_size": 587550, "dataset_size": 6618717}, {"config_name": "pt_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3095722, "num_examples": 9158}, {"name": "validation", "num_bytes": 1133404, "num_examples": 3318}, {"name": "test", "num_bytes": 1384251, "num_examples": 4023}], "download_size": 476419, "dataset_size": 5613377}, {"config_name": "fa_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18015738, "num_examples": 53949}, {"name": "validation", "num_bytes": 1241531, "num_examples": 3445}, {"name": "test", "num_bytes": 1263271, "num_examples": 3445}], "download_size": 3864623, "dataset_size": 20520540}, {"config_name": "et_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 808508, "num_examples": 1782}, {"name": "validation", "num_bytes": 690694, "num_examples": 1576}, {"name": "test", "num_bytes": 685375, "num_examples": 1571}], "download_size": 246569, "dataset_size": 2184577}, {"config_name": "mn_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 900588, "num_examples": 2067}, {"name": "validation", "num_bytes": 765543, "num_examples": 1761}, {"name": "test", "num_bytes": 762577, "num_examples": 1759}], "download_size": 189710, "dataset_size": 2428708}, {"config_name": "nl_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2468140, "num_examples": 7108}, {"name": "validation", "num_bytes": 594458, "num_examples": 1699}, {"name": "test", "num_bytes": 594979, "num_examples": 1699}], "download_size": 543795, "dataset_size": 3657577}, {"config_name": "tr_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, 
{"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1391148, "num_examples": 3966}, {"name": "validation", "num_bytes": 566458, "num_examples": 1624}, {"name": "test", "num_bytes": 570760, "num_examples": 1629}], "download_size": 280904, "dataset_size": 2528366}, {"config_name": "ar_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 743065, "num_examples": 2283}, {"name": "validation", "num_bytes": 575077, "num_examples": 1758}, {"name": "test", "num_bytes": 552356, "num_examples": 1695}], "download_size": 109802, "dataset_size": 1870498}, {"config_name": "sv-SE_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 698800, "num_examples": 2160}, {"name": "validation", "num_bytes": 438319, "num_examples": 1349}, {"name": "test", "num_bytes": 517738, "num_examples": 1595}], "download_size": 96161, "dataset_size": 1654857}, {"config_name": "lv_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 747290, "num_examples": 2337}, {"name": "validation", "num_bytes": 360941, "num_examples": 1125}, {"name": "test", "num_bytes": 519183, "num_examples": 1629}], "download_size": 88836, "dataset_size": 1627414}, {"config_name": "sl_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 602420, "num_examples": 1843}, {"name": "validation", "num_bytes": 165977, "num_examples": 509}, {"name": "test", "num_bytes": 115414, "num_examples": 360}], "download_size": 58445, "dataset_size": 883811}, {"config_name": "ta_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 534564, "num_examples": 1358}, {"name": "validation", "num_bytes": 150428, "num_examples": 384}, {"name": "test", "num_bytes": 303843, "num_examples": 786}], "download_size": 55659, "dataset_size": 988835}, {"config_name": "ja_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 396334, "num_examples": 1119}, {"name": "validation", "num_bytes": 226054, "num_examples": 635}, {"name": "test", "num_bytes": 241310, "num_examples": 684}], "download_size": 54666, "dataset_size": 863698}, {"config_name": "id_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], 
"splits": [{"name": "train", "num_bytes": 406989, "num_examples": 1243}, {"name": "validation", "num_bytes": 259134, "num_examples": 792}, {"name": "test", "num_bytes": 277053, "num_examples": 844}], "download_size": 51755, "dataset_size": 943176}, {"config_name": "cy_en", "features": [{"name": "client_id", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 432071, "num_examples": 1241}, {"name": "validation", "num_bytes": 236107, "num_examples": 690}, {"name": "test", "num_bytes": 236713, "num_examples": 690}], "download_size": 875557, "dataset_size": 904891}]}
2024-01-18T11:02:25+00:00
[ "2007.10310" ]
[ "ar", "ca", "cy", "de", "es", "et", "fa", "fr", "id", "it", "ja", "lv", "mn", "nl", "pt", "ru", "sl", "sv", "ta", "tr", "zh" ]
TAGS #task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-extended|other-common-voice #language-Arabic #language-Catalan #language-Welsh #language-German #language-Spanish #language-Estonian #language-Persian #language-French #language-Indonesian #language-Italian #language-Japanese #language-Latvian #language-Mongolian #language-Dutch #language-Portuguese #language-Russian #language-Slovenian #language-Swedish #language-Tamil #language-Turkish #language-Chinese #license-cc-by-nc-4.0 #arxiv-2007.10310 #region-us
Dataset Card for covost2 ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: * Point of Contact: Changhan Wang (changhan@URL), Juan Miguel Pino (juancarabina@URL), Jiatao Gu (jgu@URL) ### Dataset Summary CoVoST 2 is a large-scale multilingual speech translation corpus covering translations from 21 languages into English and from English into 15 languages. The dataset is created using Mozillas open-source Common Voice database of crowdsourced voice recordings. There are 2,900 hours of speech represented in the corpus. ### Supported Tasks and Leaderboards 'speech-translation': The dataset can be used for Speech-to-text translation (ST). The model is presented with an audio file in one language and asked to transcribe the audio file to written text in another language. The most common evaluation metric is the BLEU score. Examples can be found at URL . ### Languages The dataset contains the audio, transcriptions, and translations in the following languages, French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian, Chinese, Welsh, Catalan, Slovenian, Estonian, Indonesian, Arabic, Tamil, Portuguese, Latvian, and Japanese. Dataset Structure ----------------- ### Data Instances A typical data point comprises the path to the audio file, usually called 'file', its transcription, called 'sentence', and the translation in target language called 'translation'. ### Data Fields * file: A path to the downloaded audio file in .mp3 format. * audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. * sentence: The transcription of the audio file in source language. * translation: The transcription of the audio file in the target language. * id: unique id of the data sample. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information CC BY-NC 4.0 ### Contributions Thanks to @patil-suraj for adding this dataset.
[ "### Dataset Summary\n\n\nCoVoST 2 is a large-scale multilingual speech translation corpus covering translations from 21 languages into English \n\nand from English into 15 languages. The dataset is created using Mozilla\u0019s open-source Common Voice database of \n\ncrowdsourced voice recordings. There are 2,900 hours of speech represented in the corpus.", "### Supported Tasks and Leaderboards\n\n\n'speech-translation': The dataset can be used for Speech-to-text translation (ST). The model is presented with an audio file in one language and asked to transcribe the audio file to written text in another language. The most common evaluation metric is the BLEU score. Examples can be found at URL .", "### Languages\n\n\nThe dataset contains the audio, transcriptions, and translations in the following languages, French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian, Chinese, Welsh, Catalan, Slovenian, Estonian, Indonesian, Arabic, Tamil, Portuguese, Latvian, and Japanese.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises the path to the audio file, usually called 'file', its transcription, called 'sentence', and the translation in target language called 'translation'.", "### Data Fields\n\n\n* file: A path to the downloaded audio file in .mp3 format.\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* sentence: The transcription of the audio file in source language.\n* translation: The transcription of the audio file in the target language.\n* id: unique id of the data sample.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCC BY-NC 4.0", "### Contributions\n\n\nThanks to @patil-suraj for adding this dataset." ]
[ "TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-extended|other-common-voice #language-Arabic #language-Catalan #language-Welsh #language-German #language-Spanish #language-Estonian #language-Persian #language-French #language-Indonesian #language-Italian #language-Japanese #language-Latvian #language-Mongolian #language-Dutch #language-Portuguese #language-Russian #language-Slovenian #language-Swedish #language-Tamil #language-Turkish #language-Chinese #license-cc-by-nc-4.0 #arxiv-2007.10310 #region-us \n", "### Dataset Summary\n\n\nCoVoST 2 is a large-scale multilingual speech translation corpus covering translations from 21 languages into English \n\nand from English into 15 languages. The dataset is created using Mozilla\u0019s open-source Common Voice database of \n\ncrowdsourced voice recordings. There are 2,900 hours of speech represented in the corpus.", "### Supported Tasks and Leaderboards\n\n\n'speech-translation': The dataset can be used for Speech-to-text translation (ST). The model is presented with an audio file in one language and asked to transcribe the audio file to written text in another language. The most common evaluation metric is the BLEU score. Examples can be found at URL .", "### Languages\n\n\nThe dataset contains the audio, transcriptions, and translations in the following languages, French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian, Chinese, Welsh, Catalan, Slovenian, Estonian, Indonesian, Arabic, Tamil, Portuguese, Latvian, and Japanese.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises the path to the audio file, usually called 'file', its transcription, called 'sentence', and the translation in target language called 'translation'.", "### Data Fields\n\n\n* file: A path to the downloaded audio file in .mp3 format.\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* sentence: The transcription of the audio file in source language.\n* translation: The transcription of the audio file in the target language.\n* id: unique id of the data sample.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. 
You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCC BY-NC 4.0", "### Contributions\n\n\nThanks to @patil-suraj for adding this dataset." ]
[ 226, 76, 82, 85, 47, 235, 11, 7, 4, 10, 10, 5, 5, 9, 50, 7, 8, 14, 6, 11, 19 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-extended|other-common-voice #language-Arabic #language-Catalan #language-Welsh #language-German #language-Spanish #language-Estonian #language-Persian #language-French #language-Indonesian #language-Italian #language-Japanese #language-Latvian #language-Mongolian #language-Dutch #language-Portuguese #language-Russian #language-Slovenian #language-Swedish #language-Tamil #language-Turkish #language-Chinese #license-cc-by-nc-4.0 #arxiv-2007.10310 #region-us \n### Dataset Summary\n\n\nCoVoST 2 is a large-scale multilingual speech translation corpus covering translations from 21 languages into English \n\nand from English into 15 languages. The dataset is created using Mozilla\u0019s open-source Common Voice database of \n\ncrowdsourced voice recordings. There are 2,900 hours of speech represented in the corpus.### Supported Tasks and Leaderboards\n\n\n'speech-translation': The dataset can be used for Speech-to-text translation (ST). The model is presented with an audio file in one language and asked to transcribe the audio file to written text in another language. The most common evaluation metric is the BLEU score. Examples can be found at URL .### Languages\n\n\nThe dataset contains the audio, transcriptions, and translations in the following languages, French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian, Chinese, Welsh, Catalan, Slovenian, Estonian, Indonesian, Arabic, Tamil, Portuguese, Latvian, and Japanese.\n\n\nDataset Structure\n-----------------" ]
66f6a5efd474e35bd7cb94bf15dea27d4c6ad3f8
# Dataset Card for CPPE - 5

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** https://github.com/Rishit-dagli/CPPE-Dataset
- **Paper:** [CPPE-5: Medical Personal Protective Equipment Dataset](https://arxiv.org/abs/2112.09569)
- **Leaderboard:** https://paperswithcode.com/sota/object-detection-on-cppe-5
- **Point of Contact:** rishit.dagli@gmail.com

### Dataset Summary

CPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal of allowing the study of subordinate categorization of medical personal protective equipment, which is not possible with other popular data sets that focus on broad-level categories.

Some features of this dataset are:

* high quality images and annotations (~4.6 bounding boxes per image)
* real-life images unlike any current such dataset
* majority of non-iconic images (allowing easy deployment to real-world environments)

### Supported Tasks and Leaderboards

- `object-detection`: The dataset can be used to train a model for Object Detection. This task has an active leaderboard which can be found at https://paperswithcode.com/sota/object-detection-on-cppe-5. The metrics for this task are adopted from the COCO detection evaluation criteria, and include the mean Average Precision (AP) across IoU thresholds ranging from 0.50 to 0.95 at different scales.

### Languages

English

## Dataset Structure

### Data Instances

A data point comprises an image and its object annotations.

```
{
  'image_id': 15,
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=943x663 at 0x2373B065C18>,
  'width': 943,
  'height': 663,
  'objects': {
    'id': [114, 115, 116, 117],
    'area': [3796, 1596, 152768, 81002],
    'bbox': [
      [302.0, 109.0, 73.0, 52.0],
      [810.0, 100.0, 57.0, 28.0],
      [160.0, 31.0, 248.0, 616.0],
      [741.0, 68.0, 202.0, 401.0]
    ],
    'category': [4, 4, 0, 0]
  }
}
```

### Data Fields

- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]` (see the short loading sketch after this card).
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
  - `id`: the annotation id
  - `area`: the area of the bounding box
  - `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
  - `category`: the object's category, with possible values including `Coverall` (0), `Face_Shield` (1), `Gloves` (2), `Goggles` (3) and `Mask` (4)

### Data Splits

The data is split into a training and a testing set. The training set contains 1000 images and the test set 29 images.

## Dataset Creation

### Curation Rationale

From the paper:
> With CPPE-5 dataset, we hope to facilitate research and use in applications at multiple public places to autonomously identify if a PPE (Personal Protective Equipment) kit has been worn and also which part of the PPE kit has been worn. One of the main aims with this dataset was to also capture a higher ratio of non-iconic images or non-canonical perspectives [5] of the objects in this dataset. We further hope to see high use of this dataset to aid in medical scenarios which would have a huge effect worldwide.

### Source Data

#### Initial Data Collection and Normalization

The images in the CPPE-5 dataset were collected using the following process:

* Obtain Images from Flickr: Following the object categories we identified earlier, we first download images from Flickr and save them at the "Original" size. On Flickr, images are served at multiple different sizes (Square 75, Small 240, Large 1024, X-Large 4K etc.); the "Original" size is an exact copy of the image uploaded by the author.
* Extract relevant metadata: Flickr contains images each with searchable metadata; we extract the following relevant metadata:
  * A direct link to the original image on Flickr
  * Width and height of the image
  * Title given to the image by the author
  * Date and time the image was uploaded on
  * Flickr username of the author of the image
  * Flickr Name of the author of the image
  * Flickr profile of the author of the image
  * The License image is licensed under
  * MD5 hash of the original image
* Obtain Images from Google Images: Due to the reasons we mention earlier, we only collect a very small proportion of images from Google Images. For this set of images we extract the following metadata:
  * A direct link to the original image
  * Width and height of the image
  * MD5 hash of the original image
* Filter inappropriate images: Though very rare in the collected images, we also remove images containing inappropriate content using the safety filters on Flickr and Google Safe Search.
* Filter near-similar images: We then remove near-duplicate images in the dataset using GIST descriptors.

#### Who are the source language producers?

The images for this dataset were collected from Flickr and Google Images.

### Annotations

#### Annotation process

The dataset was labelled in two phases: the first phase included labelling 416 images and the second phase included labelling 613 images. For all the images in the dataset, volunteers were provided the following table:

| Item        | Description |
|-------------|-------------|
| coveralls   | Coveralls are hospital gowns worn by medical professionals in order to provide a barrier between patient and professional; these usually cover most of the exposed skin surfaces of the professional medics. |
| mask        | Mask prevents airborne transmission of infections between patients and/or treating personnel by blocking the movement of pathogens (primarily bacteria and viruses) shed in respiratory droplets and aerosols into and from the wearer’s mouth and nose. |
| face shield | Face shield aims to protect the wearer’s entire face (or part of it) from hazards such as flying objects and road debris, chemical splashes (in laboratories or in industry), or potentially infectious materials (in medical and laboratory environments). |
| gloves      | Gloves are used during medical examinations and procedures to help prevent cross-contamination between caregivers and patients. |
| goggles     | Goggles, or safety glasses, are forms of protective eye wear that usually enclose or protect the area surrounding the eye in order to prevent particulates, water or chemicals from striking the eyes. |

as well as examples of: correctly labelled images, incorrectly labelled images, and not applicable images. Before the labelling task, each volunteer was provided with an exercise to verify if the volunteer was able to correctly identify categories as well as identify if an annotated image is correctly labelled, incorrectly labelled, or not applicable. The labelling process first involved two volunteers independently labelling an image from the dataset. In any of the cases where the number of bounding boxes is different, the labels for one or more of the bounding boxes are different, or the two volunteer annotations are sufficiently different, a third volunteer compiles the result from the two annotations to come up with a correctly labelled image. After this step, a volunteer verifies the bounding box annotations. Following this method of labelling the dataset we ensured that all images were labelled accurately and contained exhaustive annotations. As a result of this, our dataset consists of 1029 high-quality, majorly non-iconic, and accurately annotated images.

#### Who are the annotators?

In both the phases crowd-sourcing techniques were used with multiple volunteers labelling the dataset using the open-source tool LabelImg.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Dagli, Rishit, and Ali Mustufa Shaikh.

### Licensing Information

[More Information Needed]

### Citation Information

```
@misc{dagli2021cppe5,
    title={CPPE-5: Medical Personal Protective Equipment Dataset},
    author={Rishit Dagli and Ali Mustufa Shaikh},
    year={2021},
    eprint={2112.09569},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```

### Contributions

Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
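As a companion to the Data Fields section above, the sketch below shows one way the object annotations could be read with the Hugging Face `datasets` library. The label names are copied from the card; the loading call itself is an assumption about how the Hub copy of the dataset is typically consumed, not part of the original card.

```python
from datasets import load_dataset

ds = load_dataset("cppe-5", split="train")

# Category ids 0-4 as documented under Data Fields.
LABELS = ["Coverall", "Face_Shield", "Gloves", "Goggles", "Mask"]

sample = ds[0]
image = sample["image"]  # PIL.Image.Image, decoded only when this field is accessed
print(image.size)        # (width, height)

for box, cat in zip(sample["objects"]["bbox"], sample["objects"]["category"]):
    x, y, w, h = box     # COCO format: top-left corner plus width and height
    print(f"{LABELS[cat]}: x={x:.0f} y={y:.0f} w={w:.0f} h={h:.0f}")
```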
cppe-5
[ "task_categories:object-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "medical-personal-protective-equipment-detection", "arxiv:2112.09569", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["object-detection"], "task_ids": [], "paperswithcode_id": "cppe-5", "pretty_name": "CPPE - 5", "tags": ["medical-personal-protective-equipment-detection"], "dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "category", "dtype": {"class_label": {"names": {"0": "Coverall", "1": "Face_Shield", "2": "Gloves", "3": "Goggles", "4": "Mask"}}}}]}], "splits": [{"name": "train", "num_bytes": 240463364.0, "num_examples": 1000}, {"name": "test", "num_bytes": 4172164.0, "num_examples": 29}], "download_size": 241152653, "dataset_size": 244635528.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-04T07:54:46+00:00
[ "2112.09569" ]
[ "en" ]
TAGS #task_categories-object-detection #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #medical-personal-protective-equipment-detection #arxiv-2112.09569 #region-us
Dataset Card for CPPE - 5 ========================= Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: URL * Paper: CPPE-5: Medical Personal Protective Equipment Dataset * Leaderboard: URL * Point of Contact: URL@URL ### Dataset Summary CPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal to allow the study of subordinate categorization of medical personal protective equipments, which is not possible with other popular data sets that focus on broad level categories. Some features of this dataset are: * high quality images and annotations (~4.6 bounding boxes per image) * real-life images unlike any current such dataset * majority of non-iconic images (allowing easy deployment to real-world environments) ### Supported Tasks and Leaderboards * 'object-detection': The dataset can be used to train a model for Object Detection. This task has an active leaderboard which can be found at URL The metrics for this task are adopted from the COCO detection evaluation criteria, and include the mean Average Precision (AP) across IoU thresholds ranging from 0.50 to 0.95 at different scales. ### Languages English Dataset Structure ----------------- ### Data Instances A data point comprises an image and its object annotations. ### Data Fields * 'image': the image id * 'image': 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]' * 'width': the image width * 'height': the image height * 'objects': a dictionary containing bounding box metadata for the objects present on the image + 'id': the annotation id + 'area': the area of the bounding box + 'bbox': the object's bounding box (in the coco format) + 'category': the object's category, with possible values including 'Coverall' (0),'Face\_Shield' (1),'Gloves' (2),'Goggles' (3) and 'Mask' (4) ### Data Splits The data is split into training and testing set. The training set contains 1000 images and test set 29 images. Dataset Creation ---------------- ### Curation Rationale From the paper: > > With CPPE-5 dataset, we hope to facilitate research and use in applications at multiple public places to autonomously identify if a PPE (Personal Protective Equipment) kit has been worn and also which part of the PPE kit has been worn. One of the main aims with this dataset was to also capture a higher ratio of non-iconic images or non-canonical perspectives [5] of the objects in this dataset. We further hope to see high use of this dataset to aid in medical scenarios which would have a huge effect > worldwide. 
> > > ### Source Data #### Initial Data Collection and Normalization The images in the CPPE-5 dataset were collected using the following process: * Obtain Images from Flickr: Following the object categories we identified earlier, we first download images from Flickr and save them at the "Original" size. On Flickr, images are served at multiple different sizes (Square 75, Small 240, Large 1024, X-Large 4K etc.), the "Original" size is an exact copy of the image uploaded by author. * Extract relevant metadata: Flickr contains images each with searchable metadata, we extract the following relevant metadata: + A direct link to the original image on Flickr + Width and height of the image + Title given to the image by the author + Date and time the image was uploaded on + Flickr username of the author of the image + Flickr Name of the author of the image + Flickr profile of the author of the image + The License image is licensed under + MD5 hash of the original image * Obtain Images from Google Images: Due to the reasons we mention earlier, we only collect a very small proportion of images from Google Images. For these set of images we extract the following metadata: + A direct link to the original image + Width and height of the image + MD5 hash of the original image * Filter inappropriate images: Though very rare in the collected images, we also remove images containing inappropriate content using the safety filters on Flickr and Google Safe Search. * Filter near-similar images: We then remove near-duplicate images in the dataset using GIST descriptors #### Who are the source language producers? The images for this dataset were collected from Flickr and Google Images. ### Annotations #### Annotation process The dataset was labelled in two phases: the first phase included labelling 416 images and the second phase included labelling 613 images. For all the images in the dataset volunteers were provided the following table: as well as examples of: correctly labelled images, incorrectly labelled images, and not applicable images. Before the labelling task, each volunteer was provided with an exercise to verify if the volunteer was able to correctly identify categories as well as identify if an annotated image is correctly labelled, incorrectly labelled, or not applicable. The labelling process first involved two volunteers independently labelling an image from the dataset. In any of the cases that: the number of bounding boxes are different, the labels for on or more of the bounding boxes are different or two volunteer annotations are sufficiently different; a third volunteer compiles the result from the two annotations to come up with a correctly labelled image. After this step, a volunteer verifies the bounding box annotations. Following this method of labelling the dataset we ensured that all images were labelled accurately and contained exhaustive annotations. As a result of this, our dataset consists of 1029 high-quality, majorly non-iconic, and accurately annotated images. #### Who are the annotators? In both the phases crowd-sourcing techniques were used with multiple volunteers labelling the dataset using the open-source tool LabelImg. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators Dagli, Rishit, and Ali Mustufa Shaikh. 
### Licensing Information ### Contributions Thanks to @mariosasko for adding this dataset.
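As a usage illustration for the Data Fields and Data Splits sections above, here is a minimal sketch of loading CPPE-5 with the 🤗 Datasets library and reading one annotated example. It assumes the dataset is published on the Hugging Face Hub under the id `cppe-5`; the category list and the "index the sample before the column" access pattern come from the card itself, everything else is illustrative.

```python
# Minimal sketch: load CPPE-5 and inspect one example.
# Assumes the Hub id "cppe-5".
from datasets import load_dataset

ds = load_dataset("cppe-5")        # splits: "train" (1000 images), "test" (29 images)
example = ds["train"][0]           # index the sample first, then access columns

image = example["image"]           # PIL.Image.Image, decoded on access
width, height = example["width"], example["height"]

# "objects" comes back as parallel lists: one entry per annotated box.
categories = ["Coverall", "Face_Shield", "Gloves", "Goggles", "Mask"]
for bbox, cat in zip(example["objects"]["bbox"], example["objects"]["category"]):
    x, y, w, h = bbox              # COCO format: top-left corner plus width/height
    print(f"{categories[cat]}: ({x}, {y}), {w}x{h}")
```

Because `objects` is stored as a sequence of structs, it is returned as a dictionary of parallel lists, which is why the sketch zips `bbox` and `category` rather than iterating over box objects.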
[ "### Dataset Summary\n\n\nCPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal to allow the study of subordinate categorization of medical personal protective equipments, which is not possible with other popular data sets that focus on broad level categories.\n\n\nSome features of this dataset are:\n\n\n* high quality images and annotations (~4.6 bounding boxes per image)\n* real-life images unlike any current such dataset\n* majority of non-iconic images (allowing easy deployment to real-world environments)", "### Supported Tasks and Leaderboards\n\n\n* 'object-detection': The dataset can be used to train a model for Object Detection. This task has an active leaderboard which can be found at URL The metrics for this task are adopted from the COCO detection evaluation criteria, and include the mean Average Precision (AP) across IoU thresholds ranging from 0.50 to 0.95 at different scales.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA data point comprises an image and its object annotations.", "### Data Fields\n\n\n* 'image': the image id\n* 'image': 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'width': the image width\n* 'height': the image height\n* 'objects': a dictionary containing bounding box metadata for the objects present on the image\n\t+ 'id': the annotation id\n\t+ 'area': the area of the bounding box\n\t+ 'bbox': the object's bounding box (in the coco format)\n\t+ 'category': the object's category, with possible values including 'Coverall' (0),'Face\\_Shield' (1),'Gloves' (2),'Goggles' (3) and 'Mask' (4)", "### Data Splits\n\n\nThe data is split into training and testing set. The training set contains 1000 images and test set 29 images.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nFrom the paper:\n\n\n\n> \n> With CPPE-5 dataset, we hope to facilitate research and use in applications at multiple public places to autonomously identify if a PPE (Personal Protective Equipment) kit has been worn and also which part of the PPE kit has been worn. One of the main aims with this dataset was to also capture a higher ratio of non-iconic images or non-canonical perspectives [5] of the objects in this dataset. We further hope to see high use of this dataset to aid in medical scenarios which would have a huge effect\n> worldwide.\n> \n> \n>", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe images in the CPPE-5 dataset were collected using the following process:\n\n\n* Obtain Images from Flickr: Following the object categories we identified earlier, we first download images from Flickr and save them at the \"Original\" size. 
On Flickr, images are served at multiple different sizes (Square 75, Small 240, Large 1024, X-Large 4K etc.), the \"Original\" size is an exact copy of the image uploaded by author.\n* Extract relevant metadata: Flickr contains images each with searchable metadata, we extract the following relevant\nmetadata:\n\t+ A direct link to the original image on Flickr\n\t+ Width and height of the image\n\t+ Title given to the image by the author\n\t+ Date and time the image was uploaded on\n\t+ Flickr username of the author of the image\n\t+ Flickr Name of the author of the image\n\t+ Flickr profile of the author of the image\n\t+ The License image is licensed under\n\t+ MD5 hash of the original image\n* Obtain Images from Google Images: Due to the reasons we mention earlier, we only collect a very small proportion\nof images from Google Images. For these set of images we extract the following metadata:\n\t+ A direct link to the original image\n\t+ Width and height of the image\n\t+ MD5 hash of the original image\n* Filter inappropriate images: Though very rare in the collected images, we also remove images containing inappropriate content using the safety filters on Flickr and Google Safe Search.\n* Filter near-similar images: We then remove near-duplicate images in the dataset using GIST descriptors", "#### Who are the source language producers?\n\n\nThe images for this dataset were collected from Flickr and Google Images.", "### Annotations", "#### Annotation process\n\n\nThe dataset was labelled in two phases: the first phase included labelling 416 images and the second phase included labelling 613 images. For all the images in the dataset volunteers were provided the following table:\n\n\n\nas well as examples of: correctly labelled images, incorrectly labelled images, and not applicable images. Before the labelling task, each volunteer was provided with an exercise to verify if the volunteer was able to correctly identify categories as well as identify if an annotated image is correctly labelled, incorrectly labelled, or not applicable. The labelling process first involved two volunteers independently labelling an image from the dataset. In any of the cases that: the number of bounding boxes are different, the labels for on or more of the bounding boxes are different or two volunteer annotations are sufficiently different; a third volunteer compiles the result from the two annotations to come up with a correctly labelled image. After this step, a volunteer verifies the bounding box annotations. Following this method of labelling the dataset we ensured that all images were labelled accurately and contained exhaustive\nannotations. As a result of this, our dataset consists of 1029 high-quality, majorly non-iconic, and accurately annotated images.", "#### Who are the annotators?\n\n\nIn both the phases crowd-sourcing techniques were used with multiple volunteers labelling the dataset using the open-source tool LabelImg.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nDagli, Rishit, and Ali Mustufa Shaikh.", "### Licensing Information", "### Contributions\n\n\nThanks to @mariosasko for adding this dataset." ]
[ "TAGS\n#task_categories-object-detection #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #medical-personal-protective-equipment-detection #arxiv-2112.09569 #region-us \n", "### Dataset Summary\n\n\nCPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal to allow the study of subordinate categorization of medical personal protective equipments, which is not possible with other popular data sets that focus on broad level categories.\n\n\nSome features of this dataset are:\n\n\n* high quality images and annotations (~4.6 bounding boxes per image)\n* real-life images unlike any current such dataset\n* majority of non-iconic images (allowing easy deployment to real-world environments)", "### Supported Tasks and Leaderboards\n\n\n* 'object-detection': The dataset can be used to train a model for Object Detection. This task has an active leaderboard which can be found at URL The metrics for this task are adopted from the COCO detection evaluation criteria, and include the mean Average Precision (AP) across IoU thresholds ranging from 0.50 to 0.95 at different scales.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA data point comprises an image and its object annotations.", "### Data Fields\n\n\n* 'image': the image id\n* 'image': 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'width': the image width\n* 'height': the image height\n* 'objects': a dictionary containing bounding box metadata for the objects present on the image\n\t+ 'id': the annotation id\n\t+ 'area': the area of the bounding box\n\t+ 'bbox': the object's bounding box (in the coco format)\n\t+ 'category': the object's category, with possible values including 'Coverall' (0),'Face\\_Shield' (1),'Gloves' (2),'Goggles' (3) and 'Mask' (4)", "### Data Splits\n\n\nThe data is split into training and testing set. The training set contains 1000 images and test set 29 images.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nFrom the paper:\n\n\n\n> \n> With CPPE-5 dataset, we hope to facilitate research and use in applications at multiple public places to autonomously identify if a PPE (Personal Protective Equipment) kit has been worn and also which part of the PPE kit has been worn. One of the main aims with this dataset was to also capture a higher ratio of non-iconic images or non-canonical perspectives [5] of the objects in this dataset. We further hope to see high use of this dataset to aid in medical scenarios which would have a huge effect\n> worldwide.\n> \n> \n>", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe images in the CPPE-5 dataset were collected using the following process:\n\n\n* Obtain Images from Flickr: Following the object categories we identified earlier, we first download images from Flickr and save them at the \"Original\" size. 
On Flickr, images are served at multiple different sizes (Square 75, Small 240, Large 1024, X-Large 4K etc.), the \"Original\" size is an exact copy of the image uploaded by author.\n* Extract relevant metadata: Flickr contains images each with searchable metadata, we extract the following relevant\nmetadata:\n\t+ A direct link to the original image on Flickr\n\t+ Width and height of the image\n\t+ Title given to the image by the author\n\t+ Date and time the image was uploaded on\n\t+ Flickr username of the author of the image\n\t+ Flickr Name of the author of the image\n\t+ Flickr profile of the author of the image\n\t+ The License image is licensed under\n\t+ MD5 hash of the original image\n* Obtain Images from Google Images: Due to the reasons we mention earlier, we only collect a very small proportion\nof images from Google Images. For these set of images we extract the following metadata:\n\t+ A direct link to the original image\n\t+ Width and height of the image\n\t+ MD5 hash of the original image\n* Filter inappropriate images: Though very rare in the collected images, we also remove images containing inappropriate content using the safety filters on Flickr and Google Safe Search.\n* Filter near-similar images: We then remove near-duplicate images in the dataset using GIST descriptors", "#### Who are the source language producers?\n\n\nThe images for this dataset were collected from Flickr and Google Images.", "### Annotations", "#### Annotation process\n\n\nThe dataset was labelled in two phases: the first phase included labelling 416 images and the second phase included labelling 613 images. For all the images in the dataset volunteers were provided the following table:\n\n\n\nas well as examples of: correctly labelled images, incorrectly labelled images, and not applicable images. Before the labelling task, each volunteer was provided with an exercise to verify if the volunteer was able to correctly identify categories as well as identify if an annotated image is correctly labelled, incorrectly labelled, or not applicable. The labelling process first involved two volunteers independently labelling an image from the dataset. In any of the cases that: the number of bounding boxes are different, the labels for on or more of the bounding boxes are different or two volunteer annotations are sufficiently different; a third volunteer compiles the result from the two annotations to come up with a correctly labelled image. After this step, a volunteer verifies the bounding box annotations. Following this method of labelling the dataset we ensured that all images were labelled accurately and contained exhaustive\nannotations. As a result of this, our dataset consists of 1029 high-quality, majorly non-iconic, and accurately annotated images.", "#### Who are the annotators?\n\n\nIn both the phases crowd-sourcing techniques were used with multiple volunteers labelling the dataset using the open-source tool LabelImg.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nDagli, Rishit, and Ali Mustufa Shaikh.", "### Licensing Information", "### Contributions\n\n\nThanks to @mariosasko for adding this dataset." ]
[ 100, 125, 98, 12, 20, 273, 34, 141, 4, 349, 25, 5, 297, 39, 18, 7, 8, 14, 19, 6, 17 ]
[ "passage: TAGS\n#task_categories-object-detection #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #medical-personal-protective-equipment-detection #arxiv-2112.09569 #region-us \n### Dataset Summary\n\n\nCPPE - 5 (Medical Personal Protective Equipment) is a new challenging dataset with the goal to allow the study of subordinate categorization of medical personal protective equipments, which is not possible with other popular data sets that focus on broad level categories.\n\n\nSome features of this dataset are:\n\n\n* high quality images and annotations (~4.6 bounding boxes per image)\n* real-life images unlike any current such dataset\n* majority of non-iconic images (allowing easy deployment to real-world environments)### Supported Tasks and Leaderboards\n\n\n* 'object-detection': The dataset can be used to train a model for Object Detection. This task has an active leaderboard which can be found at URL The metrics for this task are adopted from the COCO detection evaluation criteria, and include the mean Average Precision (AP) across IoU thresholds ranging from 0.50 to 0.95 at different scales.### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA data point comprises an image and its object annotations.", "passage: ### Data Fields\n\n\n* 'image': the image id\n* 'image': 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'width': the image width\n* 'height': the image height\n* 'objects': a dictionary containing bounding box metadata for the objects present on the image\n\t+ 'id': the annotation id\n\t+ 'area': the area of the bounding box\n\t+ 'bbox': the object's bounding box (in the coco format)\n\t+ 'category': the object's category, with possible values including 'Coverall' (0),'Face\\_Shield' (1),'Gloves' (2),'Goggles' (3) and 'Mask' (4)### Data Splits\n\n\nThe data is split into training and testing set. The training set contains 1000 images and test set 29 images.\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nFrom the paper:\n\n\n\n> \n> With CPPE-5 dataset, we hope to facilitate research and use in applications at multiple public places to autonomously identify if a PPE (Personal Protective Equipment) kit has been worn and also which part of the PPE kit has been worn. One of the main aims with this dataset was to also capture a higher ratio of non-iconic images or non-canonical perspectives [5] of the objects in this dataset. We further hope to see high use of this dataset to aid in medical scenarios which would have a huge effect\n> worldwide.\n> \n> \n>### Source Data" ]
cfb6992c5ca9bad209323ed8e42e0cfc7e4178cf
# Dataset Card for CraigslistBargains ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Decoupling Strategy and Generation in Negotiation Dialogues](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/) - **Repository:** [Github: Stanford NLP Cocoa](https://github.com/stanfordnlp/cocoa/tree/master) - **Paper:** [Decoupling Strategy and Generation in Negotiation Dialogues](https://arxiv.org/abs/1808.09637) - **Leaderboard:** []() - **Point of Contact:** [He He](hehe@cs.nyu.edu) ### Dataset Summary We study negotiation dialogues where two agents, a buyer and a seller, negotiate over the price of an item for sale. We collected a dataset of more than 6K negotiation dialogues over multiple categories of products scraped from Craigslist. Our goal is to develop an agent that negotiates with humans through such conversations. The challenge is to handle both the negotiation strategy and the rich language for bargaining. To this end, we develop a modular framework which separates strategy learning from language generation. Specifically, we learn strategies in a coarse dialogue act space and instantiate that into utterances conditioned on dialogue history. ### Supported Tasks and Leaderboards ### Languages This dataset is English ## Dataset Structure ### Data Instances ``` { 'agent_info': { 'Bottomline': [ 'None', 'None' ], 'Role': [ 'buyer', 'seller' ], 'Target': [ 7.0, 10.0 ] }, 'agent_turn': [ 0, 1, ... ], 'dialogue_acts': { 'intent': [ 'init-price', 'unknown', ... ], 'price': [ 5.0, -1.0, ... ] }, 'items': { 'Category': [ 'phone', 'phone' ], 'Description': [ 'Charge two devices simultaneously on the go..., ... ], 'Images': [ 'phone/6149527852_0.jpg', 'phone/6149527852_0.jpg' ], 'Price': [ 10.0, 10.0 ], 'Title': [ 'Verizon Car Charger with Dual Output Micro USB and ...', ... ] }, 'utterance': [ 'Hi, not sure if the charger would work for my car...' 'It will work...', ... ] } ``` ### Data Fields - `agent_info`: Information about each of the agents taking part in the dialogue - `Bottomline`: TBD - `Role`: Whether the agent is buyer or seller - `Target`: Target price that the buyer/seller wants to hit in the negotiation - `agent_turn`: Agent taking the current turn in the dialogue (`int` index corresponding to `Role` above) - `dialogue_acts`: Rules-based information about the strategy of each agent for each turn - `intent`: The intent of the agent at the particular turn (offer, accept, etc.) - `price`: The current item price associated with the intent and turn in the bargaining process.
Default value for missing: (`-1`) - `items`: Information about the item the agents are bargaining for. **Note that there is an element for each of the fields below for each agent** - `Category`: Category of the item - `Description`: Description(s) of the item - `Images`: (comma delimited) strings of image names of the item - `Price`: Price(s) of the item. Default value for missing: (`-1`) - `Title`: Title(s) of the item - `utterance`: Utterance for each turn in the dialogue, corresponding to the agent in `agent_turn`. The utterance may be an empty string (`''`) for some turns if multiple dialogue acts take place after an utterance (e.g. there are often multiple dialogue acts associated with the closing of the bargaining process after all utterances have completed to describe the conclusion of the bargaining). ### Data Splits This dataset contains three splits, `train`, `validation` and `test`. Note that `test` is not provided with `dialogue_acts` information as described above. To ensure schema consistency across dataset splits, the `dialogue_acts` field in the `test` split is populated with the default values: `{"price": -1.0, "intent": ""}` The counts of examples in each split are as follows: | | Train | Valid | Test | | Input Examples | 5247 | 597 | 838 | | Average Dialogue Length | 9.14 | 9.17 | 9.24 | Note that ## Dataset Creation From the [source paper](https://arxiv.org/pdf/1808.09637.pdf) for this dataset: > To generate the negotiation scenarios, we > scraped postings on sfbay.craigslist.org > from the 6 most popular categories (housing, furniture, cars, bikes, phones, and electronics). Each > posting produces three scenarios with the buyer’s > target prices at 0.5x, 0.7x and 0.9x of the listing > price. Statistics of the scenarios are shown in Table 2. > We collected 6682 human-human dialogues on > AMT using the interface shown in Appendix A > Figure 2. The dataset statistics in Table 3 show > that CRAIGSLISTBARGAIN has longer dialogues > and more diverse utterances compared to prior > datasets. Furthermore, workers were encouraged > to embellish the item and negotiate side offers > such as free delivery or pick-up. This highly relatable scenario leads to richer dialogues such as > the one shown in Table 1. We also observed various persuasion techniques listed in Table 4 such as > embellishment, ### Curation Rationale See **Dataset Creation** ### Source Data See **Dataset Creation** #### Initial Data Collection and Normalization See **Dataset Creation** #### Who are the source language producers? See **Dataset Creation** ### Annotations If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs. #### Annotation process Annotations for the `dialogue_acts` in `train` and `test` were generated via a rules-based system which can be found in [this script](https://github.com/stanfordnlp/cocoa/blob/master/craigslistbargain/parse_dialogue.py). #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data [More Information Needed] ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information [More Information Needed] ### Dataset Curators He He and Derek Chen and Anusha Balakrishnan and Percy Liang Computer Science Department, Stanford University `{hehe,derekchen14,anusha,pliang}@cs.stanford.edu` The work through which this data was produced was supported by DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF15-1-0462 ### Licensing Information [More Information Needed] ### Citation Information ``` @misc{he2018decoupling, title={Decoupling Strategy and Generation in Negotiation Dialogues}, author={He He and Derek Chen and Anusha Balakrishnan and Percy Liang}, year={2018}, eprint={1808.09637}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@ZacharySBrown](https://github.com/ZacharySBrown) for adding this dataset.
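As a usage illustration of the fields described above, here is a minimal sketch that loads the dataset and prints one dialogue turn by turn, labelling each turn with its rules-based dialogue act. The Hub id `craigslist_bargains` and the printing loop are assumptions for illustration; the field names and the `{"price": -1.0, "intent": ""}` placeholder used to pad the `test` split come from the card.

```python
# Minimal sketch: load CraigslistBargains and print one dialogue turn by turn.
# Assumes the Hub id "craigslist_bargains".
from datasets import load_dataset

ds = load_dataset("craigslist_bargains")
dialogue = ds["train"][0]

roles = dialogue["agent_info"]["Role"]       # e.g. ["buyer", "seller"]
targets = dialogue["agent_info"]["Target"]   # per-agent target prices
print("Targets:", dict(zip(roles, targets)))

for turn, (agent, text, intent, price) in enumerate(
    zip(dialogue["agent_turn"],
        dialogue["utterance"],
        dialogue["dialogue_acts"]["intent"],
        dialogue["dialogue_acts"]["price"])
):
    act = ""
    if intent:                               # empty string = placeholder (test split)
        act = f" [{intent}]" if price == -1.0 else f" [{intent} @ {price}]"
    print(f"{turn:02d} {roles[agent]:>6}: {text}{act}")
```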
craigslist_bargains
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "arxiv:1808.09637", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["dialogue-modeling"], "paperswithcode_id": "craigslistbargains", "pretty_name": "CraigslistBargains", "dataset_info": {"features": [{"name": "agent_info", "sequence": [{"name": "Bottomline", "dtype": "string"}, {"name": "Role", "dtype": "string"}, {"name": "Target", "dtype": "float32"}]}, {"name": "agent_turn", "sequence": "int32"}, {"name": "dialogue_acts", "sequence": [{"name": "intent", "dtype": "string"}, {"name": "price", "dtype": "float32"}]}, {"name": "utterance", "sequence": "string"}, {"name": "items", "sequence": [{"name": "Category", "dtype": "string"}, {"name": "Images", "dtype": "string"}, {"name": "Price", "dtype": "float32"}, {"name": "Description", "dtype": "string"}, {"name": "Title", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 8538836, "num_examples": 5247}, {"name": "test", "num_bytes": 1353933, "num_examples": 838}, {"name": "validation", "num_bytes": 966032, "num_examples": 597}], "download_size": 25373618, "dataset_size": 10858801}}
2024-01-18T09:47:33+00:00
[ "1808.09637" ]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #arxiv-1808.09637 #region-us
# Dataset Card for CraigslistBargains ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Decoupling Strategy and Generation in Negotiation Dialogues - Repository: Github: Stanford NLP Cocoa - Paper: Decoupling Strategy and Generation in Negotiation Dialogues - Leaderboard: []() - Point of Contact: He He ### Dataset Summary We study negotiation dialogues where two agents, a buyer and a seller, negotiate over the price of an time for sale. We collected a dataset of more than 6K negotiation dialogues over multiple categories of products scraped from Craigslist. Our goal is to develop an agent that negotiates with humans through such conversations. The challenge is to handle both the negotiation strategy and the rich language for bargaining. To this end, we develop a modular framework which separates strategy learning from language generation. Specifically, we learn strategies in a coarse dialogue act space and instantiate that into utterances conditioned on dialogue history. ### Supported Tasks and Leaderboards ### Languages This dataset is English ## Dataset Structure ### Data Instances ### Data Fields - 'agent_info': Information about each of the agents taking part in the dialogue - 'Bottomline': TBD - 'Role': Whether the agent is buyer or seller - 'Target': Target price that the buyer/seller wants to hit in the negotiation - 'agent_turn': Agent taking the current turn in the dialogue ('int' index corresponding to 'Role' above) - 'dialogue_acts': Rules-based information about the strategy of each agent for each turn - 'intent': The intent of the agent at the particular turn (offer, accept, etc.) - 'price': The current item price associated with the intent and turn in the bargaining process. Default value for missing: ('-1') - 'items': Information about the item the agents are bargaining for. Note that there is an elembet for each of the fields below for each agent - 'Category': Category of the item - 'Description': Description(s) of the item - 'Images': (comma delimited) strings of image names of the item - 'Price': Price(s) of the item. Default value for missing: ('-1') - 'Title': Title(s) of the item - 'utterance': Utterance for each turn in the dialogue, corresponding to the agent in 'agent_turns'. The utterance may be an empty string ('''') for some turns if multiple dialogue acts take place after an utterance (e.g. there are often multiple dialogue acts associated with the closing of the bargaining process after all utterances have completed to describe the conclusion of the bargaining). ### Data Splits This dataset contains three splits, 'train', 'validation' and 'test'. Note that 'test' is not provided with 'dialogue_acts' information as described above. 
To ensure schema consistency across dataset splits, the 'dialogue_acts' field in the 'test' split is populated with the default values: '{"price": -1.0, "intent": ""}' The counts of examples in each split are as follows: | | Train | Valid | Test | | Input Examples | 5247 | 597 | 838 | | Average Dialogue Length | 9.14 | 9.17 | 9.24 | Note that ## Dataset Creation From the source paper for this dataset: > To generate the negotiation scenarios, we > scraped postings on URL > from the 6 most popular categories (housing, furniture, cars, bikes, phones, and electronics). Each > posting produces three scenarios with the buyer’s > target prices at 0.5x, 0.7x and 0.9x of the listing > price. Statistics of the scenarios are shown in Table 2. > We collected 6682 human-human dialogues on > AMT using the interface shown in Appendix A > Figure 2. The dataset statistics in Table 3 show > that CRAIGSLISTBARGAIN has longer dialogues > and more diverse utterances compared to prior > datasets. Furthermore, workers were encouraged > to embellish the item and negotiate side offers > such as free delivery or pick-up. This highly relatable scenario leads to richer dialogues such as > the one shown in Table 1. We also observed various persuasion techniques listed in Table 4 such as > embellishment, ### Curation Rationale See Dataset Creation ### Source Data See Dataset Creation #### Initial Data Collection and Normalization See Dataset Creation #### Who are the source language producers? See Dataset Creation ### Annotations If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs. #### Annotation process Annotations for the 'dialogue_acts' in 'train' and 'test' were generated via a rules-based system which can be found in this script #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators He He and Derek Chen and Anusha Balakrishnan and Percy Liang Computer Science Department, Stanford University '{hehe,derekchen14,anusha,pliang}@URL' The work through which this data was produced was supported by DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF15-1-0462 ### Licensing Information ### Contributions Thanks to @ZacharySBrown for adding this dataset.
[ "# Dataset Card for CraigslistBargains", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Decoupling Strategy and Generation in Negotiation Dialogues\n- Repository: Github: Stanford NLP Cocoa\n- Paper: Decoupling Strategy and Generation in Negotiation Dialogues\n- Leaderboard: []()\n- Point of Contact: He He", "### Dataset Summary\n\nWe study negotiation dialogues where two agents, a buyer and a seller, negotiate over the price of an time for sale. We collected a dataset of more than 6K negotiation dialogues over multiple categories of products scraped from Craigslist. Our goal is to develop an agent that negotiates with humans through such conversations. The challenge is to handle both the negotiation strategy and the rich language for bargaining. To this end, we develop a modular framework which separates strategy learning from language generation. Specifically, we learn strategies in a coarse dialogue act space and instantiate that into utterances conditioned on dialogue history.", "### Supported Tasks and Leaderboards", "### Languages\n\nThis dataset is English", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n\n- 'agent_info': Information about each of the agents taking part in the dialogue\n - 'Bottomline': TBD\n - 'Role': Whether the agent is buyer or seller\n - 'Target': Target price that the buyer/seller wants to hit in the negotiation\n- 'agent_turn': Agent taking the current turn in the dialogue ('int' index corresponding to 'Role' above)\n- 'dialogue_acts': Rules-based information about the strategy of each agent for each turn\n - 'intent': The intent of the agent at the particular turn (offer, accept, etc.)\n - 'price': The current item price associated with the intent and turn in the bargaining process. Default value for missing: ('-1')\n- 'items': Information about the item the agents are bargaining for. Note that there is an elembet for each of the fields below for each agent\n - 'Category': Category of the item\n - 'Description': Description(s) of the item\n - 'Images': (comma delimited) strings of image names of the item\n - 'Price': Price(s) of the item. Default value for missing: ('-1')\n - 'Title': Title(s) of the item\n- 'utterance': Utterance for each turn in the dialogue, corresponding to the agent in 'agent_turns'. The utterance may be an empty string ('''') for some turns if multiple dialogue acts take place after an utterance (e.g. there are often multiple dialogue acts associated with the closing of the bargaining process after all utterances have completed to describe the conclusion of the bargaining).", "### Data Splits\n\nThis dataset contains three splits, 'train', 'validation' and 'test'. Note that 'test' is not provided with 'dialogue_acts' information as described above. 
To ensure schema consistency across dataset splits, the 'dialogue_acts' field in the 'test' split is populated with the default values: '{\"price\": -1.0, \"intent\": \"\"}'\n\nThe counts of examples in each split are as follows:\n\n| | Train | Valid | Test |\n| Input Examples | 5247 | 597 | 838 |\n| Average Dialogue Length | 9.14 | 9.17 | 9.24 |\n\nNote that", "## Dataset Creation\n\nFrom the source paper for this dataset: \n\n> To generate the negotiation scenarios, we\n> scraped postings on URL\n> from the 6 most popular categories (housing, furniture, cars, bikes, phones, and electronics). Each\n> posting produces three scenarios with the buyer’s\n> target prices at 0.5x, 0.7x and 0.9x of the listing\n> price. Statistics of the scenarios are shown in Table 2.\n> We collected 6682 human-human dialogues on\n> AMT using the interface shown in Appendix A\n> Figure 2. The dataset statistics in Table 3 show\n> that CRAIGSLISTBARGAIN has longer dialogues\n> and more diverse utterances compared to prior\n> datasets. Furthermore, workers were encouraged\n> to embellish the item and negotiate side offers\n> such as free delivery or pick-up. This highly relatable scenario leads to richer dialogues such as\n> the one shown in Table 1. We also observed various persuasion techniques listed in Table 4 such as\n> embellishment,", "### Curation Rationale\n\nSee Dataset Creation", "### Source Data\n\nSee Dataset Creation", "#### Initial Data Collection and Normalization\n\nSee Dataset Creation", "#### Who are the source language producers?\n\nSee Dataset Creation", "### Annotations\n\nIf the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.", "#### Annotation process\n\nAnnotations for the 'dialogue_acts' in 'train' and 'test' were generated via a rules-based system which can be found in this script", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nHe He and Derek Chen and Anusha Balakrishnan and Percy Liang\nComputer Science Department, Stanford University\n'{hehe,derekchen14,anusha,pliang}@URL'\n\nThe work through which this data was produced was supported by\nDARPA Communicating with Computers (CwC)\nprogram under ARO prime contract no. W911NF15-1-0462", "### Licensing Information", "### Contributions\n\nThanks to @ZacharySBrown for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #arxiv-1808.09637 #region-us \n", "# Dataset Card for CraigslistBargains", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Decoupling Strategy and Generation in Negotiation Dialogues\n- Repository: Github: Stanford NLP Cocoa\n- Paper: Decoupling Strategy and Generation in Negotiation Dialogues\n- Leaderboard: []()\n- Point of Contact: He He", "### Dataset Summary\n\nWe study negotiation dialogues where two agents, a buyer and a seller, negotiate over the price of an time for sale. We collected a dataset of more than 6K negotiation dialogues over multiple categories of products scraped from Craigslist. Our goal is to develop an agent that negotiates with humans through such conversations. The challenge is to handle both the negotiation strategy and the rich language for bargaining. To this end, we develop a modular framework which separates strategy learning from language generation. Specifically, we learn strategies in a coarse dialogue act space and instantiate that into utterances conditioned on dialogue history.", "### Supported Tasks and Leaderboards", "### Languages\n\nThis dataset is English", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n\n- 'agent_info': Information about each of the agents taking part in the dialogue\n - 'Bottomline': TBD\n - 'Role': Whether the agent is buyer or seller\n - 'Target': Target price that the buyer/seller wants to hit in the negotiation\n- 'agent_turn': Agent taking the current turn in the dialogue ('int' index corresponding to 'Role' above)\n- 'dialogue_acts': Rules-based information about the strategy of each agent for each turn\n - 'intent': The intent of the agent at the particular turn (offer, accept, etc.)\n - 'price': The current item price associated with the intent and turn in the bargaining process. Default value for missing: ('-1')\n- 'items': Information about the item the agents are bargaining for. Note that there is an elembet for each of the fields below for each agent\n - 'Category': Category of the item\n - 'Description': Description(s) of the item\n - 'Images': (comma delimited) strings of image names of the item\n - 'Price': Price(s) of the item. Default value for missing: ('-1')\n - 'Title': Title(s) of the item\n- 'utterance': Utterance for each turn in the dialogue, corresponding to the agent in 'agent_turns'. The utterance may be an empty string ('''') for some turns if multiple dialogue acts take place after an utterance (e.g. there are often multiple dialogue acts associated with the closing of the bargaining process after all utterances have completed to describe the conclusion of the bargaining).", "### Data Splits\n\nThis dataset contains three splits, 'train', 'validation' and 'test'. 
Note that 'test' is not provided with 'dialogue_acts' information as described above. To ensure schema consistency across dataset splits, the 'dialogue_acts' field in the 'test' split is populated with the default values: '{\"price\": -1.0, \"intent\": \"\"}'\n\nThe counts of examples in each split are as follows:\n\n| | Train | Valid | Test |\n| Input Examples | 5247 | 597 | 838 |\n| Average Dialogue Length | 9.14 | 9.17 | 9.24 |\n\nNote that", "## Dataset Creation\n\nFrom the source paper for this dataset: \n\n> To generate the negotiation scenarios, we\n> scraped postings on URL\n> from the 6 most popular categories (housing, furniture, cars, bikes, phones, and electronics). Each\n> posting produces three scenarios with the buyer’s\n> target prices at 0.5x, 0.7x and 0.9x of the listing\n> price. Statistics of the scenarios are shown in Table 2.\n> We collected 6682 human-human dialogues on\n> AMT using the interface shown in Appendix A\n> Figure 2. The dataset statistics in Table 3 show\n> that CRAIGSLISTBARGAIN has longer dialogues\n> and more diverse utterances compared to prior\n> datasets. Furthermore, workers were encouraged\n> to embellish the item and negotiate side offers\n> such as free delivery or pick-up. This highly relatable scenario leads to richer dialogues such as\n> the one shown in Table 1. We also observed various persuasion techniques listed in Table 4 such as\n> embellishment,", "### Curation Rationale\n\nSee Dataset Creation", "### Source Data\n\nSee Dataset Creation", "#### Initial Data Collection and Normalization\n\nSee Dataset Creation", "#### Who are the source language producers?\n\nSee Dataset Creation", "### Annotations\n\nIf the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.", "#### Annotation process\n\nAnnotations for the 'dialogue_acts' in 'train' and 'test' were generated via a rules-based system which can be found in this script", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nHe He and Derek Chen and Anusha Balakrishnan and Percy Liang\nComputer Science Department, Stanford University\n'{hehe,derekchen14,anusha,pliang}@URL'\n\nThe work through which this data was produced was supported by\nDARPA Communicating with Computers (CwC)\nprogram under ARO prime contract no. W911NF15-1-0462", "### Licensing Information", "### Contributions\n\nThanks to @ZacharySBrown for adding this dataset." ]
[ 111, 11, 120, 71, 151, 10, 9, 6, 6, 400, 177, 242, 12, 9, 15, 15, 32, 43, 9, 8, 8, 7, 8, 7, 5, 88, 6, 20 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #arxiv-1808.09637 #region-us \n# Dataset Card for CraigslistBargains## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Decoupling Strategy and Generation in Negotiation Dialogues\n- Repository: Github: Stanford NLP Cocoa\n- Paper: Decoupling Strategy and Generation in Negotiation Dialogues\n- Leaderboard: []()\n- Point of Contact: He He### Dataset Summary\n\nWe study negotiation dialogues where two agents, a buyer and a seller, negotiate over the price of an time for sale. We collected a dataset of more than 6K negotiation dialogues over multiple categories of products scraped from Craigslist. Our goal is to develop an agent that negotiates with humans through such conversations. The challenge is to handle both the negotiation strategy and the rich language for bargaining. To this end, we develop a modular framework which separates strategy learning from language generation. Specifically, we learn strategies in a coarse dialogue act space and instantiate that into utterances conditioned on dialogue history.### Supported Tasks and Leaderboards### Languages\n\nThis dataset is English## Dataset Structure### Data Instances", "passage: ### Data Fields\n\n\n- 'agent_info': Information about each of the agents taking part in the dialogue\n - 'Bottomline': TBD\n - 'Role': Whether the agent is buyer or seller\n - 'Target': Target price that the buyer/seller wants to hit in the negotiation\n- 'agent_turn': Agent taking the current turn in the dialogue ('int' index corresponding to 'Role' above)\n- 'dialogue_acts': Rules-based information about the strategy of each agent for each turn\n - 'intent': The intent of the agent at the particular turn (offer, accept, etc.)\n - 'price': The current item price associated with the intent and turn in the bargaining process. Default value for missing: ('-1')\n- 'items': Information about the item the agents are bargaining for. Note that there is an elembet for each of the fields below for each agent\n - 'Category': Category of the item\n - 'Description': Description(s) of the item\n - 'Images': (comma delimited) strings of image names of the item\n - 'Price': Price(s) of the item. Default value for missing: ('-1')\n - 'Title': Title(s) of the item\n- 'utterance': Utterance for each turn in the dialogue, corresponding to the agent in 'agent_turns'. The utterance may be an empty string ('''') for some turns if multiple dialogue acts take place after an utterance (e.g. there are often multiple dialogue acts associated with the closing of the bargaining process after all utterances have completed to describe the conclusion of the bargaining).### Data Splits\n\nThis dataset contains three splits, 'train', 'validation' and 'test'. 
Note that 'test' is not provided with 'dialogue_acts' information as described above. To ensure schema consistency across dataset splits, the 'dialogue_acts' field in the 'test' split is populated with the default values: '{\"price\": -1.0, \"intent\": \"\"}'\n\nThe counts of examples in each split are as follows:\n\n| | Train | Valid | Test |\n| Input Examples | 5247 | 597 | 838 |\n| Average Dialogue Length | 9.14 | 9.17 | 9.24 |\n\nNote that## Dataset Creation\n\nFrom the source paper for this dataset: \n\n> To generate the negotiation scenarios, we\n> scraped postings on URL\n> from the 6 most popular categories (housing, furniture, cars, bikes, phones, and electronics). Each\n> posting produces three scenarios with the buyer’s\n> target prices at 0.5x, 0.7x and 0.9x of the listing\n> price. Statistics of the scenarios are shown in Table 2.\n> We collected 6682 human-human dialogues on\n> AMT using the interface shown in Appendix A\n> Figure 2. The dataset statistics in Table 3 show\n> that CRAIGSLISTBARGAIN has longer dialogues\n> and more diverse utterances compared to prior\n> datasets. Furthermore, workers were encouraged\n> to embellish the item and negotiate side offers\n> such as free delivery or pick-up. This highly relatable scenario leads to richer dialogues such as\n> the one shown in Table 1. We also observed various persuasion techniques listed in Table 4 such as\n> embellishment,### Curation Rationale\n\nSee Dataset Creation### Source Data\n\nSee Dataset Creation#### Initial Data Collection and Normalization\n\nSee Dataset Creation#### Who are the source language producers?\n\nSee Dataset Creation### Annotations\n\nIf the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs." ]
ce0f4c7c7217e637a0a1243236713ec34e343a94
# Dataset Card for Common Crawl Domain Names ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/google-research-datasets/common-crawl-domain-names - **Repository:** https://github.com/google-research-datasets/common-crawl-domain-names - **Paper:** https://arxiv.org/pdf/2011.03138 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Corpus of domain names scraped from Common Crawl and manually annotated to add word boundaries (e.g. "commoncrawl" to "common crawl"). Breaking [domain names](https://developer.mozilla.org/en-US/docs/Learn/Common_questions/What_is_a_URL) such as "openresearch" into component words "open" and "research" is important for applications such as Text-to-Speech synthesis and web search. [Common Crawl](https://commoncrawl.org/) is an open repository of web crawl data that can be accessed and analyzed by anyone. Specifically, we scraped the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. "OpenBSD"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. "NASA"), so we had to manually annotate the data. ### Supported Tasks and Leaderboards - Text-to-Speech synthesis - Web search ### Languages en: English ## Dataset Structure ### Data Instances Each sample is an example of space separated segments of a domain name. The examples are stored in their original letter casing, but harder and more interesting examples can be generated by lowercasing the input first. For example: ``` Open B S D NASA ASAP Workouts ``` ### Data Fields - `example`: a `string` feature: space separated segments of a domain name. ### Data Splits | split | size | trivial | avg_input_length | avg_segments | |-------|-------|---------|------------------|--------------| | train | 17572 | 13718 | 12.63 | 2.65 | | eval | 1953 | 1536 | 12.77 | 2.67 | | test | 2170 | 1714 | 12.63 | 2.66 | ## Dataset Creation ### Curation Rationale The dataset was curated by scraping the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. "OpenBSD"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. "NASA"), so the curators of the dataset had to manually annotate the data. ### Source Data #### Initial Data Collection and Normalization Corpus of domain names scraped from Common Crawl and manually annotated to add word boundaries #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The annotators are the curators of this dataset. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data [More Information Needed] ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The curators of this dataset are [Jae Hun Ro](https://github.com/JaeHunRo) and [mwurts4google](https://github.com/mwurts4google), who are the contributors of the official Github repository for this dataset. Since the account handles of the other curators are currently unknown, the authors of the paper linked to this dataset are also listed here as curators: [Hao Zhang](https://arxiv.org/search/cs?searchtype=author&query=Zhang%2C+H), [Jae Ro](https://arxiv.org/search/cs?searchtype=author&query=Ro%2C+J), and [Richard Sproat](https://arxiv.org/search/cs?searchtype=author&query=Sproat%2C+R). ### Licensing Information [MIT License](https://github.com/google-research-datasets/common-crawl-domain-names/blob/master/LICENSE) ### Citation Information ``` @inproceedings{zrs2020urlsegmentation, title={Semi-supervised URL Segmentation with Recurrent Neural Networks Pre-trained on Knowledge Graph Entities}, author={Hao Zhang and Jae Ro and Richard William Sproat}, booktitle={The 28th International Conference on Computational Linguistics (COLING 2020)}, year={2020} } ``` ### Contributions Thanks to [@Karthik-Bhaskar](https://github.com/Karthik-Bhaskar) for adding this dataset.
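As an illustration of the segmentation task described in this card, here is a minimal sketch that loads the corpus and turns each example into a (joined input, space-separated target) pair of the kind a Text-to-Speech front-end or URL segmenter could train on. The Hub id `crawl_domain` is assumed; lowercasing the input to produce harder, more realistic examples follows the suggestion in the Data Instances section, while the column names `source` and `target` are illustrative.

```python
# Minimal sketch: build URL-segmentation training pairs from the corpus.
# Assumes the Hub id "crawl_domain"; each raw example is a space-separated,
# case-preserved segmentation such as "Open B S D" or "ASAP Workouts".
from datasets import load_dataset

ds = load_dataset("crawl_domain")

def to_pair(example):
    segments = example["example"].split()
    # Real domain names carry no casing cues, so lowercasing and joining the
    # segments yields the kind of input a deployed segmenter actually sees.
    source = "".join(segments).lower()     # e.g. "openbsd"
    target = " ".join(segments).lower()    # e.g. "open b s d"
    return {"source": source, "target": target}

pairs = ds["train"].map(to_pair, remove_columns=["example"])
print(pairs[0])
```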
crawl_domain
[ "task_categories:other", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-Common-Crawl", "source_datasets:original", "language:en", "license:mit", "web-search", "text-to-speech", "arxiv:2011.03138", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated", "found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-Common-Crawl", "original"], "task_categories": ["other"], "task_ids": [], "paperswithcode_id": "common-crawl-domain-names", "pretty_name": "Common Crawl Domain Names", "tags": ["web-search", "text-to-speech"], "dataset_info": {"features": [{"name": "example", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 321134, "num_examples": 17572}, {"name": "test", "num_bytes": 39712, "num_examples": 2170}, {"name": "validation", "num_bytes": 36018, "num_examples": 1953}], "download_size": 331763, "dataset_size": 396864}}
2024-01-18T09:48:12+00:00
[ "2011.03138" ]
[ "en" ]
TAGS #task_categories-other #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-Common-Crawl #source_datasets-original #language-English #license-mit #web-search #text-to-speech #arxiv-2011.03138 #region-us
Dataset Card for Common Crawl Domain Names ========================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: * Point of Contact: ### Dataset Summary Corpus of domain names scraped from Common Crawl and manually annotated to add word boundaries (e.g. "commoncrawl" to "common crawl"). Breaking domain names such as "openresearch" into component words "open" and "research" is important for applications such as Text-to-Speech synthesis and web search. Common Crawl is an open repository of web crawl data that can be accessed and analyzed by anyone. Specifically, we scraped the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. "OpenBSD"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. "NASA"), so we had to manually annotate the data. ### Supported Tasks and Leaderboards * Text-to-Speech synthesis * Web search ### Languages en: English Dataset Structure ----------------- ### Data Instances Each sample is an example of space separated segments of a domain name. The examples are stored in their original letter casing, but harder and more interesting examples can be generated by lowercasing the input first. For example: ### Data Fields * 'example': a 'string' feature: space separated segments of a domain name. ### Data Splits Dataset Creation ---------------- ### Curation Rationale The dataset was curated by scraping the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. "OpenBSD"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. "NASA"), so the curators of the dataset had to manually annotate the data. ### Source Data #### Initial Data Collection and Normalization Corpus of domain names scraped from Common Crawl and manually annotated to add word boundaries #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? The annotators are the curators of this dataset ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators The curators of this dataset are Jae Hun Ro and mwurts4google, who are the contributors of the official Github repository for this dataset. Since the account handles of other curators are unknown currently, the authors of the paper linked to this dataset is mentioned here as curators, Hao Zhang, Jae Ro, and Richard Sproat. ### Licensing Information MIT License ### Contributions Thanks to @Karthik-Bhaskar for adding this dataset.
[ "### Dataset Summary\n\n\nCorpus of domain names scraped from Common Crawl and manually annotated to add word boundaries (e.g. \"commoncrawl\" to \"common crawl\").\n\n\nBreaking domain names such as \"openresearch\" into component words \"open\" and \"research\" is important for applications such as Text-to-Speech synthesis and web search. Common Crawl is an open repository of web crawl data that can be accessed and analyzed by anyone. Specifically, we scraped the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. \"OpenBSD\"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. \"NASA\"), so we had to manually annotate the data.", "### Supported Tasks and Leaderboards\n\n\n* Text-to-Speech synthesis\n* Web search", "### Languages\n\n\nen: English\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach sample is an example of space separated segments of a domain name. The examples are stored in their original letter casing, but harder and more interesting examples can be generated by lowercasing the input first.\n\n\nFor example:", "### Data Fields\n\n\n* 'example': a 'string' feature: space separated segments of a domain name.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by scraping the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. \"OpenBSD\"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. \"NASA\"), so the curators of the dataset had to manually annotate the data.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nCorpus of domain names scraped from Common Crawl and manually annotated to add word boundaries", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\n\nThe annotators are the curators of this dataset", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe curators of this dataset are Jae Hun Ro and mwurts4google, who are the contributors of the official Github repository for this dataset. Since the account handles of other curators are unknown currently, the authors of the paper linked to this dataset is mentioned here as curators, Hao Zhang, Jae Ro, and Richard Sproat.", "### Licensing Information\n\n\nMIT License", "### Contributions\n\n\nThanks to @Karthik-Bhaskar for adding this dataset." ]
[ "TAGS\n#task_categories-other #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-Common-Crawl #source_datasets-original #language-English #license-mit #web-search #text-to-speech #arxiv-2011.03138 #region-us \n", "### Dataset Summary\n\n\nCorpus of domain names scraped from Common Crawl and manually annotated to add word boundaries (e.g. \"commoncrawl\" to \"common crawl\").\n\n\nBreaking domain names such as \"openresearch\" into component words \"open\" and \"research\" is important for applications such as Text-to-Speech synthesis and web search. Common Crawl is an open repository of web crawl data that can be accessed and analyzed by anyone. Specifically, we scraped the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. \"OpenBSD\"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. \"NASA\"), so we had to manually annotate the data.", "### Supported Tasks and Leaderboards\n\n\n* Text-to-Speech synthesis\n* Web search", "### Languages\n\n\nen: English\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach sample is an example of space separated segments of a domain name. The examples are stored in their original letter casing, but harder and more interesting examples can be generated by lowercasing the input first.\n\n\nFor example:", "### Data Fields\n\n\n* 'example': a 'string' feature: space separated segments of a domain name.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by scraping the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. \"OpenBSD\"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. \"NASA\"), so the curators of the dataset had to manually annotate the data.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nCorpus of domain names scraped from Common Crawl and manually annotated to add word boundaries", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\n\nThe annotators are the curators of this dataset", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe curators of this dataset are Jae Hun Ro and mwurts4google, who are the contributors of the official Github repository for this dataset. Since the account handles of other curators are unknown currently, the authors of the paper linked to this dataset is mentioned here as curators, Hao Zhang, Jae Ro, and Richard Sproat.", "### Licensing Information\n\n\nMIT License", "### Contributions\n\n\nThanks to @Karthik-Bhaskar for adding this dataset." ]
[ 132, 192, 24, 14, 58, 28, 11, 97, 4, 32, 10, 5, 5, 21, 18, 7, 8, 14, 93, 8, 20 ]
[ "passage: TAGS\n#task_categories-other #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-Common-Crawl #source_datasets-original #language-English #license-mit #web-search #text-to-speech #arxiv-2011.03138 #region-us \n### Dataset Summary\n\n\nCorpus of domain names scraped from Common Crawl and manually annotated to add word boundaries (e.g. \"commoncrawl\" to \"common crawl\").\n\n\nBreaking domain names such as \"openresearch\" into component words \"open\" and \"research\" is important for applications such as Text-to-Speech synthesis and web search. Common Crawl is an open repository of web crawl data that can be accessed and analyzed by anyone. Specifically, we scraped the plaintext (WET) extracts for domain names from URLs that contained diverse letter casing (e.g. \"OpenBSD\"). Although in the previous example, segmentation is trivial using letter casing, this was not always the case (e.g. \"NASA\"), so we had to manually annotate the data.### Supported Tasks and Leaderboards\n\n\n* Text-to-Speech synthesis\n* Web search### Languages\n\n\nen: English\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nEach sample is an example of space separated segments of a domain name. The examples are stored in their original letter casing, but harder and more interesting examples can be generated by lowercasing the input first.\n\n\nFor example:### Data Fields\n\n\n* 'example': a 'string' feature: space separated segments of a domain name.### Data Splits\n\n\n\nDataset Creation\n----------------" ]
57e33b4cfd74bf5d6a1aee45a0a3eaf3bfa16a56
# Dataset Card for "crd3" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [CRD3 homepage](https://github.com/RevanthRameshkumar/CRD3) - **Repository:** [CRD3 repository](https://github.com/RevanthRameshkumar/CRD3) - **Paper:** [Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset](https://www.aclweb.org/anthology/2020.acl-main.459/) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game. The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail, and semantic ties to the previous dialogues. ### Supported Tasks and Leaderboards `summarization`: The dataset can be used to train a model for abstractive summarization. A [fast abstractive summarization-RL](https://github.com/ChenRocks/fast_abs_rl) model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18. ### Languages The text in the dataset is in English, as spoken by actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` { "alignment_score": 3.679936647415161, "chunk": "Wish them a Happy Birthday on their Facebook and Twitter pages! Also, as a reminder: D&D Beyond streams their weekly show (\"And Beyond\") every Wednesday on twitch.tv/dndbeyond.", "chunk_id": 1, "turn_end": 6, "turn_num": 4, "turn_start": 4, "turns": { "names": ["SAM"], "utterances": ["Yesterday, guys, was D&D Beyond's first one--", "first one-year anniversary. Take two. Hey guys,", "yesterday was D&D Beyond's one-year anniversary.", "Wish them a happy birthday on their Facebook and", "Twitter pages."] } } ``` ### Data Fields The data fields are the same among all splits. - `chunk`: a `string` feature. - `chunk_id`: a `int32` feature. 
- `turn_start`: a `int32` feature.
- `turn_end`: a `int32` feature.
- `alignment_score`: a `float32` feature.
- `turn_num`: a `int32` feature.
- `turns`: a dictionary feature containing:
  - `names`: a `string` feature.
  - `utterances`: a `string` feature.

### Data Splits

| name | train |validation| test |
|-------|------:|---------:|------:|
|default|38,969| 6,327|7,500|

## Dataset Creation

### Curation Rationale

Dialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics. Current paradigms in summarization modeling have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain of long, story-telling dialogue. CRD3 offers a linguistically rich dataset to explore these domains.

### Source Data

#### Initial Data Collection and Normalization

Dungeons and Dragons is a popular roleplaying game that is driven by structured storytelling. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons. This dataset consists of 159 episodes of the show, where the episodes are transcribed. Inconsistencies (e.g. spelling of speaker names) were manually resolved.

The abstractive summaries were collected from the [Critical Role Fandom wiki](https://criticalrole.fandom.com/).

#### Who are the source language producers?

The language producers are actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.

### Annotations

#### Annotation process

[N/A]

#### Who are the annotators?

[N/A]

### Personal and Sensitive Information

[N/A]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

CRTranscript provided transcripts of the show; contributors of the Critical Role Wiki provided the abstractive summaries.

### Licensing Information

This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/), corresponding to the license of the Critical Role Wiki (https://criticalrole.fandom.com/).

### Citation Information

```bibtex
@inproceedings{rameshkumar-bailey-2020-storytelling,
    title = {Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset},
    author = {Rameshkumar, Revanth and Bailey, Peter},
    year = {2020},
    publisher = {Association for Computational Linguistics},
    conference = {ACL}
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
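The following is a minimal usage sketch for the fields described in this card. It assumes the dataset is loadable from the Hugging Face Hub under the id `crd3` (the id recorded for this dataset); the exact nested representation of the `turns` feature may differ depending on the `datasets` library version.

```python
from datasets import load_dataset  # pip install datasets

# Minimal sketch: load CRD3 and look at one aligned (summary chunk, dialogue turns) pair.
crd3 = load_dataset("crd3")

example = crd3["train"][0]
print(example["chunk"])                            # abstractive summary chunk
print(example["turn_start"], example["turn_end"])  # span of aligned dialogue turns
print(example["alignment_score"])                  # chunk-to-turns alignment score
print(example["turns"])                            # speaker names and utterances
```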
crd3
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization", "text-generation", "fill-mask"], "task_ids": ["dialogue-modeling"], "paperswithcode_id": "crd3", "pretty_name": "CRD3 (Critical Role Dungeons and Dragons Dataset)", "dataset_info": {"features": [{"name": "chunk", "dtype": "string"}, {"name": "chunk_id", "dtype": "int32"}, {"name": "turn_start", "dtype": "int32"}, {"name": "turn_end", "dtype": "int32"}, {"name": "alignment_score", "dtype": "float32"}, {"name": "turns", "list": [{"name": "names", "sequence": "string"}, {"name": "utterances", "sequence": "string"}, {"name": "number", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 236605152, "num_examples": 38969}, {"name": "test", "num_bytes": 40269203, "num_examples": 7500}, {"name": "validation", "num_bytes": 41543528, "num_examples": 6327}], "download_size": 117519820, "dataset_size": 318417883}}
2024-01-18T09:48:37+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #region-us
Dataset Card for "crd3" ======================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: CRD3 homepage * Repository: CRD3 repository * Paper: Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset * Point of Contact: ### Dataset Summary Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game. The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player collaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail, and semantic ties to the previous dialogues. ### Supported Tasks and Leaderboards 'summarization': The dataset can be used to train a model for abstractive summarization. A fast abstractive summarization-RL model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18. ### Languages The text in the dataset is in English, as spoken by actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game. Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. * 'chunk': a 'string' feature. * 'chunk\_id': a 'int32' feature. * 'turn\_start': a 'int32' feature. * 'turn\_end': a 'int32' feature. * 'alignment\_score': a 'float32' feature. * 'turn\_num': a 'int32' feature. * 'turns': a dictionary feature containing: + 'names': a 'string' feature. + 'utterances': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale Dialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics. Current paradigms in summarization modeling have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain of long, story-telling dialogue. CRD3 offers a linguistically rich dataset to explore these domains. ### Source Data #### Initial Data Collection and Normalization Dungeons and Dragons is a popular roleplaying game that is driven by structured storytelling. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons. This dataset consists of 159 episodes of the show, where the episodes are transcribed. Inconsistencies (e.g. spelling of speaker names) were manually resolved. The abstractive summaries were collected from the Critical Role Fandom wiki #### Who are the source language producers? 
The language producers are actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game. ### Annotations #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information [N/A] Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators CRTranscript provided transcripts of the show; contributors of the Critical Role Wiki provided the abstractive summaries. ### Licensing Information This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa-4.0]., as corresponding to the Critical Role Wiki URL ### Contributions Thanks to @thomwolf, @lhoestq, @mariamabarham, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nStorytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.\nCritical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.\nThe dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding\nabstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player\ncollaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,\nand semantic ties to the previous dialogues.", "### Supported Tasks and Leaderboards\n\n\n'summarization': The dataset can be used to train a model for abstractive summarization. A fast abstractive summarization-RL model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18.", "### Languages\n\n\nThe text in the dataset is in English, as spoken by actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'chunk': a 'string' feature.\n* 'chunk\\_id': a 'int32' feature.\n* 'turn\\_start': a 'int32' feature.\n* 'turn\\_end': a 'int32' feature.\n* 'alignment\\_score': a 'float32' feature.\n* 'turn\\_num': a 'int32' feature.\n* 'turns': a dictionary feature containing:\n\t+ 'names': a 'string' feature.\n\t+ 'utterances': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nDialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics. Current paradigms in summarization modeling have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain of long, story-telling dialogue. CRD3 offers a linguistically rich dataset to explore these domains.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nDungeons and Dragons is a popular roleplaying game that is driven by structured storytelling. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons. This dataset consists of 159 episodes of the show, where the episodes are transcribed. Inconsistencies (e.g. 
spelling of speaker names) were manually resolved.\n\n\nThe abstractive summaries were collected from the Critical Role Fandom wiki", "#### Who are the source language producers?\n\n\nThe language producers are actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.", "### Annotations", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\n[N/A]\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nCRTranscript provided transcripts of the show; contributors of the Critical Role Wiki provided the abstractive summaries.", "### Licensing Information\n\n\nThis work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa-4.0]., as corresponding to the Critical Role Wiki URL", "### Contributions\n\n\nThanks to @thomwolf, @lhoestq, @mariamabarham, @lewtun for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nStorytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.\nCritical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.\nThe dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding\nabstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player\ncollaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,\nand semantic ties to the previous dialogues.", "### Supported Tasks and Leaderboards\n\n\n'summarization': The dataset can be used to train a model for abstractive summarization. A fast abstractive summarization-RL model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18.", "### Languages\n\n\nThe text in the dataset is in English, as spoken by actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'chunk': a 'string' feature.\n* 'chunk\\_id': a 'int32' feature.\n* 'turn\\_start': a 'int32' feature.\n* 'turn\\_end': a 'int32' feature.\n* 'alignment\\_score': a 'float32' feature.\n* 'turn\\_num': a 'int32' feature.\n* 'turns': a dictionary feature containing:\n\t+ 'names': a 'string' feature.\n\t+ 'utterances': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nDialogue understanding and abstractive summarization remain both important and challenging problems for computational linguistics. Current paradigms in summarization modeling have specific failures in capturing semantics and pragmatics, content selection, rewriting, and evaluation in the domain of long, story-telling dialogue. CRD3 offers a linguistically rich dataset to explore these domains.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nDungeons and Dragons is a popular roleplaying game that is driven by structured storytelling. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons. This dataset consists of 159 episodes of the show, where the episodes are transcribed. Inconsistencies (e.g. 
spelling of speaker names) were manually resolved.\n\n\nThe abstractive summaries were collected from the Critical Role Fandom wiki", "#### Who are the source language producers?\n\n\nThe language producers are actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.", "### Annotations", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\n[N/A]\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nCRTranscript provided transcripts of the show; contributors of the Critical Role Wiki provided the abstractive summaries.", "### Licensing Information\n\n\nThis work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa-4.0]., as corresponding to the Critical Role Wiki URL", "### Contributions\n\n\nThanks to @thomwolf, @lhoestq, @mariamabarham, @lewtun for adding this dataset." ]
[ 117, 175, 66, 69, 18, 150, 11, 87, 4, 122, 59, 5, 10, 14, 23, 7, 8, 14, 33, 44, 32 ]
[ "passage: TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #region-us \n### Dataset Summary\n\n\nStorytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset.\nCritical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game.\nThe dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding\nabstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player\ncollaboration and spoken interaction. For each dialogue, there are a large number of turns, multiple abstractive summaries with varying levels of detail,\nand semantic ties to the previous dialogues.### Supported Tasks and Leaderboards\n\n\n'summarization': The dataset can be used to train a model for abstractive summarization. A fast abstractive summarization-RL model was presented as a baseline, which achieves ROUGE-L-F1 of 25.18.### Languages\n\n\nThe text in the dataset is in English, as spoken by actors on The Critical Role show, which is a weekly unscripted, live-stream of a fixed group of people playing Dungeons and Dragons, a popular role-playing game.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows." ]
0bbf9e43bda9036b4d48d7af53a7a8dbc01bcf59
# Dataset Card for "crime_and_punish" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.gutenberg.org/files/2554/2554-h/2554-h.htm](https://www.gutenberg.org/files/2554/2554-h/2554-h.htm) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.21 MB - **Size of the generated dataset:** 1.27 MB - **Total amount of disk used:** 2.47 MB ### Dataset Summary ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### crime-and-punish - **Size of downloaded dataset files:** 1.21 MB - **Size of the generated dataset:** 1.27 MB - **Total amount of disk used:** 2.47 MB An example of 'train' looks as follows. ``` { "line": "CRIME AND PUNISHMENT\n" } ``` ### Data Fields The data fields are the same among all splits. #### crime-and-punish - `line`: a `string` feature. ### Data Splits | name |train| |----------------|----:| |crime-and-punish|21969| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
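Since the card itself is mostly a stub, a short loading sketch may help. It assumes the Hub id `crime_and_punish` recorded for this dataset, with the single `train` split and the `line` field listed in the tables above.

```python
from datasets import load_dataset

# Minimal sketch: the book is exposed as one line of text per example.
cap = load_dataset("crime_and_punish")

print(cap["train"].num_rows)     # 21,969 lines in the single train split
print(cap["train"][0]["line"])   # "CRIME AND PUNISHMENT\n"
```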
crime_and_punish
[ "language:en", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "pretty_name": "CrimeAndPunish", "dataset_info": {"features": [{"name": "line", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1270540, "num_examples": 21969}], "download_size": 1201735, "dataset_size": 1270540}}
2023-04-05T09:02:51+00:00
[]
[ "en" ]
TAGS #language-English #region-us
Dataset Card for "crime\_and\_punish" ===================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 1.21 MB * Size of the generated dataset: 1.27 MB * Total amount of disk used: 2.47 MB ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### crime-and-punish * Size of downloaded dataset files: 1.21 MB * Size of the generated dataset: 1.27 MB * Total amount of disk used: 2.47 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### crime-and-punish * 'line': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @patrickvonplaten, @thomwolf for adding this dataset.
[ "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### crime-and-punish\n\n\n* Size of downloaded dataset files: 1.21 MB\n* Size of the generated dataset: 1.27 MB\n* Total amount of disk used: 2.47 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### crime-and-punish\n\n\n* 'line': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf for adding this dataset." ]
[ "TAGS\n#language-English #region-us \n", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### crime-and-punish\n\n\n* Size of downloaded dataset files: 1.21 MB\n* Size of the generated dataset: 1.27 MB\n* Total amount of disk used: 2.47 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### crime-and-punish\n\n\n* 'line': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf for adding this dataset." ]
[ 10, 6, 10, 11, 6, 54, 17, 19, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 24 ]
[ "passage: TAGS\n#language-English #region-us \n### Dataset Summary### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### crime-and-punish\n\n\n* Size of downloaded dataset files: 1.21 MB\n* Size of the generated dataset: 1.27 MB\n* Total amount of disk used: 2.47 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### crime-and-punish\n\n\n* 'line': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf for adding this dataset." ]
b9c986f7facc268b3e4c6e2127335d67e7a3f206
# Dataset Card for CrowS-Pairs ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]() - **Repository:** https://github.com/nyu-mll/crows-pairs - **Paper:** https://aclanthology.org/2020.emnlp-main.154 - **Leaderboard:** [If the dataset supports an active leaderboard, add link here]() - **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]() ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CrowS-Pairs is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/). It is created using prompts taken from the [ROCStories corpora](https://cs.rochester.edu/nlp/rocstories/) and the fiction part of [MNLI](https://cims.nyu.edu/~sbowman/multinli/). Please refer to their papers for more details. 
### Citation Information ``` @inproceedings{nangia-etal-2020-crows, title = "{C}row{S}-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models", author = "Nangia, Nikita and Vania, Clara and Bhalerao, Rasika and Bowman, Samuel R.", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.emnlp-main.154", doi = "10.18653/v1/2020.emnlp-main.154", pages = "1953--1967", } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
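The sections above are still placeholders, but the feature schema in the accompanying `dataset_info` metadata is enough for a quick, hedged loading sketch. The Hub id `crows_pairs` and the field names `sent_more`, `sent_less`, and `bias_type` are taken from that metadata, not from the card text.

```python
from datasets import load_dataset

# Minimal sketch: each example pairs a more-stereotyping and a less-stereotyping sentence.
crows = load_dataset("crows_pairs")

row = crows["test"][0]  # the dataset ships a single test split
bias = crows["test"].features["bias_type"].int2str(row["bias_type"])
print(bias)                             # e.g. "race-color", "gender", "religion"
print("sent_more:", row["sent_more"])   # sentence that is more stereotyping
print("sent_less:", row["sent_less"])   # minimally edited, less stereotyping sentence
```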
crows_pairs
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "bias-evaluation", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring"], "paperswithcode_id": "crows-pairs", "pretty_name": "CrowS-Pairs", "tags": ["bias-evaluation"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "sent_more", "dtype": "string"}, {"name": "sent_less", "dtype": "string"}, {"name": "stereo_antistereo", "dtype": {"class_label": {"names": {"0": "stereo", "1": "antistereo"}}}}, {"name": "bias_type", "dtype": {"class_label": {"names": {"0": "race-color", "1": "socioeconomic", "2": "gender", "3": "disability", "4": "nationality", "5": "sexual-orientation", "6": "physical-appearance", "7": "religion", "8": "age"}}}}, {"name": "annotations", "sequence": {"sequence": {"class_label": {"names": {"0": "race-color", "1": "socioeconomic", "2": "gender", "3": "disability", "4": "nationality", "5": "sexual-orientation", "6": "physical-appearance", "7": "religion", "8": "age"}}}}}, {"name": "anon_writer", "dtype": "string"}, {"name": "anon_annotators", "sequence": "string"}], "config_name": "crows_pairs", "splits": [{"name": "test", "num_bytes": 419976, "num_examples": 1508}], "download_size": 437764, "dataset_size": 419976}}
2024-01-18T09:49:15+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-sa-4.0 #bias-evaluation #region-us
# Dataset Card for CrowS-Pairs ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]() - Repository: URL - Paper: URL - Leaderboard: [If the dataset supports an active leaderboard, add link here]() - Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]() ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information CrowS-Pairs is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. It is created using prompts taken from the ROCStories corpora and the fiction part of MNLI. Please refer to their papers for more details. ### Contributions Thanks to @patil-suraj for adding this dataset.
[ "# Dataset Card for CrowS-Pairs", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: URL\n- Paper: URL\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]()", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCrowS-Pairs is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.\n\nIt is created using prompts taken from the ROCStories corpora and the fiction part of MNLI. Please refer to their papers for more details.", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-sa-4.0 #bias-evaluation #region-us \n", "# Dataset Card for CrowS-Pairs", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: URL\n- Paper: URL\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]()", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCrowS-Pairs is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.\n\nIt is created using prompts taken from the ROCStories corpora and the fiction part of MNLI. Please refer to their papers for more details.", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ 100, 11, 120, 96, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 59, 19 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-text-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-sa-4.0 #bias-evaluation #region-us \n# Dataset Card for CrowS-Pairs## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: URL\n- Paper: URL\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]()### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators" ]
d672fdefdeb37601c5c7d35c5c6aee9595748ca9
# Dataset Card for Cryptonite

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/aviaefrat/cryptonite)
- **Repository:** [Github](https://github.com/aviaefrat/cryptonite)
- **Paper:** [Arxiv](https://arxiv.org/pdf/2103.01242.pdf)
- **Leaderboard:**
- **Point of Contact:** [Twitter](https://twitter.com/AviaEfrat)

### Dataset Summary

Current NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on par with the accuracy of a rule-based clue solver (8.6%).

### Languages

English

## Dataset Structure

### Data Instances

This is one example from the train set.

```python
{
  'clue': 'make progress socially in stated region (5)',
  'answer': 'climb',
  'date': 971654400000,
  'enumeration': '(5)',
  'id': 'Times-31523-6across',
  'publisher': 'Times',
  'quick': False
}
```

### Data Fields

- `clue`: a string representing the clue provided for the crossword
- `answer`: a string representing the answer to the clue
- `enumeration`: a string representing the enumeration of the answer, i.e. the letter count shown at the end of the clue (e.g. `(5)` for the five-letter answer `climb`)
- `publisher`: a string representing the publisher of the crossword
- `date`: an int64 representing the UNIX timestamp (in milliseconds) of the date of publication of the crossword
- `quick`: a bool representing whether the clue comes from a quick crossword (a crossword aimed at beginners, easier to solve)
- `id`: a string to uniquely identify a given example in the dataset

### Data Splits

Train (470,804 examples), validation (26,156 examples), test (26,157 examples).

## Dataset Creation

### Curation Rationale

The clues are collected from cryptic crosswords published by the Times and the Telegraph.

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Avia Efrat, Uri Shaham, Dan Kilman, Omer Levy ### Licensing Information `cc-by-nc-4.0` ### Citation Information ``` @misc{efrat2021cryptonite, title={Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language}, author={Avia Efrat and Uri Shaham and Dan Kilman and Omer Levy}, year={2021}, eprint={2103.01242}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@theo-m](https://github.com/theo-m) for adding this dataset.
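For quick inspection, a minimal loading sketch with the Hugging Face `datasets` library is shown below. It assumes the dataset is available under the `cryptonite` identifier with the `cryptonite` configuration listed in the metadata; adjust the path if your copy lives elsewhere.

```python
from datasets import load_dataset

# Load the cryptonite configuration (hub identifier assumed; adjust if needed).
dataset = load_dataset("cryptonite", "cryptonite")

# Each record pairs a cryptic clue with its answer, enumeration, and publication metadata.
example = dataset["train"][0]
print(example["clue"], example["enumeration"], "->", example["answer"])

# Split sizes should match the figures above: 470,804 / 26,156 / 26,157.
print({split: subset.num_rows for split, subset in dataset.items()})
```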
cryptonite
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "arxiv:2103.01242", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "pretty_name": "Cryptonite", "config_names": ["cryptonite", "default"], "dataset_info": [{"config_name": "default", "features": [{"name": "agent_info", "sequence": [{"name": "Bottomline", "dtype": "string"}, {"name": "Role", "dtype": "string"}, {"name": "Target", "dtype": "float32"}]}, {"name": "agent_turn", "sequence": "int32"}, {"name": "dialogue_acts", "sequence": [{"name": "intent", "dtype": "string"}, {"name": "price", "dtype": "float32"}]}, {"name": "utterance", "sequence": "string"}, {"name": "items", "sequence": [{"name": "Category", "dtype": "string"}, {"name": "Images", "dtype": "string"}, {"name": "Price", "dtype": "float32"}, {"name": "Description", "dtype": "string"}, {"name": "Title", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 8538836, "num_examples": 5247}, {"name": "test", "num_bytes": 1353933, "num_examples": 838}, {"name": "validation", "num_bytes": 966032, "num_examples": 597}], "download_size": 25373618, "dataset_size": 10858801}, {"config_name": "cryptonite", "features": [{"name": "clue", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "enumeration", "dtype": "string"}, {"name": "publisher", "dtype": "string"}, {"name": "date", "dtype": "int64"}, {"name": "quick", "dtype": "bool"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 52228597, "num_examples": 470804}, {"name": "validation", "num_bytes": 2901768, "num_examples": 26156}, {"name": "test", "num_bytes": 2908275, "num_examples": 26157}], "download_size": 21615952, "dataset_size": 58038640}]}
2024-01-18T09:49:43+00:00
[ "2103.01242" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-4.0 #arxiv-2103.01242 #region-us
# Dataset Card for Cryptonite ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Github - Repository: Github - Paper: Arxiv - Leaderboard: - Point of Contact: Twitter ### Dataset Summary Current NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on par with the accuracy of a rule-based clue solver (8.6%). ### Languages English ## Dataset Structure ### Data Instances This is one example from the train set. ### Data Fields - 'clue': a string representing the clue provided for the crossword - 'answer': a string representing the answer to the clue - 'enumeration': a string representing the - 'publisher': a string representing the publisher of the crossword - 'date': a int64 representing the UNIX timestamp of the date of publication of the crossword - 'quick': a bool representing whether the crossword is quick (a crossword aimed at beginners, easier to solve) - 'id': a string to uniquely identify a given example in the dataset ### Data Splits Train (470,804 examples), validation (26,156 examples), test (26,157 examples). ## Dataset Creation ### Curation Rationale Crosswords from the Times and the Telegraph. ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators Avia Efrat, Uri Shaham, Dan Kilman, Omer Levy ### Licensing Information 'cc-by-nc-4.0' ### Contributions Thanks to @theo-m for adding this dataset.
[ "# Dataset Card for Cryptonite", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Arxiv\n- Leaderboard:\n- Point of Contact: Twitter", "### Dataset Summary\n\nCurrent NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on par with the accuracy of a rule-based clue solver (8.6%).", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nThis is one example from the train set.", "### Data Fields\n\n- 'clue': a string representing the clue provided for the crossword\n- 'answer': a string representing the answer to the clue\n- 'enumeration': a string representing the \n- 'publisher': a string representing the publisher of the crossword\n- 'date': a int64 representing the UNIX timestamp of the date of publication of the crossword\n- 'quick': a bool representing whether the crossword is quick (a crossword aimed at beginners, easier to solve)\n- 'id': a string to uniquely identify a given example in the dataset", "### Data Splits\n\nTrain (470,804 examples), validation (26,156 examples), test (26,157 examples).", "## Dataset Creation", "### Curation Rationale\n\nCrosswords from the Times and the Telegraph.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nAvia Efrat, Uri Shaham, Dan Kilman, Omer Levy", "### Licensing Information\n\n'cc-by-nc-4.0'", "### Contributions\n\nThanks to @theo-m for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-4.0 #arxiv-2103.01242 #region-us \n", "# Dataset Card for Cryptonite", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Arxiv\n- Leaderboard:\n- Point of Contact: Twitter", "### Dataset Summary\n\nCurrent NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on par with the accuracy of a rule-based clue solver (8.6%).", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nThis is one example from the train set.", "### Data Fields\n\n- 'clue': a string representing the clue provided for the crossword\n- 'answer': a string representing the answer to the clue\n- 'enumeration': a string representing the \n- 'publisher': a string representing the publisher of the crossword\n- 'date': a int64 representing the UNIX timestamp of the date of publication of the crossword\n- 'quick': a bool representing whether the crossword is quick (a crossword aimed at beginners, easier to solve)\n- 'id': a string to uniquely identify a given example in the dataset", "### Data Splits\n\nTrain (470,804 examples), validation (26,156 examples), test (26,157 examples).", "## Dataset Creation", "### Curation Rationale\n\nCrosswords from the Times and the Telegraph.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nAvia Efrat, Uri Shaham, Dan Kilman, Omer Levy", "### Licensing Information\n\n'cc-by-nc-4.0'", "### Contributions\n\nThanks to @theo-m for adding this dataset." ]
[ 118, 7, 120, 34, 203, 5, 6, 15, 145, 29, 5, 17, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 22, 15, 18 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-4.0 #arxiv-2103.01242 #region-us \n# Dataset Card for Cryptonite## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Arxiv\n- Leaderboard:\n- Point of Contact: Twitter### Dataset Summary\n\nCurrent NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on par with the accuracy of a rule-based clue solver (8.6%).### Languages\n\nEnglish## Dataset Structure" ]
70b50ebfd7088698189ba1ea34345def9c7ec486
# Dataset Card for Czech Restaurant ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Czech restaurants homepage](https://github.com/UFAL-DSG/cs_restaurant_dataset) - **Paper:** [Czech restaurants on Arxiv](https://arxiv.org/abs/1910.05298) ### Dataset Summary This is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as a translation of the [English San Francisco Restaurants dataset](https://www.repository.cam.ac.uk/handle/1810/251304) by Wen et al. (2015). The domain is restaurant information in Prague, with random/fictional values. It includes input dialogue acts and the corresponding outputs in Czech. ### Supported Tasks and Leaderboards - `other-intent-to-text`: The dataset can be used to train a model for data-to-text generation: from a desired dialogue act, the model must produce textual output that conveys this intention. ### Languages The entire dataset is in Czech, translated from the English San Francisco dataset by professional translators. ## Dataset Structure ### Data Instances Example of a data instance: ``` { "da": "?request(area)", "delex_da": "?request(area)", "text": "Jakou lokalitu hledáte ?", "delex_text": "Jakou lokalitu hledáte ?" } ``` ### Data Fields - `da`: input dialogue act - `delex_da`: input dialogue act, delexicalized - `text`: output text - `delex_text`: output text, delexicalized ### Data Splits The order of the instances is random; the split is roughly 3:1:1 between train, development, and test, ensuring that the different sections don't share the same DAs (so the generators need to generalize to unseen DAs), but they share as many generic different DA types as possible (e.g., confirm, inform_only_match etc.). DA types that only have a single corresponding DA (e.g., bye()) are included in the training set. The training, development, and test sets contain 3569, 781, and 842 instances, respectively. ## Dataset Creation ### Curation Rationale While most current neural NLG systems do not explicitly contain language-specific components and are thus capable of multilingual generation in principle, there has been little work to test these capabilities experimentally. This goes hand in hand with the scarcity of non-English training datasets for NLG – the only data-to-text NLG set known to us is a small sportscasting Korean dataset (Chen et al., 2010), which only contains a limited number of named entities, reducing the need for their inflection. 
Since most generators are only tested on English, they do not need to handle grammar complexities not present in English. A prime example is the delexicalization technique used by most current generators. We create a novel dataset for Czech delexicalized generation; this extends the typical task of data-to-text NLG by requiring attribute value inflection. We choose Czech as an example of a morphologically complex language with a large set of NLP tools readily available. ### Source Data #### Initial Data Collection and Normalization The original data was collected from the [English San Francisco Restaurants dataset](https://www.repository.cam.ac.uk/handle/1810/251304) by Wen et al. (2015). #### Who are the source language producers? The original data was produced in interactions between Amazon Mechanical Turk workers and themed around San Francisco restaurants. This data was then translated into Czech and localized to Prague restaurants by professional translators. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information This data does not contain personal information. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Ondřej Dušek, Filip Jurčíček, Josef Dvořák, Petra Grycová, Matěj Hejda, Jana Olivová, Michal Starý, Eva Štichová, Charles University. This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 333, and GAUK grant 2058214 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071). ### Licensing Information [Creative Commons 4.0 BY-SA](https://creativecommons.org/licenses/by-sa/4.0/) ### Citation Information ``` @article{DBLP:journals/corr/abs-1910-05298, author = {Ondrej Dusek and Filip Jurcicek}, title = {Neural Generation for Czech: Data and Baselines}, journal = {CoRR}, volume = {abs/1910.05298}, year = {2019}, url = {http://arxiv.org/abs/1910.05298}, archivePrefix = {arXiv}, eprint = {1910.05298}, timestamp = {Wed, 16 Oct 2019 16:25:53 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-05298.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.
cs_restaurants
[ "task_categories:text2text-generation", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:found", "language_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-san-francisco-restaurants", "language:cs", "license:cc-by-4.0", "intent-to-text", "arxiv:1910.05298", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["expert-generated", "machine-generated"], "language": ["cs"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|other-san-francisco-restaurants"], "task_categories": ["text2text-generation", "text-generation", "fill-mask"], "task_ids": ["dialogue-modeling", "language-modeling", "masked-language-modeling"], "paperswithcode_id": "czech-restaurant-information", "pretty_name": "Czech Restaurant", "tags": ["intent-to-text"], "dataset_info": {"features": [{"name": "dialogue_act", "dtype": "string"}, {"name": "delexicalized_dialogue_act", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "delexicalized_text", "dtype": "string"}], "config_name": "CSRestaurants", "splits": [{"name": "train", "num_bytes": 654071, "num_examples": 3569}, {"name": "validation", "num_bytes": 181528, "num_examples": 781}, {"name": "test", "num_bytes": 191334, "num_examples": 842}], "download_size": 1463019, "dataset_size": 1026933}}
2024-01-18T09:50:17+00:00
[ "1910.05298" ]
[ "cs" ]
TAGS #task_categories-text2text-generation #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-san-francisco-restaurants #language-Czech #license-cc-by-4.0 #intent-to-text #arxiv-1910.05298 #region-us
# Dataset Card for Czech Restaurant ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: Czech restaurants homepage - Paper: Czech restaurants on Arxiv ### Dataset Summary This is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as a translation of the English San Francisco Restaurants dataset by Wen et al. (2015). The domain is restaurant information in Prague, with random/fictional values. It includes input dialogue acts and the corresponding outputs in Czech. ### Supported Tasks and Leaderboards - 'other-intent-to-text': The dataset can be used to train a model for data-to-text generation: from a desired dialogue act, the model must produce textual output that conveys this intention. ### Languages The entire dataset is in Czech, translated from the English San Francisco dataset by professional translators. ## Dataset Structure ### Data Instances Example of a data instance: ### Data Fields - 'da': input dialogue act - 'delex_da': input dialogue act, delexicalized - 'text': output text - 'delex_text': output text, delexicalized ### Data Splits The order of the instances is random; the split is roughly 3:1:1 between train, development, and test, ensuring that the different sections don't share the same DAs (so the generators need to generalize to unseen DAs), but they share as many generic different DA types as possible (e.g., confirm, inform_only_match etc.). DA types that only have a single corresponding DA (e.g., bye()) are included in the training set. The training, development, and test set contain 3569, 781, and 842 instances, respectively. ## Dataset Creation ### Curation Rationale While most current neural NLG systems do not explicitly contain language-specific components and are thus capable of multilingual generation in principle, there has been little work to test these capabilities experimentally. This goes hand in hand with the scarcity of non-English training datasets for NLG – the only data-to-text NLG set known to us is a small sportscasting Korean dataset (Chenet al., 2010), which only contains a limited number of named entities, reducing the need for their inflection. Since most generators are only tested on English, they do not need to handle grammar complexities not present in English. A prime example is the delexicalization technique used by most current generators. We create a novel dataset for Czech delexicalized generation; this extends the typical task of data-to-text NLG by requiring attribute value inflection. We choose Czech as an example of a morphologically complex language with a large set of NLP tools readily available. ### Source Data #### Initial Data Collection and Normalization The original data was collected from the English San Francisco Restaurants dataset by Wen et al. (2015). #### Who are the source language producers? The original data was produced in interactions between Amazon Mechanical Turk workers and themed around San Francisco restaurants. 
This data was then translated into Czech and localized to Prague restaurants by professional translators. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information This data does not contain personal information. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators Ondřej Dušek, Filip Jurčíček, Josef Dvořák, Petra Grycová, Matěj Hejda, Jana Olivová, Michal Starý, Eva Štichová, Charles University. This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 333, and GAUK grant 2058214 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071). ### Licensing Information Creative Commons 4.0 BY-SA ### Contributions Thanks to @TevenLeScao for adding this dataset.
[ "# Dataset Card for Czech Restaurant", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: Czech restaurants homepage\n- Paper: Czech restaurants on Arxiv", "### Dataset Summary\n\nThis is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as a translation of the English San Francisco Restaurants dataset by Wen et al. (2015). The domain is restaurant information in Prague, with random/fictional values. It includes input dialogue acts and the corresponding outputs in Czech.", "### Supported Tasks and Leaderboards\n\n- 'other-intent-to-text': The dataset can be used to train a model for data-to-text generation: from a desired dialogue act, the model must produce textual output that conveys this intention.", "### Languages\n\nThe entire dataset is in Czech, translated from the English San Francisco dataset by professional translators.", "## Dataset Structure", "### Data Instances\n\nExample of a data instance:", "### Data Fields\n\n- 'da': input dialogue act\n- 'delex_da': input dialogue act, delexicalized\n- 'text': output text\n- 'delex_text': output text, delexicalized", "### Data Splits\n\nThe order of the instances is random; the split is roughly 3:1:1 between train, development, and test, ensuring that the different sections don't share the same DAs (so the generators need to generalize to unseen DAs), but they share as many generic different DA types as possible (e.g., confirm, inform_only_match etc.). DA types that only have a single corresponding DA (e.g., bye()) are included in the training set.\n\nThe training, development, and test set contain 3569, 781, and 842 instances, respectively.", "## Dataset Creation", "### Curation Rationale\n\nWhile most current neural NLG systems do not explicitly contain language-specific components and are thus capable of multilingual generation in principle, there has been little work to test these capabilities experimentally. This goes hand in hand with the scarcity of non-English training datasets for NLG – the only data-to-text NLG set known to us is a small sportscasting Korean dataset (Chenet al., 2010), which only contains a limited number of named entities, reducing the need for their inflection. Since most generators are only tested on English, they do not need to handle grammar complexities not present in English. A prime example is the delexicalization technique used by most current generators. We create a novel dataset for Czech delexicalized generation; this extends the typical task of data-to-text NLG by requiring attribute value inflection. We choose Czech as an example of a morphologically complex language with a large set of NLP tools readily available.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe original data was collected from the English San Francisco Restaurants dataset by Wen et al. 
(2015).", "#### Who are the source language producers?\n\nThe original data was produced in interactions between Amazon Mechanical Turk workers and themed around San Francisco restaurants. This data was then translated into Czech and localized to Prague restaurants by professional translators.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nThis data does not contain personal information.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nOndřej Dušek, Filip Jurčíček, Josef Dvořák, Petra Grycová, Matěj Hejda, Jana Olivová, Michal Starý, Eva Štichová, Charles University. This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 333, and GAUK grant 2058214 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).", "### Licensing Information\n\nCreative Commons 4.0 BY-SA", "### Contributions\n\nThanks to @TevenLeScao for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-san-francisco-restaurants #language-Czech #license-cc-by-4.0 #intent-to-text #arxiv-1910.05298 #region-us \n", "# Dataset Card for Czech Restaurant", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: Czech restaurants homepage\n- Paper: Czech restaurants on Arxiv", "### Dataset Summary\n\nThis is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as a translation of the English San Francisco Restaurants dataset by Wen et al. (2015). The domain is restaurant information in Prague, with random/fictional values. It includes input dialogue acts and the corresponding outputs in Czech.", "### Supported Tasks and Leaderboards\n\n- 'other-intent-to-text': The dataset can be used to train a model for data-to-text generation: from a desired dialogue act, the model must produce textual output that conveys this intention.", "### Languages\n\nThe entire dataset is in Czech, translated from the English San Francisco dataset by professional translators.", "## Dataset Structure", "### Data Instances\n\nExample of a data instance:", "### Data Fields\n\n- 'da': input dialogue act\n- 'delex_da': input dialogue act, delexicalized\n- 'text': output text\n- 'delex_text': output text, delexicalized", "### Data Splits\n\nThe order of the instances is random; the split is roughly 3:1:1 between train, development, and test, ensuring that the different sections don't share the same DAs (so the generators need to generalize to unseen DAs), but they share as many generic different DA types as possible (e.g., confirm, inform_only_match etc.). DA types that only have a single corresponding DA (e.g., bye()) are included in the training set.\n\nThe training, development, and test set contain 3569, 781, and 842 instances, respectively.", "## Dataset Creation", "### Curation Rationale\n\nWhile most current neural NLG systems do not explicitly contain language-specific components and are thus capable of multilingual generation in principle, there has been little work to test these capabilities experimentally. This goes hand in hand with the scarcity of non-English training datasets for NLG – the only data-to-text NLG set known to us is a small sportscasting Korean dataset (Chenet al., 2010), which only contains a limited number of named entities, reducing the need for their inflection. Since most generators are only tested on English, they do not need to handle grammar complexities not present in English. A prime example is the delexicalization technique used by most current generators. 
We create a novel dataset for Czech delexicalized generation; this extends the typical task of data-to-text NLG by requiring attribute value inflection. We choose Czech as an example of a morphologically complex language with a large set of NLP tools readily available.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe original data was collected from the English San Francisco Restaurants dataset by Wen et al. (2015).", "#### Who are the source language producers?\n\nThe original data was produced in interactions between Amazon Mechanical Turk workers and themed around San Francisco restaurants. This data was then translated into Czech and localized to Prague restaurants by professional translators.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nThis data does not contain personal information.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nOndřej Dušek, Filip Jurčíček, Josef Dvořák, Petra Grycová, Matěj Hejda, Jana Olivová, Michal Starý, Eva Štichová, Charles University. This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 333, and GAUK grant 2058214 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).", "### Licensing Information\n\nCreative Commons 4.0 BY-SA", "### Contributions\n\nThanks to @TevenLeScao for adding this dataset." ]
[ 180, 7, 120, 22, 81, 61, 28, 6, 13, 51, 140, 5, 233, 4, 31, 53, 5, 5, 9, 16, 8, 7, 8, 7, 5, 142, 12, 19 ]
[ "passage: TAGS\n#task_categories-text2text-generation #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-san-francisco-restaurants #language-Czech #license-cc-by-4.0 #intent-to-text #arxiv-1910.05298 #region-us \n# Dataset Card for Czech Restaurant## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Repository: Czech restaurants homepage\n- Paper: Czech restaurants on Arxiv### Dataset Summary\n\nThis is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as a translation of the English San Francisco Restaurants dataset by Wen et al. (2015). The domain is restaurant information in Prague, with random/fictional values. It includes input dialogue acts and the corresponding outputs in Czech.### Supported Tasks and Leaderboards\n\n- 'other-intent-to-text': The dataset can be used to train a model for data-to-text generation: from a desired dialogue act, the model must produce textual output that conveys this intention.### Languages\n\nThe entire dataset is in Czech, translated from the English San Francisco dataset by professional translators.## Dataset Structure", "passage: ### Data Instances\n\nExample of a data instance:### Data Fields\n\n- 'da': input dialogue act\n- 'delex_da': input dialogue act, delexicalized\n- 'text': output text\n- 'delex_text': output text, delexicalized### Data Splits\n\nThe order of the instances is random; the split is roughly 3:1:1 between train, development, and test, ensuring that the different sections don't share the same DAs (so the generators need to generalize to unseen DAs), but they share as many generic different DA types as possible (e.g., confirm, inform_only_match etc.). DA types that only have a single corresponding DA (e.g., bye()) are included in the training set.\n\nThe training, development, and test set contain 3569, 781, and 842 instances, respectively.## Dataset Creation### Curation Rationale\n\nWhile most current neural NLG systems do not explicitly contain language-specific components and are thus capable of multilingual generation in principle, there has been little work to test these capabilities experimentally. This goes hand in hand with the scarcity of non-English training datasets for NLG – the only data-to-text NLG set known to us is a small sportscasting Korean dataset (Chenet al., 2010), which only contains a limited number of named entities, reducing the need for their inflection. Since most generators are only tested on English, they do not need to handle grammar complexities not present in English. A prime example is the delexicalization technique used by most current generators. 
We create a novel dataset for Czech delexicalized generation; this extends the typical task of data-to-text NLG by requiring attribute value inflection. We choose Czech as an example of a morphologically complex language with a large set of NLP tools readily available.### Source Data#### Initial Data Collection and Normalization\n\nThe original data was collected from the English San Francisco Restaurants dataset by Wen et al. (2015)." ]
4b04fdde00e112cb54733513703e809ff625fd51
# Dataset Card for CUAD ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad) - **Repository:** [Contract Understanding Atticus Dataset](https://github.com/TheAtticusProject/cuad/) - **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268) - **Point of Contact:** [Atticus Project Team](info@atticusprojectai.org) ### Dataset Summary Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset contains samples in English only. ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` This example was too long and was cropped: { "answers": { "answer_start": [44], "text": ['DISTRIBUTOR AGREEMENT'] }, "context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...', "id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0", "question": "Highlight the parts (if any) of this contract related to "Document Name" that should be reviewed by a lawyer. Details: The name of the contract", "title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT" } ``` ### Data Fields - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits This dataset is split into train/test set. 
Number of samples in each set is given below: | | Train | Test | | ----- | ------ | ---- | | CUAD | 22450 | 4182 | ## Dataset Creation ### Curation Rationale A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring. Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies. To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, they introduce CUAD, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack. ### Source Data #### Initial Data Collection and Normalization The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet. Type of Contracts: # of Docs Affiliate Agreement: 10 Agency Agreement: 13 Collaboration/Cooperation Agreement: 26 Co-Branding Agreement: 22 Consulting Agreement: 11 Development Agreement: 29 Distributor Agreement: 32 Endorsement Agreement: 24 Franchise Agreement: 15 Hosting Agreement: 20 IP Agreement: 17 Joint Venture Agreement: 23 License Agreement: 33 Maintenance Agreement: 34 Manufacturing Agreement: 17 Marketing Agreement: 17 Non-Compete/No-Solicit/Non-Disparagement Agreement: 3 Outsourcing Agreement: 18 Promotion Agreement: 12 Reseller Agreement: 12 Service Agreement: 28 Sponsorship Agreement: 31 Supply Agreement: 18 Strategic Alliance Agreement: 32 Transportation Agreement: 13 TOTAL: 510 #### Who are the source language producers? The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). 
Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD. ### Annotations #### Annotation process The labeling process included multiple steps to ensure accuracy: 1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours. 2. Law Student Label: law students conducted manual contract review and labeling in eBrevia. 3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step. 4. Category-by-Category Report Review: law students exported the labeled clauses into reports, review each clause category-by-category and highlight clauses that they believe are mislabeled. 5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly. 6. eBrevia Extras Review. Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process is repeated until all or substantially all of the “extras” are incorrect labels. 7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer. #### Who are the annotators? Answered in above section. ### Personal and Sensitive Information Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*) or underscores (\_\_\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”). For any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”. For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”. Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows: THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION. 
Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category. To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.” Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Attorney Advisors Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu Law Student Leaders John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran Law Student Contributors Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin Technical Advisors & Contributors Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen ### Licensing Information CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use. The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR. Privacy Policy & Disclaimers The categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@atticusprojectai.org. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved. The use of CUAD is subject to their privacy policy https://www.atticusprojectai.org/privacy-policy and disclaimer https://www.atticusprojectai.org/disclaimer. 
### Citation Information ``` @article{hendrycks2021cuad, title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review}, author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball}, journal={arXiv preprint arXiv:2103.06268}, year={2021} } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
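As a usage sketch (not part of the original card), the snippet below loads CUAD with the Hugging Face `datasets` library, assuming the `cuad` hub identifier, and walks the SQuAD-style extractive QA fields described above.

```python
from datasets import load_dataset

# Hub identifier assumed; CUAD follows the SQuAD-style schema documented in Data Fields.
dataset = load_dataset("cuad")

example = dataset["train"][0]
print(example["title"])
print(example["question"])

# Answers are spans into `context`: parallel lists of answer text and character offsets.
for text, start in zip(example["answers"]["text"], example["answers"]["answer_start"]):
    print(start, text[:80])

# Expected split sizes: 22,450 train / 4,182 test.
print({split: subset.num_rows for split, subset in dataset.items()})
```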
cuad
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2103.06268", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa", "extractive-qa"], "paperswithcode_id": "cuad", "pretty_name": "CUAD", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 1466037640, "num_examples": 22450}, {"name": "test", "num_bytes": 198543467, "num_examples": 4182}], "download_size": 18309308, "dataset_size": 1664581107}, "train-eval-index": [{"config": "default", "task": "question-answering", "task_id": "extractive_question_answering", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"question": "question", "context": "context", "answers": {"text": "text", "answer_start": "answer_start"}}, "metrics": [{"type": "cuad", "name": "CUAD"}]}]}
2024-01-18T09:51:23+00:00
[ "2103.06268" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-closed-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2103.06268 #region-us
Dataset Card for CUAD ===================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Contract Understanding Atticus Dataset * Repository: Contract Understanding Atticus Dataset * Paper: CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review * Point of Contact: Atticus Project Team ### Dataset Summary Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at URL Code for replicating the results and the trained model can be found at URL ### Supported Tasks and Leaderboards ### Languages The dataset contains samples in English only. Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows. ### Data Fields * 'id': a 'string' feature. * 'title': a 'string' feature. * 'context': a 'string' feature. * 'question': a 'string' feature. * 'answers': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. ### Data Splits This dataset is split into train/test set. Number of samples in each set is given below: Train: CUAD, Test: 22450 Dataset Creation ---------------- ### Curation Rationale A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring. Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies. To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. 
As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack. ### Source Data #### Initial Data Collection and Normalization The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet. Type of Contracts: # of Docs ``` Affiliate Agreement: 10 Agency Agreement: 13 Collaboration/Cooperation Agreement: 26 Co-Branding Agreement: 22 Consulting Agreement: 11 Development Agreement: 29 Distributor Agreement: 32 Endorsement Agreement: 24 Franchise Agreement: 15 Hosting Agreement: 20 IP Agreement: 17 Joint Venture Agreemen: 23 License Agreement: 33 Maintenance Agreement: 34 Manufacturing Agreement: 17 Marketing Agreement: 17 Non-Compete/No-Solicit/Non-Disparagement Agreement: 3 Outsourcing Agreement: 18 Promotion Agreement: 12 Reseller Agreement: 12 Service Agreement: 28 Sponsorship Agreement: 31 Supply Agreement: 18 Strategic Alliance Agreement: 32 Transportation Agreement: 13 TOTAL: 510 ``` #### Who are the source language producers? The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at URL Please read the Datasheet at URL for information on the intended use and limitations of the CUAD. ### Annotations #### Annotation process The labeling process included multiple steps to ensure accuracy: 1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours. 2. Law Student Label: law students conducted manual contract review and labeling in eBrevia. 3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step. 4. Category-by-Category Report Review: law students exported the labeled clauses into reports, review each clause category-by-category and highlight clauses that they believe are mislabeled. 5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly. 6. eBrevia Extras Review. Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that eBrevia AI tool identified as responsive to a category but not labeled by human annotators. 
Attorneys and students reviewed all of the “extras” and added the correct ones. The process is repeated until all or substantially all of the “extras” are incorrect labels. 7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer. #### Who are the annotators? Answered in above section. ### Personal and Sensitive Information Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*) or underscores (\_\_\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”). For any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”. For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”. Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows: THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [\* \* \*] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION. Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category. To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 This Agreement is effective as of the date written above.” Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files. 
Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators Attorney Advisors Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu Law Student Leaders John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran Law Student Contributors Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin Technical Advisors & Contributors Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen ### Licensing Information CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use. The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR. Privacy Policy & Disclaimers The categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@URL. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved. The use of CUAD is subject to their privacy policy URL and disclaimer URL ### Contributions Thanks to @bhavitvyamalik for adding this dataset.
[ "### Dataset Summary\n\n\nContract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions.\n\n\nCUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at URL Code for replicating the results and the trained model can be found at URL", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset contains samples in English only.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits\n\n\nThis dataset is split into train/test set. Number of samples in each set is given below:\n\n\nTrain: CUAD, Test: 22450\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nA highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.\n\n\nContract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.\n\n\nTo reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. 
This makes the task a matter of finding needles in a haystack.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.\n\n\nType of Contracts: # of Docs\n\n\n\n```\nAffiliate Agreement:\t\t10\nAgency Agreement:\t\t 13\nCollaboration/Cooperation Agreement: 26\nCo-Branding Agreement:\t\t22\nConsulting Agreement:\t\t11\nDevelopment Agreement:\t\t29\nDistributor Agreement:\t\t32\nEndorsement Agreement:\t\t24\nFranchise Agreement:\t\t15\nHosting Agreement:\t\t20\nIP Agreement:\t\t\t17\nJoint Venture Agreemen:\t\t23\nLicense Agreement:\t\t33\nMaintenance Agreement:\t\t34\nManufacturing Agreement:\t17\nMarketing Agreement:\t\t17\nNon-Compete/No-Solicit/Non-Disparagement Agreement: 3\nOutsourcing Agreement:\t\t18\nPromotion Agreement:\t\t12\nReseller Agreement:\t\t12\nService Agreement:\t\t28\nSponsorship Agreement:\t\t31\nSupply Agreement:\t\t18\nStrategic Alliance Agreement:\t32\nTransportation Agreement:\t13\nTOTAL:\t\t\t\t510\n\n```", "#### Who are the source language producers?\n\n\nThe contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at URL Please read the Datasheet at URL for information on the intended use and limitations of the CUAD.", "### Annotations", "#### Annotation process\n\n\nThe labeling process included multiple steps to ensure accuracy:\n\n\n1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.\n2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.\n3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step.\n4. Category-by-Category Report Review: law students exported the labeled clauses into reports, review each clause category-by-category and highlight clauses that they believe are mislabeled.\n5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.\n6. eBrevia Extras Review. Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process is repeated until all or substantially all of the “extras” are incorrect labels.\n7. Final Report: The final report was exported into a CSV file. 
Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.", "#### Who are the annotators?\n\n\nAnswered in above section.", "### Personal and Sensitive Information\n\n\nSome clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\\*\\*\\*) or underscores (\\_\\_\\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \\_\\_ 2020” would be “1/[]/2020”).\n\n\nFor any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”.\n\n\nFor the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”.\n\n\nSome sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:\n\n\nTHIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [\\* \\* \\*] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.\n\n\nSome sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.\n\n\nTo address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol \"\" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol \"”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 This Agreement is effective as of the date written above.”\n\n\nBecause the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nAttorney Advisors\nWei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. 
Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu\n\n\nLaw Student Leaders\nJohn Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran\n\n\nLaw Student Contributors\nScott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin\n\n\nTechnical Advisors & Contributors\nDan Hendrycks, Collin Burns, Spencer Ball, Anya Chen", "### Licensing Information\n\n\nCUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.\n\n\nThe creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.\nPrivacy Policy & Disclaimers\n\n\nThe categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@URL. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved.\n\n\nThe use of CUAD is subject to their privacy policy URL and disclaimer URL", "### Contributions\n\n\nThanks to @bhavitvyamalik for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-closed-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2103.06268 #region-us \n", "### Dataset Summary\n\n\nContract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions.\n\n\nCUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at URL Code for replicating the results and the trained model can be found at URL", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset contains samples in English only.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits\n\n\nThis dataset is split into train/test set. Number of samples in each set is given below:\n\n\nTrain: CUAD, Test: 22450\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nA highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.\n\n\nContract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.\n\n\nTo reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. 
For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.\n\n\nType of Contracts: # of Docs\n\n\n\n```\nAffiliate Agreement:\t\t10\nAgency Agreement:\t\t 13\nCollaboration/Cooperation Agreement: 26\nCo-Branding Agreement:\t\t22\nConsulting Agreement:\t\t11\nDevelopment Agreement:\t\t29\nDistributor Agreement:\t\t32\nEndorsement Agreement:\t\t24\nFranchise Agreement:\t\t15\nHosting Agreement:\t\t20\nIP Agreement:\t\t\t17\nJoint Venture Agreemen:\t\t23\nLicense Agreement:\t\t33\nMaintenance Agreement:\t\t34\nManufacturing Agreement:\t17\nMarketing Agreement:\t\t17\nNon-Compete/No-Solicit/Non-Disparagement Agreement: 3\nOutsourcing Agreement:\t\t18\nPromotion Agreement:\t\t12\nReseller Agreement:\t\t12\nService Agreement:\t\t28\nSponsorship Agreement:\t\t31\nSupply Agreement:\t\t18\nStrategic Alliance Agreement:\t32\nTransportation Agreement:\t13\nTOTAL:\t\t\t\t510\n\n```", "#### Who are the source language producers?\n\n\nThe contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at URL Please read the Datasheet at URL for information on the intended use and limitations of the CUAD.", "### Annotations", "#### Annotation process\n\n\nThe labeling process included multiple steps to ensure accuracy:\n\n\n1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.\n2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.\n3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step.\n4. Category-by-Category Report Review: law students exported the labeled clauses into reports, review each clause category-by-category and highlight clauses that they believe are mislabeled.\n5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.\n6. eBrevia Extras Review. Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process is repeated until all or substantially all of the “extras” are incorrect labels.\n7. Final Report: The final report was exported into a CSV file. 
Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.", "#### Who are the annotators?\n\n\nAnswered in above section.", "### Personal and Sensitive Information\n\n\nSome clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\\*\\*\\*) or underscores (\\_\\_\\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \\_\\_ 2020” would be “1/[]/2020”).\n\n\nFor any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”.\n\n\nFor the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”.\n\n\nSome sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:\n\n\nTHIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [\\* \\* \\*] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.\n\n\nSome sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.\n\n\nTo address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol \"\" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol \"”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 This Agreement is effective as of the date written above.”\n\n\nBecause the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nAttorney Advisors\nWei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. 
Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu\n\n\nLaw Student Leaders\nJohn Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran\n\n\nLaw Student Contributors\nScott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin\n\n\nTechnical Advisors & Contributors\nDan Hendrycks, Collin Burns, Spencer Ball, Anya Chen", "### Licensing Information\n\n\nCUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.\n\n\nThe creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.\nPrivacy Policy & Disclaimers\n\n\nThe categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@URL. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved.\n\n\nThe use of CUAD is subject to their privacy policy URL and disclaimer URL", "### Contributions\n\n\nThanks to @bhavitvyamalik for adding this dataset." ]
[ 113, 122, 10, 22, 18, 92, 42, 448, 4, 224, 109, 5, 387, 15, 733, 7, 8, 14, 218, 167, 19 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-closed-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2103.06268 #region-us \n### Dataset Summary\n\n\nContract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions.\n\n\nCUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at URL Code for replicating the results and the trained model can be found at URL### Supported Tasks and Leaderboards### Languages\n\n\nThe dataset contains samples in English only.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.### Data Splits\n\n\nThis dataset is split into train/test set. Number of samples in each set is given below:\n\n\nTrain: CUAD, Test: 22450\n\n\nDataset Creation\n----------------", "passage: ### Curation Rationale\n\n\nA highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.\n\n\nContract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.\n\n\nTo reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. 
For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.### Source Data#### Initial Data Collection and Normalization\n\n\nThe CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.\n\n\nType of Contracts: # of Docs\n\n\n\n```\nAffiliate Agreement:\t\t10\nAgency Agreement:\t\t 13\nCollaboration/Cooperation Agreement: 26\nCo-Branding Agreement:\t\t22\nConsulting Agreement:\t\t11\nDevelopment Agreement:\t\t29\nDistributor Agreement:\t\t32\nEndorsement Agreement:\t\t24\nFranchise Agreement:\t\t15\nHosting Agreement:\t\t20\nIP Agreement:\t\t\t17\nJoint Venture Agreemen:\t\t23\nLicense Agreement:\t\t33\nMaintenance Agreement:\t\t34\nManufacturing Agreement:\t17\nMarketing Agreement:\t\t17\nNon-Compete/No-Solicit/Non-Disparagement Agreement: 3\nOutsourcing Agreement:\t\t18\nPromotion Agreement:\t\t12\nReseller Agreement:\t\t12\nService Agreement:\t\t28\nSponsorship Agreement:\t\t31\nSupply Agreement:\t\t18\nStrategic Alliance Agreement:\t32\nTransportation Agreement:\t13\nTOTAL:\t\t\t\t510\n\n```#### Who are the source language producers?\n\n\nThe contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at URL Please read the Datasheet at URL for information on the intended use and limitations of the CUAD.### Annotations", "passage: #### Annotation process\n\n\nThe labeling process included multiple steps to ensure accuracy:\n\n\n1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.\n2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.\n3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step.\n4. Category-by-Category Report Review: law students exported the labeled clauses into reports, review each clause category-by-category and highlight clauses that they believe are mislabeled.\n5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.\n6. eBrevia Extras Review. Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process is repeated until all or substantially all of the “extras” are incorrect labels.\n7. Final Report: The final report was exported into a CSV file. 
Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.#### Who are the annotators?\n\n\nAnswered in above section." ]
98cd6aef80e25b01f029e08dcfd79e3ccc7c20b1
# Dataset Card for Curiosity Dataset

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Curiosity Dataset Homepage](https://www.pedro.ai/curiosity)
- **Repository:** [Curiosity Dataset Repository](https://github.com/facebookresearch/curiosity)
- **Paper:** [ACL Anthology](https://www.aclweb.org/anthology/2020.emnlp-main.655/)
- **Point of Contact:** [Pedro Rodriguez](https://mailhide.io/e/wbfjM)

### Dataset Summary

The Curiosity dataset consists of 14K English dialogs (181K utterances) in which users and assistants converse about geographic topics such as geopolitical entities and locations. The dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages.

### Supported Tasks and Leaderboards

* `text-generation-other-conversational-curiosity`: The dataset can be used to train a model for Conversational Curiosity, which tests the hypothesis that engagement increases when users are presented with facts related to what they already know. Success on this task is typically measured by achieving a *high* [Accuracy](https://huggingface.co/metrics/accuracy) and [F1 Score](https://huggingface.co/metrics/f1).

### Languages

The text in the dataset is in English and was collected via crowdsourcing. The associated BCP-47 code is `en`.

## Dataset Structure

### Data Instances

A typical data point consists of the dialog between a user and an assistant, followed by the attributes of that particular dialog.
An example from the Curiosity Dataset train set looks as follows:

```
{'annotated': 1,
 'aspects': ['Media', 'Politics and government'],
 'assistant_dialog_rating': 5,
 'assistant_id': 341,
 'assistant_other_agent_rating': 5,
 'created_time': 1571783665,
 'dialog_id': 21922,
 'first_aspect': 'Media',
 'focus_entity': 'Namibia',
 'inferred_steps': 1,
 'is_annotated': 0,
 'known_entities': ['South Africa', 'United Kingdom', 'Portugal'],
 'messages': {'dialog_acts': [['request_topic'], ['inform_response'], ['request_aspect'], ['inform_response'], ['request_followup'], ['inform_response'], ['request_aspect', 'feedback_positive'], ['inform_response'], ['request_followup'], ['inform_response'], [], []],
  'facts': [{'fid': [], 'source': [], 'used': []},
   {'fid': [77870, 77676, 77816, 77814, 77775, 77659, 77877, 77785, 77867], 'source': [0, 1, 2, 2, 0, 2, 0, 1, 1], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
   {'fid': [], 'source': [], 'used': []},
   {'fid': [77725, 77870, 77676, 77863, 77814, 77775, 77659, 77877, 77867], 'source': [2, 0, 1, 1, 2, 0, 2, 0, 1], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
   {'fid': [], 'source': [], 'used': []},
   {'fid': [77694, 77661, 77863, 77780, 77671, 77704, 77869, 77693, 77877], 'source': [1, 2, 1, 0, 2, 2, 0, 1, 0], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 1]},
   {'fid': [], 'source': [], 'used': []},
   {'fid': [77816, 77814, 77864, 77659, 77877, 77803, 77738, 77784, 77789], 'source': [2, 2, 0, 2, 0, 1, 1, 0, 1], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
   {'fid': [], 'source': [], 'used': []},
   {'fid': [77694, 77776, 77780, 77696, 77707, 77693, 77778, 77702, 77743], 'source': [1, 0, 0, 2, 1, 1, 0, 2, 2], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
   {'fid': [], 'source': [], 'used': []},
   {'fid': [77662, 77779, 77742, 77734, 77663, 77777, 77702, 77731, 77778], 'source': [1, 0, 2, 1, 2, 0, 2, 1, 0], 'used': [0, 0, 0, 0, 0, 0, 0, 0, 1]}],
  'liked': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
  'message': ['Hi. I want information about Namibia.',
   'Nmbia is a country in southern Africa.',
   'Do you have information about the media there?',
   'A mentional amount of foriegn',
   'What about it?',
   "Media and journalists in Namibia are represented by the Namibia chapter of the Media Institute of 'southern Africa and the Editors Forum of Namibia.",
   'Interesting! What can you tell me about the politics and government?',
   'Namibia formed the Namibian Defence Force, comprising former enemies in a 23-year bush war.',
   'Do you have more information about it?',
   "With a small army and a fragile economy , the Namibian government's principal foreign policy concern is developing strengthened ties within the Southern African region.",
   "That's all I wanted to know. Thank you!",
   'My pleasure!'],
  'message_id': ['617343895', '2842515356', '4240816985', '520711081', '1292358002', '3677078227', '1563061125', '1089028270', '1607063839', '113037558', '1197873991', '1399017322'],
  'sender': [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]},
 'related_entities': ['Western Roman Empire', 'United Kingdom', 'Portuguese language', 'Southern African Development Community', 'South Africa', 'Kalahari Desert', 'Namib Desert', 'League of Nations', 'Afrikaans', 'Sub-Saharan Africa', 'Portugal', 'South-West Africa', 'Warmbad, Namibia', 'German language', 'NBC'],
 'reported': 0,
 'second_aspect': 'Politics and government',
 'shuffle_facts': 1,
 'tag': 'round_2',
 'user_dialog_rating': 5,
 'user_id': 207,
 'user_other_agent_rating': 5}
```

### Data Fields

* `messages`: List of dialogs between the user and the assistant and their associated attributes
  * `dialog_acts`: List of actions performed in the dialogs
  * `facts`: List of facts returned by the assistant
    * `fid`: Fact ID
    * `source`: Source for the fact
    * `used`: Whether facts were used before in the same dialog
  * `liked`: List of values indicating whether each dialog was liked
  * `message`: List of dialogs (messages) between the user and the assistant
  * `message_id`: Message ID
  * `sender`: Message author ID (numeric)
* `known_entities`: Rooted facts about entities the user knows
* `focus_entity`: Entity in focus in the dialogs
* `dialog_id`: Dialog ID
* `inferred_steps`: Number of inferred steps
* `created_time`: Time of creation of the dialog
* `aspects`: List of two aspects which the dialog is about
* `first_aspect`: First aspect
* `second_aspect`: Second aspect
* `shuffle_facts`: Whether facts were shuffled
* `related_entities`: List of fifteen related entities to the focus entity
* `tag`: Conversation tag
* `user_id`: User ID
* `assistant_id`: Assistant ID
* `is_annotated`: 0 or 1 (More Information Needed)
* `user_dialog_rating`: 1 - 5 (More Information Needed)
* `user_other_agent_rating`: 1 - 5 (More Information Needed)
* `assistant_dialog_rating`: 1 - 5 (More Information Needed)
* `assistant_other_agent_rating`: 1 - 5 (More Information Needed)
* `reported`: Whether the dialog was reported inappropriate
* `annotated`: 0 or 1 (More Information Needed)

### Data Splits

The data is split into a training, validation, test and test_zero set as per the original dataset split.

|                       | train | validation | test | test_zero |
|-----------------------|------:|-----------:|-----:|----------:|
| Input dialog examples | 10287 |       1287 | 1287 |      1187 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/legalcode)

### Citation Information

```
@inproceedings{rodriguez2020curiosity,
  title = {Information Seeking in the Spirit of Learning: a Dataset for Conversational Curiosity},
  author = {Pedro Rodriguez and Paul Crook and Seungwhan Moon and Zhiguang Wang},
  year = 2020,
  booktitle = {Empirical Methods in Natural Language Processing}
}
```

### Contributions

Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset.
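As a usage illustration (not part of the original dataset release), the sketch below loads the dialogs with the Hugging Face `datasets` library under the `curiosity_dialogs` Hub id used by this card and prints one dialog turn by turn. The field names follow the Data Fields section above; the assumption that the `messages` sequence feature is returned as a dict of aligned lists reflects the usual `datasets` behavior.

```python
# Minimal sketch (assumes the Hugging Face `datasets` library and the
# `curiosity_dialogs` Hub id used by this card; not an official utility).
from datasets import load_dataset

dialogs = load_dataset("curiosity_dialogs", split="train")

dialog = dialogs[0]
messages = dialog["messages"]  # sequence feature -> dict of aligned lists

print(f"Focus entity: {dialog['focus_entity']} | aspects: {dialog['aspects']}")
for text, sender, acts in zip(messages["message"], messages["sender"], messages["dialog_acts"]):
    speaker = "user" if sender == 0 else "assistant"  # class label: 0 = user, 1 = assistant
    print(f"{speaker}: {text}  (dialog acts: {acts})")
```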
curiosity_dialogs
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "conversational-curiosity", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["dialogue-modeling"], "paperswithcode_id": "curiosity", "pretty_name": "Curiosity Dataset", "tags": ["conversational-curiosity"], "dataset_info": {"features": [{"name": "messages", "sequence": [{"name": "message", "dtype": "string"}, {"name": "liked", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}, {"name": "sender", "dtype": {"class_label": {"names": {"0": "user", "1": "assistant"}}}}, {"name": "facts", "sequence": [{"name": "fid", "dtype": "int32"}, {"name": "used", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}, {"name": "source", "dtype": {"class_label": {"names": {"0": "section", "1": "known", "2": "random"}}}}]}, {"name": "message_id", "dtype": "string"}, {"name": "dialog_acts", "sequence": "string"}]}, {"name": "known_entities", "sequence": "string"}, {"name": "focus_entity", "dtype": "string"}, {"name": "dialog_id", "dtype": "int32"}, {"name": "inferred_steps", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}, {"name": "created_time", "dtype": "int64"}, {"name": "aspects", "sequence": "string"}, {"name": "first_aspect", "dtype": "string"}, {"name": "second_aspect", "dtype": "string"}, {"name": "shuffle_facts", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}, {"name": "related_entities", "sequence": "string"}, {"name": "tag", "dtype": "string"}, {"name": "user_id", "dtype": "int32"}, {"name": "assistant_id", "dtype": "int32"}, {"name": "is_annotated", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}, {"name": "user_dialog_rating", "dtype": "int32"}, {"name": "user_other_agent_rating", "dtype": "int32"}, {"name": "assistant_dialog_rating", "dtype": "int32"}, {"name": "assistant_other_agent_rating", "dtype": "int32"}, {"name": "reported", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}, {"name": "annotated", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}], "config_name": "curiosity_dialogs", "splits": [{"name": "train", "num_bytes": 37198297, "num_examples": 10287}, {"name": "val", "num_bytes": 4914487, "num_examples": 1287}, {"name": "test", "num_bytes": 4915613, "num_examples": 1287}, {"name": "test_zero", "num_bytes": 4333191, "num_examples": 1187}], "download_size": 92169165, "dataset_size": 51361588}}
2024-01-18T09:51:48+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-4.0 #conversational-curiosity #region-us
Dataset Card for Curiosity Dataset ================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Curiosity Dataset Homepage * Repository: Curiosity Dataset Repository * Paper: ACL Anthology * Point of Contact: Pedro Rodriguez ### Dataset Summary Curiosity dataset consists of 14K English dialogs (181K utterances) where users and assistants converse about geographic topics like geopolitical entities and locations. This dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages. ### Supported Tasks and Leaderboards * 'text-generation-other-conversational-curiosity': The dataset can be used to train a model for Conversational Curiosity, which consists in the testing of the hypothesis that engagement increases when users are presented with facts related to what they know. Success on this task is typically measured by achieving a *high* Accuracy and F1 Score. ### Languages The text in the dataset is in English collected by crowd-souring. The associated BCP-47 code is 'en'. Dataset Structure ----------------- ### Data Instances A typical data point consists of dialogs between an user and an assistant, which is followed by the different attributes of the particular dialog. An example from the Curiosity Dataset train set looks as follows: ### Data Fields * 'messages': List of dialogs between the user and the assistant and their associated attributes + 'dialog\_acts': List of actions performed in the dialogs + 'facts': List of facts returned by the assistant - 'fid': Fact ID - 'source': Source for the fact - 'used': Whether facts were used before in the same dialog + 'liked': List of values indicating whether each dialog was liked + 'message': List of dialogs (messages) between the user and the assistant + 'message\_id': Message ID + 'sender': Message author ID (numeric) * 'known\_entities': Rooted facts about entities the user knows * 'focus\_entity' : Entity in focus in the dialogs * 'dialog\_id ': Dialog ID * 'inferred\_steps': Number of inferred steps * 'created\_time': Time of creation of the dialog * 'aspects': List of two aspects which the dialog is about * 'first\_aspect': First aspect * 'second\_aspect': Second aspect * 'shuffle\_facts': Whether facts were shuffled * 'related\_entities' : List of fifteen related entities to the focus entity * 'tag': Conversation tag * 'user\_id': User ID * 'assistant\_id': Assistant ID * 'is\_annotated': 0 or 1 () * 'user\_dialog\_rating': 1 - 5 () * 'user\_other\_agent\_rating': 1 - 5 () * 'assistant\_dialog\_rating': 1 - 5 () * 'assistant\_other\_agent\_rating': 1 - 5 () * 'reported': Whether the dialog was reported inappropriate * 'annotated': 0 or 1 () ### Data Splits The data is split into a training, validation, test and test\_zero set as per the original dataset split. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? 
### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Attribution-NonCommercial 4.0 International ### Contributions Thanks to @vineeths96 for adding this dataset.
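As a quick orientation to the field layout described above, the following is a minimal sketch of walking through one example. The sample dialog, the dict-of-lists layout of `messages`, and the convention that sender `0` is the user are illustrative assumptions, not guarantees made by the dataset itself.

```python
# Minimal sketch of summarising one Curiosity example shaped like the
# "Data Fields" description above. All sample values are invented for
# illustration; real examples come from the dataset itself.
example = {
    "focus_entity": "Iceland",
    "messages": {
        "message": ["Tell me about Iceland.", "Iceland is a Nordic island country."],
        "sender": [0, 1],                      # assumed: 0 = user, 1 = assistant
        "liked": [False, True],
        "dialog_acts": [["request_topic"], ["inform_response"]],
        "facts": [
            {"fid": [], "source": [], "used": []},
            {"fid": [101], "source": [7], "used": [True]},
        ],
    },
}

def summarize(ex):
    msgs = ex["messages"]
    liked = sum(bool(x) for x in msgs["liked"])
    facts_used = sum(sum(bool(u) for u in f["used"]) for f in msgs["facts"])
    print(f"focus entity: {ex['focus_entity']}")
    print(f"{len(msgs['message'])} messages, {liked} liked, {facts_used} facts marked as used")
    for sender, text in zip(msgs["sender"], msgs["message"]):
        role = "user" if sender == 0 else "assistant"
        print(f"  [{role}] {text}")

summarize(example)
```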
[ "### Dataset Summary\n\n\nCuriosity dataset consists of 14K English dialogs (181K utterances) where users and assistants converse about geographic topics like geopolitical entities and locations. This dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages.", "### Supported Tasks and Leaderboards\n\n\n* 'text-generation-other-conversational-curiosity': The dataset can be used to train a model for Conversational Curiosity, which consists in the testing of the hypothesis that engagement increases when users are presented with facts related to what they know. Success on this task is typically measured by achieving a *high* Accuracy and F1 Score.", "### Languages\n\n\nThe text in the dataset is in English collected by crowd-souring. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point consists of dialogs between an user and an assistant, which is followed by the different attributes of the particular dialog.\n\n\nAn example from the Curiosity Dataset train set looks as follows:", "### Data Fields\n\n\n* 'messages': List of dialogs between the user and the assistant and their associated attributes\n\t+ 'dialog\\_acts': List of actions performed in the dialogs\n\t+ 'facts': List of facts returned by the assistant\n\t\t- 'fid': Fact ID\n\t\t- 'source': Source for the fact\n\t\t- 'used': Whether facts were used before in the same dialog\n\t+ 'liked': List of values indicating whether each dialog was liked\n\t+ 'message': List of dialogs (messages) between the user and the assistant\n\t+ 'message\\_id': Message ID\n\t+ 'sender': Message author ID (numeric)\n* 'known\\_entities': Rooted facts about entities the user knows\n* 'focus\\_entity' : Entity in focus in the dialogs\n* 'dialog\\_id ': Dialog ID\n* 'inferred\\_steps': Number of inferred steps\n* 'created\\_time': Time of creation of the dialog\n* 'aspects': List of two aspects which the dialog is about\n* 'first\\_aspect': First aspect\n* 'second\\_aspect': Second aspect\n* 'shuffle\\_facts': Whether facts were shuffled\n* 'related\\_entities' : List of fifteen related entities to the focus entity\n* 'tag': Conversation tag\n* 'user\\_id': User ID\n* 'assistant\\_id': Assistant ID\n* 'is\\_annotated': 0 or 1 ()\n* 'user\\_dialog\\_rating': 1 - 5 ()\n* 'user\\_other\\_agent\\_rating': 1 - 5 ()\n* 'assistant\\_dialog\\_rating': 1 - 5 ()\n* 'assistant\\_other\\_agent\\_rating': 1 - 5 ()\n* 'reported': Whether the dialog was reported inappropriate\n* 'annotated': 0 or 1 ()", "### Data Splits\n\n\nThe data is split into a training, validation, test and test\\_zero set as per the original dataset split.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nAttribution-NonCommercial 4.0 International", "### Contributions\n\n\nThanks to @vineeths96 for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-4.0 #conversational-curiosity #region-us \n", "### Dataset Summary\n\n\nCuriosity dataset consists of 14K English dialogs (181K utterances) where users and assistants converse about geographic topics like geopolitical entities and locations. This dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages.", "### Supported Tasks and Leaderboards\n\n\n* 'text-generation-other-conversational-curiosity': The dataset can be used to train a model for Conversational Curiosity, which consists in the testing of the hypothesis that engagement increases when users are presented with facts related to what they know. Success on this task is typically measured by achieving a *high* Accuracy and F1 Score.", "### Languages\n\n\nThe text in the dataset is in English collected by crowd-souring. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point consists of dialogs between an user and an assistant, which is followed by the different attributes of the particular dialog.\n\n\nAn example from the Curiosity Dataset train set looks as follows:", "### Data Fields\n\n\n* 'messages': List of dialogs between the user and the assistant and their associated attributes\n\t+ 'dialog\\_acts': List of actions performed in the dialogs\n\t+ 'facts': List of facts returned by the assistant\n\t\t- 'fid': Fact ID\n\t\t- 'source': Source for the fact\n\t\t- 'used': Whether facts were used before in the same dialog\n\t+ 'liked': List of values indicating whether each dialog was liked\n\t+ 'message': List of dialogs (messages) between the user and the assistant\n\t+ 'message\\_id': Message ID\n\t+ 'sender': Message author ID (numeric)\n* 'known\\_entities': Rooted facts about entities the user knows\n* 'focus\\_entity' : Entity in focus in the dialogs\n* 'dialog\\_id ': Dialog ID\n* 'inferred\\_steps': Number of inferred steps\n* 'created\\_time': Time of creation of the dialog\n* 'aspects': List of two aspects which the dialog is about\n* 'first\\_aspect': First aspect\n* 'second\\_aspect': Second aspect\n* 'shuffle\\_facts': Whether facts were shuffled\n* 'related\\_entities' : List of fifteen related entities to the focus entity\n* 'tag': Conversation tag\n* 'user\\_id': User ID\n* 'assistant\\_id': Assistant ID\n* 'is\\_annotated': 0 or 1 ()\n* 'user\\_dialog\\_rating': 1 - 5 ()\n* 'user\\_other\\_agent\\_rating': 1 - 5 ()\n* 'assistant\\_dialog\\_rating': 1 - 5 ()\n* 'assistant\\_other\\_agent\\_rating': 1 - 5 ()\n* 'reported': Whether the dialog was reported inappropriate\n* 'annotated': 0 or 1 ()", "### Data Splits\n\n\nThe data is split into a training, validation, test and test\\_zero set as per the original dataset split.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known 
Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nAttribution-NonCommercial 4.0 International", "### Contributions\n\n\nThanks to @vineeths96 for adding this dataset." ]
[ 115, 80, 97, 39, 50, 454, 37, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 14, 18 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-4.0 #conversational-curiosity #region-us \n### Dataset Summary\n\n\nCuriosity dataset consists of 14K English dialogs (181K utterances) where users and assistants converse about geographic topics like geopolitical entities and locations. This dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages.### Supported Tasks and Leaderboards\n\n\n* 'text-generation-other-conversational-curiosity': The dataset can be used to train a model for Conversational Curiosity, which consists in the testing of the hypothesis that engagement increases when users are presented with facts related to what they know. Success on this task is typically measured by achieving a *high* Accuracy and F1 Score.### Languages\n\n\nThe text in the dataset is in English collected by crowd-souring. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA typical data point consists of dialogs between an user and an assistant, which is followed by the different attributes of the particular dialog.\n\n\nAn example from the Curiosity Dataset train set looks as follows:" ]
ffde34acefcd956529c39c4fc78d993b5b7f7520
# Dataset Card for "daily_dialog" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://yanran.li/dailydialog](http://yanran.li/dailydialog) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 4.48 MB - **Size of the generated dataset:** 8.63 MB - **Total amount of disk used:** 13.11 MB ### Dataset Summary We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 4.48 MB - **Size of the generated dataset:** 8.63 MB - **Total amount of disk used:** 13.11 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "act": [2, 1, 1, 1, 1, 2, 3, 2, 3, 4], "dialog": "[\"Good afternoon . This is Michelle Li speaking , calling on behalf of IBA . Is Mr Meng available at all ? \", \" This is Mr Meng ...", "emotion": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] } ``` ### Data Fields The data fields are the same among all splits. #### default - `dialog`: a `list` of `string` features. - `act`: a `list` of classification labels, with possible values including `__dummy__` (0), `inform` (1), `question` (2), `directive` (3) and `commissive` (4). - `emotion`: a `list` of classification labels, with possible values including `no emotion` (0), `anger` (1), `disgust` (2), `fear` (3), `happiness` (4), `sadness` (5) and `surprise` (6). 
### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|11118| 1000|1000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information DailyDialog dataset is licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). ### Citation Information ``` @InProceedings{li2017dailydialog, author = {Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi}, title = {DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset}, booktitle = {Proceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)}, year = {2017} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@julien-c](https://github.com/julien-c) for adding this dataset.
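The integer `act` and `emotion` labels map back to their names through the dataset features. A minimal loading sketch, assuming the Hub id `daily_dialog` shown for this record and a `datasets` version that still resolves this script-based dataset (recent releases may additionally require `trust_remote_code=True`):

```python
# Minimal sketch: load DailyDialog and decode the integer act/emotion labels.
from datasets import load_dataset

ds = load_dataset("daily_dialog")
print({split: ds[split].num_rows for split in ds})             # expect 11118 / 1000 / 1000

act_names = ds["train"].features["act"].feature.names          # '__dummy__', 'inform', ...
emotion_names = ds["train"].features["emotion"].feature.names  # 'no emotion', 'anger', ...

example = ds["validation"][0]
for utterance, act, emotion in zip(example["dialog"], example["act"], example["emotion"]):
    print(f"[{act_names[act]:>10} | {emotion_names[emotion]:>10}] {utterance}")
```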
daily_dialog
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "emotion-classification", "dialog-act-classification", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "paperswithcode_id": "dailydialog", "pretty_name": "DailyDialog", "tags": ["emotion-classification", "dialog-act-classification"], "dataset_info": {"features": [{"name": "dialog", "sequence": "string"}, {"name": "act", "sequence": {"class_label": {"names": {"0": "__dummy__", "1": "inform", "2": "question", "3": "directive", "4": "commissive"}}}}, {"name": "emotion", "sequence": {"class_label": {"names": {"0": "no emotion", "1": "anger", "2": "disgust", "3": "fear", "4": "happiness", "5": "sadness", "6": "surprise"}}}}], "splits": [{"name": "train", "num_bytes": 7296715, "num_examples": 11118}, {"name": "test", "num_bytes": 655844, "num_examples": 1000}, {"name": "validation", "num_bytes": 673943, "num_examples": 1000}], "download_size": 4475921, "dataset_size": 8626502}}
2024-01-18T11:02:28+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #emotion-classification #dialog-act-classification #region-us
Dataset Card for "daily\_dialog" ================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 4.48 MB * Size of the generated dataset: 8.63 MB * Total amount of disk used: 13.11 MB ### Dataset Summary We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 4.48 MB * Size of the generated dataset: 8.63 MB * Total amount of disk used: 13.11 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'dialog': a 'list' of 'string' features. * 'act': a 'list' of classification labels, with possible values including '**dummy**' (0), 'inform' (1), 'question' (2), 'directive' (3) and 'commissive' (4). * 'emotion': a 'list' of classification labels, with possible values including 'no emotion' (0), 'anger' (1), 'disgust' (2), 'fear' (3), 'happiness' (4), 'sadness' (5) and 'surprise' (6). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. Additional Information ---------------------- ### Dataset Curators ### Licensing Information DailyDialog dataset is licensed under CC BY-NC-SA 4.0. ### Contributions Thanks to @thomwolf, @julien-c for adding this dataset.
[ "### Dataset Summary\n\n\nWe develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects.\nThe language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way\nand cover various topics about our daily life. We also manually label the developed dataset with communication\nintention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it\nbenefit the research field of dialog systems.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 4.48 MB\n* Size of the generated dataset: 8.63 MB\n* Total amount of disk used: 13.11 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'dialog': a 'list' of 'string' features.\n* 'act': a 'list' of classification labels, with possible values including '**dummy**' (0), 'inform' (1), 'question' (2), 'directive' (3) and 'commissive' (4).\n* 'emotion': a 'list' of classification labels, with possible values including 'no emotion' (0), 'anger' (1), 'disgust' (2), 'fear' (3), 'happiness' (4), 'sadness' (5) and 'surprise' (6).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nDataset provided for research purposes only. Please check dataset license for additional information.\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nDailyDialog dataset is licensed under CC BY-NC-SA 4.0.", "### Contributions\n\n\nThanks to @thomwolf, @julien-c for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #emotion-classification #dialog-act-classification #region-us \n", "### Dataset Summary\n\n\nWe develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects.\nThe language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way\nand cover various topics about our daily life. We also manually label the developed dataset with communication\nintention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it\nbenefit the research field of dialog systems.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 4.48 MB\n* Size of the generated dataset: 8.63 MB\n* Total amount of disk used: 13.11 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'dialog': a 'list' of 'string' features.\n* 'act': a 'list' of classification labels, with possible values including '**dummy**' (0), 'inform' (1), 'question' (2), 'directive' (3) and 'commissive' (4).\n* 'emotion': a 'list' of classification labels, with possible values including 'no emotion' (0), 'anger' (1), 'disgust' (2), 'fear' (3), 'happiness' (4), 'sadness' (5) and 'surprise' (6).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nDataset provided for research purposes only. Please check dataset license for additional information.\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nDailyDialog dataset is licensed under CC BY-NC-SA 4.0.", "### Contributions\n\n\nThanks to @thomwolf, @julien-c for adding this dataset." ]
[ 109, 108, 10, 11, 6, 50, 17, 131, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 32, 6, 23, 24 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #emotion-classification #dialog-act-classification #region-us \n### Dataset Summary\n\n\nWe develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects.\nThe language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way\nand cover various topics about our daily life. We also manually label the developed dataset with communication\nintention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it\nbenefit the research field of dialog systems.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 4.48 MB\n* Size of the generated dataset: 8.63 MB\n* Total amount of disk used: 13.11 MB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'dialog': a 'list' of 'string' features.\n* 'act': a 'list' of classification labels, with possible values including '**dummy**' (0), 'inform' (1), 'question' (2), 'directive' (3) and 'commissive' (4).\n* 'emotion': a 'list' of classification labels, with possible values including 'no emotion' (0), 'anger' (1), 'disgust' (2), 'fear' (3), 'happiness' (4), 'sadness' (5) and 'surprise' (6).### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?" ]
f303de644792372c0a24869c165f500e6f5c45fb
# Dataset Card for DaNE ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [DaNE homepage](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dane) - **Repository:** [Github](https://github.com/alexandrainst/danlp) - **Paper:** [Aclweb](https://www.aclweb.org/anthology/2020.lrec-1.565) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Danish Dependency Treebank (DaNE) is a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme. The Danish UD treebank (Johannsen et al., 2015, UD-DDT) is a conversion of the Danish Dependency Treebank (Buch-Kromann et al. 2003) based on texts from Parole (Britt, 1998). UD-DDT has annotations for dependency parsing and part-of-speech (POS) tagging. The dataset was annotated with Named Entities for PER, ORG, and LOC by the Alexandra Institute in the DaNE dataset (Hvingelby et al. 2020). ### Supported Tasks and Leaderboards Part-of-speech tagging, dependency parsing and named entity recognition. 
### Languages Danish ## Dataset Structure ### Data Instances This is an example in the "train" split: ```python { 'sent_id': 'train-v2-0\n', 'lemmas': ['på', 'fredag', 'have', 'SiD', 'invitere', 'til', 'reception', 'i', 'SID-hus', 'i', 'anledning', 'af', 'at', 'formand', 'Kjeld', 'Christensen', 'gå', 'ind', 'i', 'den', 'glad', 'tresser', '.'], 'dep_labels': [35, 16, 28, 33, 19, 35, 16, 35, 18, 35, 18, 1, 1, 33, 22, 12, 32, 11, 35, 10, 30, 16, 34], 'ner_tags': [0, 0, 0, 3, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0], 'morph_tags': ['AdpType=Prep', 'Definite=Ind|Gender=Com|Number=Sing', 'Mood=Ind|Tense=Pres|VerbForm=Fin|Voice=Act', '_', 'Definite=Ind|Number=Sing|Tense=Past|VerbForm=Part', 'AdpType=Prep', 'Definite=Ind|Gender=Com|Number=Sing', 'AdpType=Prep', 'Definite=Def|Gender=Neut|Number=Sing', 'AdpType=Prep', 'Definite=Ind|Gender=Com|Number=Sing', 'AdpType=Prep', '_', 'Definite=Def|Gender=Com|Number=Sing', '_', '_', 'Mood=Ind|Tense=Pres|VerbForm=Fin|Voice=Act', '_', 'AdpType=Prep', 'Number=Plur|PronType=Dem', 'Degree=Pos|Number=Plur', 'Definite=Ind|Gender=Com|Number=Plur', '_'], 'dep_ids': [2, 5, 5, 5, 0, 7, 5, 9, 7, 11, 7, 17, 17, 17, 14, 15, 11, 17, 22, 22, 22, 18, 5], 'pos_tags': [11, 12, 5, 7, 3, 11, 12, 11, 12, 11, 12, 11, 16, 12, 7, 7, 3, 9, 11, 14, 6, 12, 10], 'text': 'På fredag har SID inviteret til reception i SID-huset i anledning af at formanden Kjeld Christensen går ind i de glade tressere.\n', 'tokens': ['På', 'fredag', 'har', 'SID', 'inviteret', 'til', 'reception', 'i', 'SID-huset', 'i', 'anledning', 'af', 'at', 'formanden', 'Kjeld', 'Christensen', 'går', 'ind', 'i', 'de', 'glade', 'tressere', '.'], 'tok_ids': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23] } ``` ### Data Fields - sent_id: a string identifier for each example - text: a string, the original sentence (not tokenized) - tok_ids: a list of ids (int), one for each token - tokens: a list of strings, the tokens - lemmas: a list of strings, the lemmas of the tokens - pos_tags: a list of strings, the part-of-speech tags of the tokens - morph_tags: a list of strings, the morphological tags of the tokens - dep_ids: a list of ids (int), the id of the head of the incoming dependency for each token - dep_labels: a list of strings, the dependency labels - ner_tags: a list of strings, the named entity tags (BIO format) ### Data Splits | | train | validation | test | |-------------|-------:|-----------:|-------:| | # sentences | 4383 | 564 | 565 | | # tokens | 80 378 | 10 322 | 10 023 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{hvingelby-etal-2020-dane, title = "{D}a{NE}: A Named Entity Resource for {D}anish", author = "Hvingelby, Rasmus and Pauli, Amalie Brogaard and Barrett, Maria and Rosted, Christina and Lidegaard, Lasse Malm and S{\o}gaard, Anders", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.565", pages = "4597--4604", abstract = "We present a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme: DaNE. It is the largest publicly available, Danish named entity gold annotation. We evaluate the quality of our annotations intrinsically by double annotating the entire treebank and extrinsically by comparing our annotations to a recently released named entity annotation of the validation and test sections of the Danish Universal Dependencies treebank. We benchmark the new resource by training and evaluating competitive architectures for supervised named entity recognition (NER), including FLAIR, monolingual (Danish) BERT and multilingual BERT. We explore cross-lingual transfer in multilingual BERT from five related languages in zero-shot and direct transfer setups, and we show that even with our modestly-sized training set, we improve Danish NER over a recent cross-lingual approach, as well as over zero-shot transfer from five related languages. Using multilingual BERT, we achieve higher performance by fine-tuning on both DaNE and a larger Bokm{\aa}l (Norwegian) training set compared to only using DaNE. However, the highest performance is achieved by using a Danish BERT fine-tuned on DaNE. Our dataset enables improvements and applicability for Danish NER beyond cross-lingual methods. We employ a thorough error analysis of the predictions of the best models for seen and unseen entities, as well as their robustness on un-capitalized text. The annotated dataset and all the trained models are made publicly available.", language = "English", ISBN = "979-10-95546-34-4", } ``` ### Contributions Thanks to [@ophelielacroix](https://github.com/ophelielacroix), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
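Because `pos_tags`, `dep_labels`, and `ner_tags` are stored as class-label ids (as in the train example above), they need to be decoded through the dataset features. A minimal loading sketch, assuming the Hub id `dane` shown for this record (script-based datasets may need `trust_remote_code=True` on recent `datasets` releases):

```python
# Minimal sketch: load DaNE and pair each token with its decoded POS and NER tag.
from datasets import load_dataset

dane = load_dataset("dane")
print({split: dane[split].num_rows for split in dane})   # expect 4383 / 564 / 565

pos_names = dane["train"].features["pos_tags"].feature.names
ner_names = dane["train"].features["ner_tags"].feature.names

example = dane["train"][0]
for token, pos, ner in zip(example["tokens"], example["pos_tags"], example["ner_tags"]):
    print(f"{token:<15} {pos_names[pos]:<6} {ner_names[ner]}")
```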
dane
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-Danish-Universal-Dependencies-treebank", "language:da", "license:cc-by-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["da"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|other-Danish-Universal-Dependencies-treebank"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition", "part-of-speech"], "paperswithcode_id": "dane", "pretty_name": "DaNE", "dataset_info": {"features": [{"name": "sent_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tok_ids", "sequence": "int64"}, {"name": "tokens", "sequence": "string"}, {"name": "lemmas", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "NUM", "1": "CCONJ", "2": "PRON", "3": "VERB", "4": "INTJ", "5": "AUX", "6": "ADJ", "7": "PROPN", "8": "PART", "9": "ADV", "10": "PUNCT", "11": "ADP", "12": "NOUN", "13": "X", "14": "DET", "15": "SYM", "16": "SCONJ"}}}}, {"name": "morph_tags", "sequence": "string"}, {"name": "dep_ids", "sequence": "int64"}, {"name": "dep_labels", "sequence": {"class_label": {"names": {"0": "parataxis", "1": "mark", "2": "nummod", "3": "discourse", "4": "compound:prt", "5": "reparandum", "6": "vocative", "7": "list", "8": "obj", "9": "dep", "10": "det", "11": "obl:loc", "12": "flat", "13": "iobj", "14": "cop", "15": "expl", "16": "obl", "17": "conj", "18": "nmod", "19": "root", "20": "acl:relcl", "21": "goeswith", "22": "appos", "23": "fixed", "24": "obl:tmod", "25": "xcomp", "26": "advmod", "27": "nmod:poss", "28": "aux", "29": "ccomp", "30": "amod", "31": "cc", "32": "advcl", "33": "nsubj", "34": "punct", "35": "case"}}}}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PER", "2": "I-PER", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "splits": [{"name": "train", "num_bytes": 7311212, "num_examples": 4383}, {"name": "test", "num_bytes": 909699, "num_examples": 565}, {"name": "validation", "num_bytes": 940413, "num_examples": 564}], "download_size": 1209710, "dataset_size": 9161324}}
2024-01-18T11:02:29+00:00
[]
[ "da" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-Danish-Universal-Dependencies-treebank #language-Danish #license-cc-by-sa-4.0 #region-us
Dataset Card for DaNE ===================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: DaNE homepage * Repository: Github * Paper: Aclweb * Leaderboard: * Point of Contact: ### Dataset Summary The Danish Dependency Treebank (DaNE) is a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme. The Danish UD treebank (Johannsen et al., 2015, UD-DDT) is a conversion of the Danish Dependency Treebank (Buch-Kromann et al. 2003) based on texts from Parole (Britt, 1998). UD-DDT has annotations for dependency parsing and part-of-speech (POS) tagging. The dataset was annotated with Named Entities for PER, ORG, and LOC by the Alexandra Institute in the DaNE dataset (Hvingelby et al. 2020). ### Supported Tasks and Leaderboards Parts-of-speech tagging, dependency parsing and named entitity recognition. ### Languages Danish Dataset Structure ----------------- ### Data Instances This is an example in the "train" split: ### Data Fields Data Fields: * q\_id: a string question identifier for each example, corresponding to its ID in the URL Reddit submission dumps. * subreddit: One of explainlikeimfive, askscience, or AskHistorians, indicating which subreddit the question came from * title: title of the question, with URLs extracted and replaced by URL\_n tokens * title\_urls: list of the extracted URLs, the nth element of the list was replaced by URL\_n * sent\_id: a string identifier for each example * text: a string, the original sentence (not tokenized) * tok\_ids: a list of ids (int), one for each token * tokens: a list of strings, the tokens * lemmas: a list of strings, the lemmas of the tokens * pos\_tags: a list of strings, the part-of-speech tags of the tokens * morph\_tags: a list of strings, the morphological tags of the tokens * dep\_ids: a list of ids (int), the id of the head of the incoming dependency for each token * dep\_labels: a list of strings, the dependency labels * ner\_tags: a list of strings, the named entity tags (BIO format) ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @ophelielacroix, @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nThe Danish Dependency Treebank (DaNE) is a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme.\n\n\nThe Danish UD treebank (Johannsen et al., 2015, UD-DDT) is a conversion of the Danish Dependency Treebank (Buch-Kromann et al. 2003) based on texts from Parole (Britt, 1998). UD-DDT has annotations for dependency parsing and part-of-speech (POS) tagging. The dataset was annotated with Named Entities for PER, ORG, and LOC by the Alexandra Institute in the DaNE dataset (Hvingelby et al. 2020).", "### Supported Tasks and Leaderboards\n\n\nParts-of-speech tagging, dependency parsing and named entitity recognition.", "### Languages\n\n\nDanish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThis is an example in the \"train\" split:", "### Data Fields\n\n\nData Fields:\n\n\n* q\\_id: a string question identifier for each example, corresponding to its ID in the URL Reddit submission dumps.\n* subreddit: One of explainlikeimfive, askscience, or AskHistorians, indicating which subreddit the question came from\n* title: title of the question, with URLs extracted and replaced by URL\\_n tokens\n* title\\_urls: list of the extracted URLs, the nth element of the list was replaced by URL\\_n\n* sent\\_id: a string identifier for each example\n* text: a string, the original sentence (not tokenized)\n* tok\\_ids: a list of ids (int), one for each token\n* tokens: a list of strings, the tokens\n* lemmas: a list of strings, the lemmas of the tokens\n* pos\\_tags: a list of strings, the part-of-speech tags of the tokens\n* morph\\_tags: a list of strings, the morphological tags of the tokens\n* dep\\_ids: a list of ids (int), the id of the head of the incoming dependency for each token\n* dep\\_labels: a list of strings, the dependency labels\n* ner\\_tags: a list of strings, the named entity tags (BIO format)", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ophelielacroix, @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-Danish-Universal-Dependencies-treebank #language-Danish #license-cc-by-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nThe Danish Dependency Treebank (DaNE) is a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme.\n\n\nThe Danish UD treebank (Johannsen et al., 2015, UD-DDT) is a conversion of the Danish Dependency Treebank (Buch-Kromann et al. 2003) based on texts from Parole (Britt, 1998). UD-DDT has annotations for dependency parsing and part-of-speech (POS) tagging. The dataset was annotated with Named Entities for PER, ORG, and LOC by the Alexandra Institute in the DaNE dataset (Hvingelby et al. 2020).", "### Supported Tasks and Leaderboards\n\n\nParts-of-speech tagging, dependency parsing and named entitity recognition.", "### Languages\n\n\nDanish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThis is an example in the \"train\" split:", "### Data Fields\n\n\nData Fields:\n\n\n* q\\_id: a string question identifier for each example, corresponding to its ID in the URL Reddit submission dumps.\n* subreddit: One of explainlikeimfive, askscience, or AskHistorians, indicating which subreddit the question came from\n* title: title of the question, with URLs extracted and replaced by URL\\_n tokens\n* title\\_urls: list of the extracted URLs, the nth element of the list was replaced by URL\\_n\n* sent\\_id: a string identifier for each example\n* text: a string, the original sentence (not tokenized)\n* tok\\_ids: a list of ids (int), one for each token\n* tokens: a list of strings, the tokens\n* lemmas: a list of strings, the lemmas of the tokens\n* pos\\_tags: a list of strings, the part-of-speech tags of the tokens\n* morph\\_tags: a list of strings, the morphological tags of the tokens\n* dep\\_ids: a list of ids (int), the id of the head of the incoming dependency for each token\n* dep\\_labels: a list of strings, the dependency labels\n* ner\\_tags: a list of strings, the named entity tags (BIO format)", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ophelielacroix, @lhoestq for adding this dataset." ]
[ 127, 167, 32, 12, 18, 321, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 24 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-Danish-Universal-Dependencies-treebank #language-Danish #license-cc-by-sa-4.0 #region-us \n### Dataset Summary\n\n\nThe Danish Dependency Treebank (DaNE) is a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme.\n\n\nThe Danish UD treebank (Johannsen et al., 2015, UD-DDT) is a conversion of the Danish Dependency Treebank (Buch-Kromann et al. 2003) based on texts from Parole (Britt, 1998). UD-DDT has annotations for dependency parsing and part-of-speech (POS) tagging. The dataset was annotated with Named Entities for PER, ORG, and LOC by the Alexandra Institute in the DaNE dataset (Hvingelby et al. 2020).### Supported Tasks and Leaderboards\n\n\nParts-of-speech tagging, dependency parsing and named entitity recognition.### Languages\n\n\nDanish\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThis is an example in the \"train\" split:" ]
2c133fcebcba496f9aeb706c52609cf8d4f99372
# Dataset Card for DanishPoliticalComments ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/steffan267/Sentiment-Analysis-on-Danish-Social-Media - **Repository:** https://github.com/steffan267/Sentiment-Analysis-on-Danish-Social-Media - **Paper:** https://github.com/lucaspuvis/SAM/blob/master/Thesis.pdf - **Point of Contact:** [More Information Needed] ### Dataset Summary The dataset consists of 9008 sentences that are labeled with fine-grained polarity in the range from -2 to 2 (negative to positive). The quality of the fine-grained polarity labels is not cross-validated and is therefore subject to uncertainty; however, the simple polarity has been cross-validated and is therefore considered more reliable. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
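A minimal sketch of loading the data and inspecting the fine-grained polarity distribution, assuming the Hub id `danish_political_comments` used by this record and the `sentence`/`target` fields listed in its dataset info (script-based datasets may need `trust_remote_code=True` on recent `datasets` releases):

```python
# Minimal sketch: load the single train split and count the five polarity classes.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("danish_political_comments", split="train")
label_names = ds.features["target"].names        # '2', '1', '0', '-1', '-2'

counts = Counter(label_names[t] for t in ds["target"])
print(ds.num_rows, "sentences")                  # expect 9008
for polarity in sorted(counts, key=int, reverse=True):
    print(f"polarity {polarity:>2}: {counts[polarity]}")
```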
danish_political_comments
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:da", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["da"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "DanishPoliticalComments", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "target", "dtype": {"class_label": {"names": {"0": "2", "1": "1", "2": "0", "3": "-1", "4": "-2"}}}}], "splits": [{"name": "train", "num_bytes": 829569, "num_examples": 9008}], "download_size": 690873, "dataset_size": 829569}}
2024-01-18T11:02:31+00:00
[]
[ "da" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Danish #license-unknown #region-us
# Dataset Card for DanishPoliticalComments ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Point of Contact: ### Dataset Summary The dataset consists of 9008 sentences that are labeled with fine-grained polarity in the range from -2 to 2 (negative to positive). The quality of the fine-grained is not cross-validated and is therefore subject to uncertainties; however, the simple polarity has been cross-validated and therefore is considered to be more correct. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @abhishekkrthakur for adding this dataset.
[ "# Dataset Card for DanishPoliticalComments", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact:", "### Dataset Summary\n\nThe dataset consists of 9008 sentences that are labeled with fine-grained polarity in the range from -2 to 2 (negative to positive). The quality of the fine-grained is not cross-validated and is therefore subject to uncertainties; however, the simple polarity has been cross-validated and therefore is considered to be more correct.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Danish #license-unknown #region-us \n", "# Dataset Card for DanishPoliticalComments", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact:", "### Dataset Summary\n\nThe dataset consists of 9008 sentences that are labeled with fine-grained polarity in the range from -2 to 2 (negative to positive). The quality of the fine-grained is not cross-validated and is therefore subject to uncertainties; however, the simple polarity has been cross-validated and therefore is considered to be more correct.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ 90, 9, 120, 23, 89, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 20 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Danish #license-unknown #region-us \n# Dataset Card for DanishPoliticalComments## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact:### Dataset Summary\n\nThe dataset consists of 9008 sentences that are labeled with fine-grained polarity in the range from -2 to 2 (negative to positive). The quality of the fine-grained is not cross-validated and is therefore subject to uncertainties; however, the simple polarity has been cross-validated and therefore is considered to be more correct.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
c8cd33ecbcc26b64feefb0457988dfcae27b8df4
# Dataset Card for DART

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [homepage](https://github.com/Yale-LILY/dart)
- **Repository:** [github](https://github.com/Yale-LILY/dart)
- **Paper:** [paper](https://arxiv.org/abs/2007.02871)
- **Leaderboard:** [leaderboard](https://github.com/Yale-LILY/dart#leaderboard)

### Dataset Summary

DART is a large dataset for open-domain structured data record to text generation. We consider the structured data record input as a set of RDF entity-relation triples, a format widely used for knowledge representation and semantics description. DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. This hierarchical, structured format with its open-domain nature differentiates DART from other existing table-to-text corpora.

### Supported Tasks and Leaderboards

The task associated with DART is text generation from data records that are RDF triplets:

- `rdf-to-text`: The dataset can be used to train a model for text generation from RDF triplets, which consists of generating a textual description of structured data. Success on this task is typically measured by achieving a *high* [BLEU](https://huggingface.co/metrics/bleu), [METEOR](https://huggingface.co/metrics/meteor), [BLEURT](https://huggingface.co/metrics/bleurt), [TER](https://huggingface.co/metrics/ter), [MoverScore](https://huggingface.co/metrics/mover_score), and [BERTScore](https://huggingface.co/metrics/bert_score). The [BART-large model](https://huggingface.co/facebook/bart-large) from [BART](https://huggingface.co/transformers/model_doc/bart.html) currently achieves the following scores:

| | BLEU | METEOR | TER | MoverScore | BERTScore | BLEURT |
| ----- | ----- | ------ | ---- | ----------- | ---------- | ------ |
| BART | 37.06 | 0.36 | 0.57 | 0.44 | 0.92 | 0.22 |

This task has an active leaderboard which can be found [here](https://github.com/Yale-LILY/dart#leaderboard) and ranks models based on the above metrics.

### Languages

The dataset is in English (en).

## Dataset Structure

### Data Instances

Here is an example from the dataset:

```
{'annotations': {'source': ['WikiTableQuestions_mturk'],
  'text': ['First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville']},
 'subtree_was_extended': False,
 'tripleset': [['First Clearing', 'LOCATION', 'On NYS 52 1 Mi. Youngsville'],
  ['On NYS 52 1 Mi. Youngsville', 'CITY_OR_TOWN', 'Callicoon, New York']]}
```

It contains one annotation where the textual description is 'First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville'. The RDF triplets considered to generate this description are in `tripleset` and are formatted as subject, predicate, object.

### Data Fields

The different fields are:

- `annotations`:
  - `text`: list of text descriptions of the triplets
  - `source`: list of sources of the RDF triplets (WikiTable, e2e, etc.)
- `subtree_was_extended`: boolean, whether the subtree considered during the dataset construction was extended. Sometimes this field is missing, and is therefore set to `None`
- `tripleset`: RDF triplets as a list of triplets of strings (subject, predicate, object)

### Data Splits

There are three splits, train, validation and test:

| | train | validation | test |
| ----- |------:|-----------:|-----:|
| N. Examples | 30526 | 2768 | 6959 |

## Dataset Creation

### Curation Rationale

Automatically generating textual descriptions from structured data inputs is crucial to improving the accessibility of knowledge bases to lay users.

### Source Data

DART comes from existing datasets that cover a variety of different domains while allowing the construction of a tree ontology and the formation of RDF triple sets as semantic representations. The datasets used are WikiTableQuestions, WikiSQL, WebNLG and Cleaned E2E.

#### Initial Data Collection and Normalization

DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019).

#### Who are the source language producers?

[More Information Needed]

### Annotations

DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019).

#### Annotation process

The two-stage annotation process for constructing tripleset sentence pairs is based on a tree-structured ontology of each table. First, internal skilled annotators denote the parent column for each column header. Then, a larger number of annotators provide a sentential description of an automatically-chosen subset of table cells in a row.

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Under MIT license (see [here](https://github.com/Yale-LILY/dart/blob/master/LICENSE))

### Citation Information

```
@article{radev2020dart,
  title={DART: Open-Domain Structured Data Record to Text Generation},
  author={Dragomir Radev and Rui Zhang and Amrit Rau and Abhinand Sivaprasad and Chiachun Hsieh and Nazneen Fatema Rajani and Xiangru Tang and Aadit Vyas and Neha Verma and Pranav Krishna and Yangxiaokang Liu and Nadia Irwanto and Jessica Pan and Faiaz Rahman and Ahmad Zaidi and Murori Mutuma and Yasin Tarabar and Ankit Gupta and Tao Yu and Yi Chern Tan and Xi Victoria Lin and Caiming Xiong and Richard Socher},
  journal={arXiv preprint arXiv:2007.02871},
  year={2020}
}
```

### Contributions

Thanks to [@lhoestq](https://github.com/lhoestq) for adding this dataset.
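As a usage illustration of the field layout described above, the following is a minimal sketch (not part of the original card) that loads the corpus with the `datasets` library and flattens each example into (linearized triple set, reference texts) pairs for the `rdf-to-text` task. It assumes the dataset still resolves on the Hugging Face Hub under the `dart` identifier and exposes exactly the columns listed under Data Fields.

```
from datasets import load_dataset

# Sketch only: assumes the corpus resolves as "dart" on the Hugging Face Hub
# and exposes the columns described in the Data Fields section above.
dart = load_dataset("dart", split="validation")

def linearize(example):
    # Join each (subject, predicate, object) triple into a flat source string.
    source = " | ".join(
        f"{subj} : {pred} : {obj}" for subj, pred, obj in example["tripleset"]
    )
    # `annotations` is a dict of parallel lists; `text` holds the reference sentences.
    return {"source": source, "references": example["annotations"]["text"]}

pairs = dart.map(linearize)
print(pairs[0]["source"])
print(pairs[0]["references"][0])
```

The linearization format (separator tokens between triples and between subject, predicate and object) is a free design choice here; any consistent scheme works as model input.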
dart
[ "task_categories:tabular-to-text", "task_ids:rdf-to-text", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|wikitable_questions", "source_datasets:extended|wikisql", "source_datasets:extended|web_nlg", "source_datasets:extended|cleaned_e2e", "language:en", "license:mit", "arxiv:2007.02871", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "machine-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|wikitable_questions", "extended|wikisql", "extended|web_nlg", "extended|cleaned_e2e"], "task_categories": ["tabular-to-text"], "task_ids": ["rdf-to-text"], "paperswithcode_id": "dart", "pretty_name": "DART", "dataset_info": {"features": [{"name": "tripleset", "sequence": {"sequence": "string"}}, {"name": "subtree_was_extended", "dtype": "bool"}, {"name": "annotations", "sequence": [{"name": "source", "dtype": "string"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 12966443, "num_examples": 30526}, {"name": "validation", "num_bytes": 1458106, "num_examples": 2768}, {"name": "test", "num_bytes": 2657644, "num_examples": 5097}], "download_size": 29939366, "dataset_size": 17082193}}
2022-11-18T19:57:00+00:00
[ "2007.02871" ]
[ "en" ]
TAGS #task_categories-tabular-to-text #task_ids-rdf-to-text #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|wikitable_questions #source_datasets-extended|wikisql #source_datasets-extended|web_nlg #source_datasets-extended|cleaned_e2e #language-English #license-mit #arxiv-2007.02871 #region-us
Dataset Card for DART ===================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: homepahe * Repository: github * Paper: paper * Leaderboard: leaderboard ### Dataset Summary DART is a large dataset for open-domain structured data record to text generation. We consider the structured data record input as a set of RDF entity-relation triples, a format widely used for knowledge representation and semantics description. DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. This hierarchical, structured format with its open-domain nature differentiates DART from other existing table-to-text corpora. ### Supported Tasks and Leaderboards The task associated to DART is text generation from data records that are RDF triplets: * 'rdf-to-text': The dataset can be used to train a model for text generation from RDF triplets, which consists in generating textual description of structured data. Success on this task is typically measured by achieving a *high* BLEU, METEOR, BLEURT, TER, MoverScore, and BERTScore. The (BART-large model from BART) model currently achieves the following scores: This task has an active leaderboard which can be found here and ranks models based on the above metrics while also reporting. ### Languages The dataset is in english (en). Dataset Structure ----------------- ### Data Instances Here is an example from the dataset: It contains one annotation where the textual description is 'First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville'. The RDF triplets considered to generate this description are in tripleset and are formatted as subject, predicate, object. ### Data Fields The different fields are: * 'annotations': + 'text': list of text descriptions of the triplets + 'source': list of sources of the RDF triplets (WikiTable, e2e, etc.) * 'subtree\_was\_extended': boolean, if the subtree condidered during the dataset construction was extended. Sometimes this field is missing, and therefore set to 'None' * 'tripleset': RDF triplets as a list of triplets of strings (subject, predicate, object) ### Data Splits There are three splits, train, validation and test: Dataset Creation ---------------- ### Curation Rationale Automatically generating textual descriptions from structured data inputs is crucial to improving the accessibility of knowledge bases to lay users. ### Source Data DART comes from existing datasets that cover a variety of different domains while allowing to build a tree ontology and form RDF triple sets as semantic representations. The datasets used are WikiTableQuestions, WikiSQL, WebNLG and Cleaned E2E. 
#### Initial Data Collection and Normalization DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019) #### Who are the source language producers? ### Annotations DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019) #### Annotation process The two stage annotation process for constructing tripleset sentence pairs is based on a tree-structured ontology of each table. First, internal skilled annotators denote the parent column for each column header. Then, a larger number of annotators provide a sentential description of an automatically-chosen subset of table cells in a row. #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Under MIT license (see here) ### Contributions Thanks to @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nDART is a large dataset for open-domain structured data record to text generation. We consider the structured data record input as a set of RDF entity-relation triples, a format widely used for knowledge representation and semantics description. DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. This hierarchical, structured format with its open-domain nature differentiates DART from other existing table-to-text corpora.", "### Supported Tasks and Leaderboards\n\n\nThe task associated to DART is text generation from data records that are RDF triplets:\n\n\n* 'rdf-to-text': The dataset can be used to train a model for text generation from RDF triplets, which consists in generating textual description of structured data. Success on this task is typically measured by achieving a *high* BLEU, METEOR, BLEURT, TER, MoverScore, and BERTScore. The (BART-large model from BART) model currently achieves the following scores:\n\n\n\nThis task has an active leaderboard which can be found here and ranks models based on the above metrics while also reporting.", "### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nHere is an example from the dataset:\n\n\nIt contains one annotation where the textual description is 'First Clearing\\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville'. The RDF triplets considered to generate this description are in tripleset and are formatted as subject, predicate, object.", "### Data Fields\n\n\nThe different fields are:\n\n\n* 'annotations':\n\t+ 'text': list of text descriptions of the triplets\n\t+ 'source': list of sources of the RDF triplets (WikiTable, e2e, etc.)\n* 'subtree\\_was\\_extended': boolean, if the subtree condidered during the dataset construction was extended. Sometimes this field is missing, and therefore set to 'None'\n* 'tripleset': RDF triplets as a list of triplets of strings (subject, predicate, object)", "### Data Splits\n\n\nThere are three splits, train, validation and test:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nAutomatically generating textual descriptions from structured data inputs is crucial to improving the accessibility of knowledge bases to lay users.", "### Source Data\n\n\nDART comes from existing datasets that cover a variety of different domains while allowing to build a tree ontology and form RDF triple sets as semantic representations. 
The datasets used are WikiTableQuestions, WikiSQL, WebNLG and Cleaned E2E.", "#### Initial Data Collection and Normalization\n\n\nDART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables\nfrom WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019)", "#### Who are the source language producers?", "### Annotations\n\n\nDART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables\nfrom WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019)", "#### Annotation process\n\n\nThe two stage annotation process for constructing tripleset sentence pairs is based on a tree-structured ontology of each table.\nFirst, internal skilled annotators denote the parent column for each column header.\nThen, a larger number of annotators provide a sentential description of an automatically-chosen subset of table cells in a row.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nUnder MIT license (see here)", "### Contributions\n\n\nThanks to @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-tabular-to-text #task_ids-rdf-to-text #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|wikitable_questions #source_datasets-extended|wikisql #source_datasets-extended|web_nlg #source_datasets-extended|cleaned_e2e #language-English #license-mit #arxiv-2007.02871 #region-us \n", "### Dataset Summary\n\n\nDART is a large dataset for open-domain structured data record to text generation. We consider the structured data record input as a set of RDF entity-relation triples, a format widely used for knowledge representation and semantics description. DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. This hierarchical, structured format with its open-domain nature differentiates DART from other existing table-to-text corpora.", "### Supported Tasks and Leaderboards\n\n\nThe task associated to DART is text generation from data records that are RDF triplets:\n\n\n* 'rdf-to-text': The dataset can be used to train a model for text generation from RDF triplets, which consists in generating textual description of structured data. Success on this task is typically measured by achieving a *high* BLEU, METEOR, BLEURT, TER, MoverScore, and BERTScore. The (BART-large model from BART) model currently achieves the following scores:\n\n\n\nThis task has an active leaderboard which can be found here and ranks models based on the above metrics while also reporting.", "### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nHere is an example from the dataset:\n\n\nIt contains one annotation where the textual description is 'First Clearing\\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville'. The RDF triplets considered to generate this description are in tripleset and are formatted as subject, predicate, object.", "### Data Fields\n\n\nThe different fields are:\n\n\n* 'annotations':\n\t+ 'text': list of text descriptions of the triplets\n\t+ 'source': list of sources of the RDF triplets (WikiTable, e2e, etc.)\n* 'subtree\\_was\\_extended': boolean, if the subtree condidered during the dataset construction was extended. Sometimes this field is missing, and therefore set to 'None'\n* 'tripleset': RDF triplets as a list of triplets of strings (subject, predicate, object)", "### Data Splits\n\n\nThere are three splits, train, validation and test:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nAutomatically generating textual descriptions from structured data inputs is crucial to improving the accessibility of knowledge bases to lay users.", "### Source Data\n\n\nDART comes from existing datasets that cover a variety of different domains while allowing to build a tree ontology and form RDF triple sets as semantic representations. 
The datasets used are WikiTableQuestions, WikiSQL, WebNLG and Cleaned E2E.", "#### Initial Data Collection and Normalization\n\n\nDART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables\nfrom WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019)", "#### Who are the source language producers?", "### Annotations\n\n\nDART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables\nfrom WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019)", "#### Annotation process\n\n\nThe two stage annotation process for constructing tripleset sentence pairs is based on a tree-structured ontology of each table.\nFirst, internal skilled annotators denote the parent column for each column header.\nThen, a larger number of annotators provide a sentential description of an automatically-chosen subset of table cells in a row.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nUnder MIT license (see here)", "### Contributions\n\n\nThanks to @lhoestq for adding this dataset." ]
[ 180, 152, 159, 20, 83, 136, 24, 37, 67, 135, 10, 130, 87, 9, 18, 7, 8, 14, 6, 13, 17 ]
[ "passage: TAGS\n#task_categories-tabular-to-text #task_ids-rdf-to-text #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|wikitable_questions #source_datasets-extended|wikisql #source_datasets-extended|web_nlg #source_datasets-extended|cleaned_e2e #language-English #license-mit #arxiv-2007.02871 #region-us \n### Dataset Summary\n\n\nDART is a large dataset for open-domain structured data record to text generation. We consider the structured data record input as a set of RDF entity-relation triples, a format widely used for knowledge representation and semantics description. DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. This hierarchical, structured format with its open-domain nature differentiates DART from other existing table-to-text corpora.### Supported Tasks and Leaderboards\n\n\nThe task associated to DART is text generation from data records that are RDF triplets:\n\n\n* 'rdf-to-text': The dataset can be used to train a model for text generation from RDF triplets, which consists in generating textual description of structured data. Success on this task is typically measured by achieving a *high* BLEU, METEOR, BLEURT, TER, MoverScore, and BERTScore. The (BART-large model from BART) model currently achieves the following scores:\n\n\n\nThis task has an active leaderboard which can be found here and ranks models based on the above metrics while also reporting.", "passage: ### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nHere is an example from the dataset:\n\n\nIt contains one annotation where the textual description is 'First Clearing\\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville'. The RDF triplets considered to generate this description are in tripleset and are formatted as subject, predicate, object.### Data Fields\n\n\nThe different fields are:\n\n\n* 'annotations':\n\t+ 'text': list of text descriptions of the triplets\n\t+ 'source': list of sources of the RDF triplets (WikiTable, e2e, etc.)\n* 'subtree\\_was\\_extended': boolean, if the subtree condidered during the dataset construction was extended. Sometimes this field is missing, and therefore set to 'None'\n* 'tripleset': RDF triplets as a list of triplets of strings (subject, predicate, object)### Data Splits\n\n\nThere are three splits, train, validation and test:\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nAutomatically generating textual descriptions from structured data inputs is crucial to improving the accessibility of knowledge bases to lay users.### Source Data\n\n\nDART comes from existing datasets that cover a variety of different domains while allowing to build a tree ontology and form RDF triple sets as semantic representations. 
The datasets used are WikiTableQuestions, WikiSQL, WebNLG and Cleaned E2E.#### Initial Data Collection and Normalization\n\n\nDART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables\nfrom WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019)#### Who are the source language producers?" ]
7ddcf5f4e680058ab46e05ff0acb6f9f01b849ff
# Dataset Card for DataCommons Fact Checked claims

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Data Commons fact checking FAQ](https://datacommons.org/factcheck/faq)

### Dataset Summary

A dataset of fact checked claims by news media maintained by [datacommons.org](https://datacommons.org/) containing the claim, author, and judgments, as well as the URL of the full explanation by the original fact-checker.

The fact checking is done by [FactCheck.org](https://www.factcheck.org/), [PolitiFact](https://www.politifact.com/), and [The Washington Post](https://www.washingtonpost.com/).

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The data is in English (`en`).

## Dataset Structure

### Data Instances

An example of a fact-checking instance looks as follows:

```
{'claim_author_name': 'Facebook posts',
 'claim_date': '2019-01-01',
 'claim_text': 'Quotes Michelle Obama as saying, "White folks are what’s wrong with America."',
 'review_date': '2019-01-03',
 'review_rating': 'Pants on Fire',
 'review_url': 'https://www.politifact.com/facebook-fact-checks/statements/2019/jan/03/facebook-posts/did-michelle-obama-once-say-white-folks-are-whats-/',
 'reviewer_name': 'PolitiFact'}
```

### Data Fields

A data instance has the following fields:

- `review_date`: the day the fact checking report was posted. Missing values are replaced with empty strings
- `review_url`: URL for the full fact checking report
- `reviewer_name`: the name of the fact checking service.
- `claim_text`: the full text of the claim being reviewed.
- `claim_author_name`: the author of the claim being reviewed. Missing values are replaced with empty strings
- `claim_date`: the date of the claim. Missing values are replaced with empty strings
- `review_rating`: the judgments of the fact checker (under `alternateName`, names vary by fact checker)

### Data Splits

No splits are provided. There are a total of 5632 claims fact-checked.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

The fact checking is done by [FactCheck.org](https://www.factcheck.org/), [PolitiFact](https://www.politifact.com/), [The Washington Post](https://www.washingtonpost.com/), and [The Weekly Standard](https://www.weeklystandard.com/).

- [FactCheck.org](https://www.factcheck.org/) describes itself as "a nonpartisan, nonprofit 'consumer advocate' for voters that aims to reduce the level of deception and confusion in U.S. politics." It was founded by journalists Kathleen Hall Jamieson and Brooks Jackson and is currently directed by Eugene Kiely.
- [PolitiFact](https://www.politifact.com/) describes its ethics as "seeking to present the true facts, unaffected by agenda or biases, [with] journalists setting their own opinions aside." It was started in August 2007 by Times Washington Bureau Chief Bill Adair. The organization was acquired in February 2018 by the Poynter Institute, a non-profit journalism education and news media research center that also owns the Tampa Bay Times.
- [The Washington Post](https://www.washingtonpost.com/) is a newspaper considered to be near the center of the American political spectrum. In 2013 Amazon.com founder Jeff Bezos bought the newspaper and affiliated publications.

The original data source also contains 132 items reviewed by [The Weekly Standard](https://www.weeklystandard.com/), which was a neo-conservative American newspaper. It is the most politically loaded source of the group: it was originally a vocal critic of the practice of fact-checking and has historically taken stances [close to the American right](https://en.wikipedia.org/wiki/The_Weekly_Standard#Support_of_the_invasion_of_Iraq). It also had to admit responsibility for baseless accusations against a well-known author in a public [libel case](https://en.wikipedia.org/wiki/The_Weekly_Standard#Libel_case). The fact checked items from this source can be found in the `weekly_standard` configuration but should be used only with full understanding of this context.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

See the section above describing the [fact checking organizations](#who-are-the-annotators?).

[More Information Needed]

### Other Known Limitations

Dataset provided for research purposes only. Please check dataset license for additional information.

## Additional Information

### Dataset Curators

This fact checking dataset is maintained by [datacommons.org](https://datacommons.org/), a Google initiative.

### Licensing Information

All fact checked items are released under a `CC-BY-NC-4.0` License.

### Citation Information

Data Commons 2020, Fact Checks, electronic dataset, Data Commons, viewed 16 Dec 2020, <https://datacommons.org>.

### Contributions

Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
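For readers who want to inspect the distribution of verdicts described above, here is a small sketch (not from the original card) that loads the main configuration and tallies review ratings per fact checker. It assumes the loader still resolves on the Hugging Face Hub as `datacommons_factcheck` with the `fctchk_politifact_wapo` configuration named in this record's metadata; the Weekly Standard items live in the separate `weekly_standard` configuration.

```
from collections import Counter
from datasets import load_dataset

# Sketch only: assumes the loader resolves as "datacommons_factcheck" and that the
# "fctchk_politifact_wapo" configuration exposes the fields listed above.
claims = load_dataset("datacommons_factcheck", "fctchk_politifact_wapo", split="train")

# Rating strings are free-form and vary by fact checker, so tally them per reviewer
# instead of assuming a fixed label set.
ratings = Counter((ex["reviewer_name"], ex["review_rating"]) for ex in claims)
for (reviewer, rating), count in ratings.most_common(10):
    print(f"{reviewer:15s} {rating:25s} {count}")
```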
datacommons_factcheck
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K", "n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "pretty_name": "DataCommons Fact Checked claims", "config_names": ["fctchk_politifact_wapo", "weekly_standard"], "dataset_info": [{"config_name": "fctchk_politifact_wapo", "features": [{"name": "reviewer_name", "dtype": "string"}, {"name": "claim_text", "dtype": "string"}, {"name": "review_date", "dtype": "string"}, {"name": "review_url", "dtype": "string"}, {"name": "review_rating", "dtype": "string"}, {"name": "claim_author_name", "dtype": "string"}, {"name": "claim_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1772321, "num_examples": 5632}], "download_size": 671896, "dataset_size": 1772321}, {"config_name": "weekly_standard", "features": [{"name": "reviewer_name", "dtype": "string"}, {"name": "claim_text", "dtype": "string"}, {"name": "review_date", "dtype": "string"}, {"name": "review_url", "dtype": "string"}, {"name": "review_rating", "dtype": "string"}, {"name": "claim_author_name", "dtype": "string"}, {"name": "claim_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35061, "num_examples": 132}], "download_size": 671896, "dataset_size": 35061}]}
2024-01-18T11:02:32+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-fact-checking #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-nc-4.0 #region-us
# Dataset Card for DataCommons Fact Checked claims ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Data Commons fact checking FAQ ### Dataset Summary A dataset of fact checked claims by news media maintained by URL containing the claim, author, and judgments, as well as the URL of the full explanation by the original fact-checker. The fact checking is done by URL, PolitiFact, and The Washington Post. ### Supported Tasks and Leaderboards ### Languages The data is in English ('en'). ## Dataset Structure ### Data Instances An example of fact checking instance looks as follows: ### Data Fields A data instance has the following fields: - 'review_date': the day the fact checking report was posted. Missing values are replaced with empty strings - 'review_url': URL for the full fact checking report - 'reviewer_name': the name of the fact checking service. - 'claim_text': the full text of the claim being reviewed. - 'claim_author_name': the author of the claim being reviewed. Missing values are replaced with empty strings - 'claim_date' the date of the claim. Missing values are replaced with empty strings - 'review_rating': the judgments of the fact checker (under 'alternateName', names vary by fact checker) ### Data Splits No splits are provided. There are a total of 5632 claims fact-checked. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? The fact checking is done by URL, PolitiFact, The Washington Post, and The Weekly Standard. - URL self describes as "a nonpartisan, nonprofit 'consumer advocate' for voters that aims to reduce the level of deception and confusion in U.S. politics." It was founded by journalists Kathleen Hall Jamieson and Brooks Jackson and is currently directed by Eugene Kiely. - PolitiFact describe their ethics as "seeking to present the true facts, unaffected by agenda or biases, [with] journalists setting their own opinions aside." It was started in August 2007 by Times Washington Bureau Chief Bill Adair. The organization was acquired in February 2018 by the Poynter Institute, a non-profit journalism education and news media research center that also owns the Tampa Bay Times. - The Washington Post is a newspaper considered to be near the center of the American political spectrum. In 2013 URL founder Jeff Bezos bought the newspaper and affiliated publications. The original data source also contains 132 items reviewed by The Weekly Standard, which was a neo-conservative American newspaper. IT is the most politically loaded source of the group, which was originally a vocal creitic of the activity of fact-checking, and has historically taken stances close to the American right. It also had to admit responsibility for baseless accusations against a well known author in a public libel case. 
The fact checked items from this source can be found in the 'weekly_standard' configuration but should be used only with full understanding of this context. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases See section above describing the fact checking organizations. ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators This fact checking dataset is maintained by URL, a Google initiative. ### Licensing Information All fact checked items are released under a 'CC-BY-NC-4.0' License. Data Commons 2020, Fact Checks, electronic dataset, Data Commons, viewed 16 Dec 2020, <URL>. ### Contributions Thanks to @yjernite for adding this dataset.
[ "# Dataset Card for DataCommons Fact Checked claims", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Data Commons fact checking FAQ", "### Dataset Summary\n\nA dataset of fact checked claims by news media maintained by URL containing the claim, author, and judgments, as well as the URL of the full explanation by the original fact-checker.\n\nThe fact checking is done by URL, PolitiFact, and The Washington Post.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe data is in English ('en').", "## Dataset Structure", "### Data Instances\n\nAn example of fact checking instance looks as follows:", "### Data Fields\n\nA data instance has the following fields:\n- 'review_date': the day the fact checking report was posted. Missing values are replaced with empty strings\n- 'review_url': URL for the full fact checking report\n- 'reviewer_name': the name of the fact checking service.\n- 'claim_text': the full text of the claim being reviewed.\n- 'claim_author_name': the author of the claim being reviewed. Missing values are replaced with empty strings\n- 'claim_date' the date of the claim. Missing values are replaced with empty strings\n- 'review_rating': the judgments of the fact checker (under 'alternateName', names vary by fact checker)", "### Data Splits\n\nNo splits are provided. There are a total of 5632 claims fact-checked.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe fact checking is done by URL, PolitiFact, The Washington Post, and The Weekly Standard.\n\n- URL self describes as \"a nonpartisan, nonprofit 'consumer advocate' for voters that aims to reduce the level of deception and confusion in U.S. politics.\" It was founded by journalists Kathleen Hall Jamieson and Brooks Jackson and is currently directed by Eugene Kiely.\n- PolitiFact describe their ethics as \"seeking to present the true facts, unaffected by agenda or biases, [with] journalists setting their own opinions aside.\" It was started in August 2007 by Times Washington Bureau Chief Bill Adair. The organization was acquired in February 2018 by the Poynter Institute, a non-profit journalism education and news media research center that also owns the Tampa Bay Times.\n- The Washington Post is a newspaper considered to be near the center of the American political spectrum. In 2013 URL founder Jeff Bezos bought the newspaper and affiliated publications.\n\nThe original data source also contains 132 items reviewed by The Weekly Standard, which was a neo-conservative American newspaper. IT is the most politically loaded source of the group, which was originally a vocal creitic of the activity of fact-checking, and has historically taken stances close to the American right. It also had to admit responsibility for baseless accusations against a well known author in a public libel case. 
The fact checked items from this source can be found in the 'weekly_standard' configuration but should be used only with full understanding of this context.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases\n\nSee section above describing the fact checking organizations.", "### Other Known Limitations\n\nDataset provided for research purposes only. Please check dataset license for additional information.", "## Additional Information", "### Dataset Curators\n\nThis fact checking dataset is maintained by URL, a Google initiative.", "### Licensing Information\n\nAll fact checked items are released under a 'CC-BY-NC-4.0' License.\n\n\n\nData Commons 2020, Fact Checks, electronic dataset, Data Commons, viewed 16 Dec 2020, <URL>.", "### Contributions\n\nThanks to @yjernite for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-nc-4.0 #region-us \n", "# Dataset Card for DataCommons Fact Checked claims", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Data Commons fact checking FAQ", "### Dataset Summary\n\nA dataset of fact checked claims by news media maintained by URL containing the claim, author, and judgments, as well as the URL of the full explanation by the original fact-checker.\n\nThe fact checking is done by URL, PolitiFact, and The Washington Post.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe data is in English ('en').", "## Dataset Structure", "### Data Instances\n\nAn example of fact checking instance looks as follows:", "### Data Fields\n\nA data instance has the following fields:\n- 'review_date': the day the fact checking report was posted. Missing values are replaced with empty strings\n- 'review_url': URL for the full fact checking report\n- 'reviewer_name': the name of the fact checking service.\n- 'claim_text': the full text of the claim being reviewed.\n- 'claim_author_name': the author of the claim being reviewed. Missing values are replaced with empty strings\n- 'claim_date' the date of the claim. Missing values are replaced with empty strings\n- 'review_rating': the judgments of the fact checker (under 'alternateName', names vary by fact checker)", "### Data Splits\n\nNo splits are provided. There are a total of 5632 claims fact-checked.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe fact checking is done by URL, PolitiFact, The Washington Post, and The Weekly Standard.\n\n- URL self describes as \"a nonpartisan, nonprofit 'consumer advocate' for voters that aims to reduce the level of deception and confusion in U.S. politics.\" It was founded by journalists Kathleen Hall Jamieson and Brooks Jackson and is currently directed by Eugene Kiely.\n- PolitiFact describe their ethics as \"seeking to present the true facts, unaffected by agenda or biases, [with] journalists setting their own opinions aside.\" It was started in August 2007 by Times Washington Bureau Chief Bill Adair. The organization was acquired in February 2018 by the Poynter Institute, a non-profit journalism education and news media research center that also owns the Tampa Bay Times.\n- The Washington Post is a newspaper considered to be near the center of the American political spectrum. In 2013 URL founder Jeff Bezos bought the newspaper and affiliated publications.\n\nThe original data source also contains 132 items reviewed by The Weekly Standard, which was a neo-conservative American newspaper. 
IT is the most politically loaded source of the group, which was originally a vocal creitic of the activity of fact-checking, and has historically taken stances close to the American right. It also had to admit responsibility for baseless accusations against a well known author in a public libel case. The fact checked items from this source can be found in the 'weekly_standard' configuration but should be used only with full understanding of this context.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases\n\nSee section above describing the fact checking organizations.", "### Other Known Limitations\n\nDataset provided for research purposes only. Please check dataset license for additional information.", "## Additional Information", "### Dataset Curators\n\nThis fact checking dataset is maintained by URL, a Google initiative.", "### Licensing Information\n\nAll fact checked items are released under a 'CC-BY-NC-4.0' License.\n\n\n\nData Commons 2020, Fact Checks, electronic dataset, Data Commons, viewed 16 Dec 2020, <URL>.", "### Contributions\n\nThanks to @yjernite for adding this dataset." ]
[ 101, 12, 120, 12, 66, 10, 14, 6, 17, 171, 24, 5, 7, 4, 10, 10, 5, 5, 355, 8, 8, 7, 19, 25, 5, 21, 51, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-fact-checking #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-nc-4.0 #region-us \n# Dataset Card for DataCommons Fact Checked claims## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Data Commons fact checking FAQ### Dataset Summary\n\nA dataset of fact checked claims by news media maintained by URL containing the claim, author, and judgments, as well as the URL of the full explanation by the original fact-checker.\n\nThe fact checking is done by URL, PolitiFact, and The Washington Post.### Supported Tasks and Leaderboards### Languages\n\nThe data is in English ('en').## Dataset Structure### Data Instances\n\nAn example of fact checking instance looks as follows:", "passage: ### Data Fields\n\nA data instance has the following fields:\n- 'review_date': the day the fact checking report was posted. Missing values are replaced with empty strings\n- 'review_url': URL for the full fact checking report\n- 'reviewer_name': the name of the fact checking service.\n- 'claim_text': the full text of the claim being reviewed.\n- 'claim_author_name': the author of the claim being reviewed. Missing values are replaced with empty strings\n- 'claim_date' the date of the claim. Missing values are replaced with empty strings\n- 'review_rating': the judgments of the fact checker (under 'alternateName', names vary by fact checker)### Data Splits\n\nNo splits are provided. There are a total of 5632 claims fact-checked.## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?\n\nThe fact checking is done by URL, PolitiFact, The Washington Post, and The Weekly Standard.\n\n- URL self describes as \"a nonpartisan, nonprofit 'consumer advocate' for voters that aims to reduce the level of deception and confusion in U.S. politics.\" It was founded by journalists Kathleen Hall Jamieson and Brooks Jackson and is currently directed by Eugene Kiely.\n- PolitiFact describe their ethics as \"seeking to present the true facts, unaffected by agenda or biases, [with] journalists setting their own opinions aside.\" It was started in August 2007 by Times Washington Bureau Chief Bill Adair. The organization was acquired in February 2018 by the Poynter Institute, a non-profit journalism education and news media research center that also owns the Tampa Bay Times.\n- The Washington Post is a newspaper considered to be near the center of the American political spectrum. In 2013 URL founder Jeff Bezos bought the newspaper and affiliated publications.\n\nThe original data source also contains 132 items reviewed by The Weekly Standard, which was a neo-conservative American newspaper. 
IT is the most politically loaded source of the group, which was originally a vocal creitic of the activity of fact-checking, and has historically taken stances close to the American right. It also had to admit responsibility for baseless accusations against a well known author in a public libel case. The fact checked items from this source can be found in the 'weekly_standard' configuration but should be used only with full understanding of this context.### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases\n\nSee section above describing the fact checking organizations.### Other Known Limitations\n\nDataset provided for research purposes only. Please check dataset license for additional information.## Additional Information" ]
9abd46cf7fc8b4c64290f26993c540b92aa145ac
# Dataset Card for DBpedia14 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Repository:** https://github.com/zhangxiangxiao/Crepe - **Paper:** https://arxiv.org/abs/1509.01626 - **Point of Contact:** [Xiang Zhang](mailto:xiang.zhang@nyu.edu) ### Dataset Summary The DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes from DBpedia 2014. They are listed in classes.txt. From each of thse 14 ontology classes, we randomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size of the training dataset is 560,000 and testing dataset 70,000. There are 3 columns in the dataset (same for train and test splits), corresponding to class index (1 to 14), title and content. The title and content are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes (""). There are no new lines in title or content. ### Supported Tasks and Leaderboards - `text-classification`, `topic-classification`: The dataset is mainly used for text classification: given the content and the title, predict the correct topic. ### Languages Although DBpedia is a multilingual knowledge base, the DBpedia14 extract contains English data mainly, other languages may appear (e.g. a film whose title is origanlly not English). ## Dataset Structure ### Data Instances A typical data point, comprises of a title, a content and the corresponding label. An example from the DBpedia test set looks as follows: ``` { 'title':'', 'content':" TY KU /taɪkuː/ is an American alcoholic beverage company that specializes in sake and other spirits. The privately-held company was founded in 2004 and is headquartered in New York City New York. While based in New York TY KU's beverages are made in Japan through a joint venture with two sake breweries. Since 2011 TY KU's growth has extended its products into all 50 states.", 'label':0 } ``` ### Data Fields - 'title': a string containing the title of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). - 'content': a string containing the body of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). - 'label': one of the 14 possible topics. ### Data Splits The data is split into a training and test set. 
For each of the 14 classes we have 40,000 training samples and 5,000 testing samples. Therefore, the total size of the training dataset is 560,000 and testing dataset 70,000. ## Dataset Creation ### Curation Rationale The DBPedia ontology classification dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015). ### Source Data #### Initial Data Collection and Normalization Source data is taken from DBpedia: https://wiki.dbpedia.org/develop/datasets #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The DBPedia ontology classification dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015). ### Licensing Information The DBPedia ontology classification dataset is licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. ### Citation Information ``` @inproceedings{NIPS2015_250cf8b5, author = {Zhang, Xiang and Zhao, Junbo and LeCun, Yann}, booktitle = {Advances in Neural Information Processing Systems}, editor = {C. Cortes and N. Lawrence and D. Lee and M. Sugiyama and R. Garnett}, pages = {}, publisher = {Curran Associates, Inc.}, title = {Character-level Convolutional Networks for Text Classification}, url = {https://proceedings.neurips.cc/paper_files/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf}, volume = {28}, year = {2015} } ``` Lehmann, Jens, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann et al. "DBpedia–a large-scale, multilingual knowledge base extracted from Wikipedia." Semantic web 6, no. 2 (2015): 167-195. ### Contributions Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.
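As a quick usage illustration of the fields described above, the sketch below loads the dataset with the Hugging Face `datasets` library and maps the integer labels back to topic names. The repository name `fancyzhx/dbpedia_14` and the 0-13 label indexing are taken from this card's metadata; treat the exact call as an assumption rather than a guaranteed recipe.

```
# Sketch only: load DBpedia14 and recover human-readable topic names.
# Assumes the dataset is reachable as "fancyzhx/dbpedia_14" on the Hugging Face Hub.
from datasets import load_dataset

dbpedia = load_dataset("fancyzhx/dbpedia_14")

# The label feature is a ClassLabel with 14 names (Company, ..., WrittenWork).
label_names = dbpedia["train"].features["label"].names
print(len(label_names), label_names[:3])

# Inspect one test example: title, content length, and readable topic.
ex = dbpedia["test"][0]
print(ex["title"], len(ex["content"]), label_names[ex["label"]])
```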
fancyzhx/dbpedia_14
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "arxiv:1509.01626", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["topic-classification"], "paperswithcode_id": "dbpedia", "pretty_name": "DBpedia", "dataset_info": {"config_name": "dbpedia_14", "features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "Company", "1": "EducationalInstitution", "2": "Artist", "3": "Athlete", "4": "OfficeHolder", "5": "MeanOfTransportation", "6": "Building", "7": "NaturalPlace", "8": "Village", "9": "Animal", "10": "Plant", "11": "Album", "12": "Film", "13": "WrittenWork"}}}}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 178428970, "num_examples": 560000}, {"name": "test", "num_bytes": 22310285, "num_examples": 70000}], "download_size": 119424374, "dataset_size": 200739255}, "configs": [{"config_name": "dbpedia_14", "data_files": [{"split": "train", "path": "dbpedia_14/train-*"}, {"split": "test", "path": "dbpedia_14/test-*"}], "default": true}]}
2024-01-22T11:57:58+00:00
[ "1509.01626" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-topic-classification #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-3.0 #arxiv-1509.01626 #region-us
# Dataset Card for DBpedia14 ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: URL - Paper: URL - Point of Contact: Xiang Zhang ### Dataset Summary The DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes from DBpedia 2014. They are listed in URL. From each of thse 14 ontology classes, we randomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size of the training dataset is 560,000 and testing dataset 70,000. There are 3 columns in the dataset (same for train and test splits), corresponding to class index (1 to 14), title and content. The title and content are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes (""). There are no new lines in title or content. ### Supported Tasks and Leaderboards - 'text-classification', 'topic-classification': The dataset is mainly used for text classification: given the content and the title, predict the correct topic. ### Languages Although DBpedia is a multilingual knowledge base, the DBpedia14 extract contains English data mainly, other languages may appear (e.g. a film whose title is origanlly not English). ## Dataset Structure ### Data Instances A typical data point, comprises of a title, a content and the corresponding label. An example from the DBpedia test set looks as follows: ### Data Fields - 'title': a string containing the title of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). - 'content': a string containing the body of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). - 'label': one of the 14 possible topics. ### Data Splits The data is split into a training and test set. For each of the 14 classes we have 40,000 training samples and 5,000 testing samples. Therefore, the total size of the training dataset is 560,000 and testing dataset 70,000. ## Dataset Creation ### Curation Rationale The DBPedia ontology classification dataset is constructed by Xiang Zhang (URL@URL), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015). ### Source Data #### Initial Data Collection and Normalization Source data is taken from DBpedia: URL #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The DBPedia ontology classification dataset is constructed by Xiang Zhang (URL@URL), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015). ### Licensing Information The DBPedia ontology classification dataset is licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. Lehmann, Jens, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann et al. "DBpedia–a large-scale, multilingual knowledge base extracted from Wikipedia." Semantic web 6, no. 2 (2015): 167-195. ### Contributions Thanks to @hfawaz for adding this dataset.
[ "# Dataset Card for DBpedia14", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: URL\n- Point of Contact: Xiang Zhang", "### Dataset Summary\n\nThe DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes\nfrom DBpedia 2014. They are listed in URL. From each of thse 14 ontology classes, we\nrandomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size\nof the training dataset is 560,000 and testing dataset 70,000.\nThere are 3 columns in the dataset (same for train and test splits), corresponding to class index\n(1 to 14), title and content. The title and content are escaped using double quotes (\"), and any\ninternal double quote is escaped by 2 double quotes (\"\"). There are no new lines in title or content.", "### Supported Tasks and Leaderboards\n\n- 'text-classification', 'topic-classification': The dataset is mainly used for text classification: given the content\nand the title, predict the correct topic.", "### Languages\n\nAlthough DBpedia is a multilingual knowledge base, the DBpedia14 extract contains English data mainly, other languages may appear\n(e.g. a film whose title is origanlly not English).", "## Dataset Structure", "### Data Instances\n\nA typical data point, comprises of a title, a content and the corresponding label. \n\nAn example from the DBpedia test set looks as follows:", "### Data Fields\n\n- 'title': a string containing the title of the document - escaped using double quotes (\") and any internal double quote is escaped by 2 double quotes (\"\").\n- 'content': a string containing the body of the document - escaped using double quotes (\") and any internal double quote is escaped by 2 double quotes (\"\").\n- 'label': one of the 14 possible topics.", "### Data Splits\n\nThe data is split into a training and test set.\nFor each of the 14 classes we have 40,000 training samples and 5,000 testing samples.\nTherefore, the total size of the training dataset is 560,000 and testing dataset 70,000.", "## Dataset Creation", "### Curation Rationale\n\nThe DBPedia ontology classification dataset is constructed by Xiang Zhang (URL@URL), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. 
Advances in Neural Information Processing Systems 28 (NIPS 2015).", "### Source Data", "#### Initial Data Collection and Normalization\n\nSource data is taken from DBpedia: URL", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe DBPedia ontology classification dataset is constructed by Xiang Zhang (URL@URL), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).", "### Licensing Information\n\nThe DBPedia ontology classification dataset is licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License.\n\n\n\n\n\nLehmann, Jens, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann et al. \"DBpedia–a large-scale, multilingual knowledge base extracted from Wikipedia.\" Semantic web 6, no. 2 (2015): 167-195.", "### Contributions\n\nThanks to @hfawaz for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-topic-classification #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-3.0 #arxiv-1509.01626 #region-us \n", "# Dataset Card for DBpedia14", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: URL\n- Point of Contact: Xiang Zhang", "### Dataset Summary\n\nThe DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes\nfrom DBpedia 2014. They are listed in URL. From each of thse 14 ontology classes, we\nrandomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size\nof the training dataset is 560,000 and testing dataset 70,000.\nThere are 3 columns in the dataset (same for train and test splits), corresponding to class index\n(1 to 14), title and content. The title and content are escaped using double quotes (\"), and any\ninternal double quote is escaped by 2 double quotes (\"\"). There are no new lines in title or content.", "### Supported Tasks and Leaderboards\n\n- 'text-classification', 'topic-classification': The dataset is mainly used for text classification: given the content\nand the title, predict the correct topic.", "### Languages\n\nAlthough DBpedia is a multilingual knowledge base, the DBpedia14 extract contains English data mainly, other languages may appear\n(e.g. a film whose title is origanlly not English).", "## Dataset Structure", "### Data Instances\n\nA typical data point, comprises of a title, a content and the corresponding label. \n\nAn example from the DBpedia test set looks as follows:", "### Data Fields\n\n- 'title': a string containing the title of the document - escaped using double quotes (\") and any internal double quote is escaped by 2 double quotes (\"\").\n- 'content': a string containing the body of the document - escaped using double quotes (\") and any internal double quote is escaped by 2 double quotes (\"\").\n- 'label': one of the 14 possible topics.", "### Data Splits\n\nThe data is split into a training and test set.\nFor each of the 14 classes we have 40,000 training samples and 5,000 testing samples.\nTherefore, the total size of the training dataset is 560,000 and testing dataset 70,000.", "## Dataset Creation", "### Curation Rationale\n\nThe DBPedia ontology classification dataset is constructed by Xiang Zhang (URL@URL), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. 
Advances in Neural Information Processing Systems 28 (NIPS 2015).", "### Source Data", "#### Initial Data Collection and Normalization\n\nSource data is taken from DBpedia: URL", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe DBPedia ontology classification dataset is constructed by Xiang Zhang (URL@URL), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).", "### Licensing Information\n\nThe DBPedia ontology classification dataset is licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License.\n\n\n\n\n\nLehmann, Jens, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann et al. \"DBpedia–a large-scale, multilingual knowledge base extracted from Wikipedia.\" Semantic web 6, no. 2 (2015): 167-195.", "### Contributions\n\nThanks to @hfawaz for adding this dataset." ]
[ 102, 8, 120, 26, 154, 49, 48, 6, 38, 94, 56, 5, 112, 4, 19, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 111, 107, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-topic-classification #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-sa-3.0 #arxiv-1509.01626 #region-us \n# Dataset Card for DBpedia14## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: URL\n- Point of Contact: Xiang Zhang### Dataset Summary\n\nThe DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes\nfrom DBpedia 2014. They are listed in URL. From each of thse 14 ontology classes, we\nrandomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size\nof the training dataset is 560,000 and testing dataset 70,000.\nThere are 3 columns in the dataset (same for train and test splits), corresponding to class index\n(1 to 14), title and content. The title and content are escaped using double quotes (\"), and any\ninternal double quote is escaped by 2 double quotes (\"\"). There are no new lines in title or content.### Supported Tasks and Leaderboards\n\n- 'text-classification', 'topic-classification': The dataset is mainly used for text classification: given the content\nand the title, predict the correct topic.### Languages\n\nAlthough DBpedia is a multilingual knowledge base, the DBpedia14 extract contains English data mainly, other languages may appear\n(e.g. a film whose title is origanlly not English).", "passage: ## Dataset Structure### Data Instances\n\nA typical data point, comprises of a title, a content and the corresponding label. \n\nAn example from the DBpedia test set looks as follows:### Data Fields\n\n- 'title': a string containing the title of the document - escaped using double quotes (\") and any internal double quote is escaped by 2 double quotes (\"\").\n- 'content': a string containing the body of the document - escaped using double quotes (\") and any internal double quote is escaped by 2 double quotes (\"\").\n- 'label': one of the 14 possible topics.### Data Splits\n\nThe data is split into a training and test set.\nFor each of the 14 classes we have 40,000 training samples and 5,000 testing samples.\nTherefore, the total size of the training dataset is 560,000 and testing dataset 70,000.## Dataset Creation### Curation Rationale\n\nThe DBPedia ontology classification dataset is constructed by Xiang Zhang (URL@URL), licensed under the terms of the Creative Commons Attribution-ShareAlike License and the GNU Free Documentation License. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. 
Advances in Neural Information Processing Systems 28 (NIPS 2015).### Source Data#### Initial Data Collection and Normalization\n\nSource data is taken from DBpedia: URL#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information" ]
3f756ab4572e071eb53e887ab629f19fa747d39e
# Dataset Card for DBRD ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Dutch Book Review Dataset (DBRD) homepage](https://benjaminvdb.github.io/DBRD) - **Repository:** https://github.com/benjaminvdb/DBRD - **Paper:** [The merits of Universal Language Model Fine-tuning for Small Datasets - a case with Dutch book reviews](https://arxiv.org/abs/1910.00896) - **Leaderboard:** - **Point of Contact:** [Benjamin van der Burgh](mailto:benjaminvdb@gmail.com) ### Dataset Summary The DBRD (pronounced *dee-bird*) dataset contains over 110k book reviews of which 22k have associated binary sentiment polarity labels. It is intended as a benchmark for sentiment classification in Dutch and was created due to a lack of annotated datasets in Dutch that are suitable for this task. ### Supported Tasks and Leaderboards - `text-generation`: The dataset can be used to train a model for sequence modeling, more specifically language modeling. - `text-classification`: The dataset can be used to train a model for text classification, more specifically sentiment classification, using the provided positive/negative sentiment polarity labels. ### Languages Non-Dutch reviews were filtered out using [langdetect](https://github.com/Mimino666/langdetect), and all reviews should therefore be in Dutch (nl). They are written by reviewers on [Hebban](https://www.hebban.nl), a Dutch website for book reviews. ## Dataset Structure ### Data Instances The dataset contains three subsets: train, test, and unsupervised. The `train` and `test` sets contain labels, while the `unsupervised` set doesn't (the label value is -1 for each instance in `unsupervised`). Here's an example of a positive review, indicated with a label value of `1`. ``` { 'label': 1, 'text': 'Super om te lezen hoe haar leven is vergaan.\nBijzonder dat ze zo openhartig is geweest.' } ``` ### Data Fields - `label`: either 0 (negative) or 1 (positive) in the supervised sets `train` and `test`. These are always -1 for the unsupervised set. - `text`: book review as a utf-8 encoded string. ### Data Splits The `train` and `test` sets were constructed by extracting all non-neutral reviews because we want to assign either a positive or negative polarity label to each instance. Furthermore, the positive (pos) and negative (neg) labels were balanced in both train and test sets. The remainder was added to the unsupervised set. | | Train | Test | Unsupervised | | ----- | ------ | ----- | ----------- | | # No. 
texts | 20028 | 2224 | 96264 | | % of total | 16.9% | 1.9% | 81.2% | ## Dataset Creation ### Curation Rationale This dataset was created due to a lack of annotated Dutch text that is suitable for sentiment classification. Non-Dutch texts were therefore removed, but other than that, no curation was done. ### Source Data The book reviews were taken from [Hebban](https://www.hebban.nl), a Dutch platform for book reviews. #### Initial Data Collection and Normalization The source code of the scraper and preprocessing process can be found in the [DBRD GitHub repository](https://github.com/benjaminvdb/DBRD). #### Who are the source language producers? The reviews are written by users of [Hebban](https://www.hebban.nl) and are of varying quality. Some are short, others long, and many contain spelling mistakes and other errors. ### Annotations Each book review was accompanied by a 1 to 5-star rating. The annotations are produced by mapping the user-provided ratings to either a positive or negative label. 1 and 2-star ratings are given the negative label `0` and 4 and 5-star ratings the positive label `1`. Reviews with a rating of 3 stars are considered neutral and left out of the `train`/`test` sets and added to the unsupervised set. #### Annotation process Users of [Hebban](https://www.hebban.nl) were unaware that their reviews would be used in the creation of this dataset. #### Who are the annotators? The annotators are the [Hebban](https://www.hebban.nl) users who wrote the book reviews associated with the annotation. Anyone can register on [Hebban](https://www.hebban.nl) and it's impossible to know the demographics of this group. ### Personal and Sensitive Information The book reviews and ratings are publicly available on [Hebban](https://www.hebban.nl) and no personal or otherwise sensitive information is contained in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset While predicting sentiment of book reviews in itself is not that interesting, the value of this dataset lies in its usage for benchmarking models. The dataset contains some challenges that are common to outings on the internet, such as spelling mistakes and other errors. It is therefore very useful for validating models for their real-world performance. These datasets are abundant for English but are harder to find for Dutch, making them a valuable resource for ML tasks in this language. ### Discussion of Biases [More Information Needed] ### Other Known Limitations Reviews on [Hebban](https://www.hebban.nl) are usually written in Dutch, but some have been written in English and possibly in other languages. While we've done our best to filter out non-Dutch texts, it's hard to do this without errors. For example, some reviews are in multiple languages, and these might slip through. Also be aware that some commercial outings can appear in the text, making them different from other reviews and influencing your models. While this doesn't pose a major issue in most cases, we just wanted to mention it briefly. ## Additional Information ### Dataset Curators This dataset was created by [Benjamin van der Burgh](mailto:benjaminvdb@gmail.com), who was working at [Leiden Institute of Advanced Computer Science (LIACS)](https://liacs.leidenuniv.nl/) at the time. ### Licensing Information The dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/). 
### Citation Information Please use the following citation when making use of this dataset in your work. ``` @article{DBLP:journals/corr/abs-1910-00896, author = {Benjamin van der Burgh and Suzan Verberne}, title = {The merits of Universal Language Model Fine-tuning for Small Datasets - a case with Dutch book reviews}, journal = {CoRR}, volume = {abs/1910.00896}, year = {2019}, url = {http://arxiv.org/abs/1910.00896}, archivePrefix = {arXiv}, eprint = {1910.00896}, timestamp = {Fri, 04 Oct 2019 12:28:06 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-00896.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@benjaminvdb](https://github.com/benjaminvdb) for adding this dataset.
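For readers who want to reproduce the splits described above, here is a minimal sketch using the Hugging Face `datasets` library. The repository name `dbrd` follows this card's metadata; depending on the library version, loading a script-based dataset like this one may additionally require `trust_remote_code=True`, so treat the call as an assumption rather than a verified recipe.

```
# Minimal sketch: load DBRD and separate labeled from unlabeled reviews.
from datasets import load_dataset

dbrd = load_dataset("dbrd")

# Per the card: 20,028 train and 2,224 test labeled reviews, plus 96,264 unsupervised ones.
print({name: split.num_rows for name, split in dbrd.items()})

# Supervised splits use 0 (neg) / 1 (pos); the unsupervised split is labeled -1 throughout.
print(dbrd["train"][0]["label"], dbrd["unsupervised"][0]["label"])
```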
dbrd
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:nl", "license:cc-by-nc-sa-4.0", "arxiv:1910.00896", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["nl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask", "text-classification"], "task_ids": ["language-modeling", "masked-language-modeling", "sentiment-classification"], "paperswithcode_id": "dbrd", "pretty_name": "DBRD", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}], "config_name": "plain_text", "splits": [{"name": "train", "num_bytes": 29496333, "num_examples": 20028}, {"name": "test", "num_bytes": 3246243, "num_examples": 2224}, {"name": "unsupervised", "num_bytes": 152733031, "num_examples": 96264}], "download_size": 79065872, "dataset_size": 185475607}}
2024-01-18T11:02:34+00:00
[ "1910.00896" ]
[ "nl" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_categories-text-classification #task_ids-language-modeling #task_ids-masked-language-modeling #task_ids-sentiment-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Dutch #license-cc-by-nc-sa-4.0 #arxiv-1910.00896 #region-us
Dataset Card for DBRD ===================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Dutch Book Review Dataset (DBRD) homepage * Repository: URL * Paper: The merits of Universal Language Model Fine-tuning for Small Datasets - a case with Dutch book reviews * Leaderboard: * Point of Contact: Benjamin van der Burgh ### Dataset Summary The DBRD (pronounced *dee-bird*) dataset contains over 110k book reviews of which 22k have associated binary sentiment polarity labels. It is intended as a benchmark for sentiment classification in Dutch and was created due to a lack of annotated datasets in Dutch that are suitable for this task. ### Supported Tasks and Leaderboards * 'text-generation': The dataset can be used to train a model for sequence modeling, more specifically language modeling. * 'text-classification': The dataset can be used to train a model for text classification, more specifically sentiment classification, using the provided positive/negative sentiment polarity labels. ### Languages Non-Dutch reviews were filtered out using langdetect, and all reviews should therefore be in Dutch (nl). They are written by reviewers on Hebban, a Dutch website for book reviews. Dataset Structure ----------------- ### Data Instances The dataset contains three subsets: train, test, and unsupervised. The 'train' and 'test' sets contain labels, while the 'unsupervised' set doesn't (the label value is -1 for each instance in 'unsupervised'). Here's an example of a positive review, indicated with a label value of '1'. ### Data Fields * 'label': either 0 (negative) or 1 (positive) in the supervised sets 'train' and 'test'. These are always -1 for the unsupervised set. * 'text': book review as a utf-8 encoded string. ### Data Splits The 'train' and 'test' sets were constructed by extracting all non-neutral reviews because we want to assign either a positive or negative polarity label to each instance. Furthermore, the positive (pos) and negative (neg) labels were balanced in both train and test sets. The remainder was added to the unsupervised set. Dataset Creation ---------------- ### Curation Rationale This dataset was created due to a lack of annotated Dutch text that is suitable for sentiment classification. Non-Dutch texts were therefore removed, but other than that, no curation was done. ### Source Data The book reviews were taken from Hebban, a Dutch platform for book reviews. #### Initial Data Collection and Normalization The source code of the scraper and preprocessing process can be found in the DBRD GitHub repository. #### Who are the source language producers? The reviews are written by users of Hebban and are of varying quality. Some are short, others long, and many contain spelling mistakes and other errors. ### Annotations Each book review was accompanied by a 1 to 5-star rating. The annotations are produced by mapping the user-provided ratings to either a positive or negative label. 
1 and 2-star ratings are given the negative label '0' and 4 and 5-star ratings the positive label '1'. Reviews with a rating of 3 stars are considered neutral and left out of the 'train'/'test' sets and added to the unsupervised set. #### Annotation process Users of Hebban were unaware that their reviews would be used in the creation of this dataset. #### Who are the annotators? The annotators are the Hebban users who wrote the book reviews associated with the annotation. Anyone can register on Hebban and it's impossible to know the demographics of this group. ### Personal and Sensitive Information The book reviews and ratings are publicly available on Hebban and no personal or otherwise sensitive information is contained in this dataset. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset While predicting sentiment of book reviews in itself is not that interesting, the value of this dataset lies in its usage for benchmarking models. The dataset contains some challenges that are common to outings on the internet, such as spelling mistakes and other errors. It is therefore very useful for validating models for their real-world performance. These datasets are abundant for English but are harder to find for Dutch, making them a valuable resource for ML tasks in this language. ### Discussion of Biases ### Other Known Limitations Reviews on Hebban are usually written in Dutch, but some have been written in English and possibly in other languages. While we've done our best to filter out non-Dutch texts, it's hard to do this without errors. For example, some reviews are in multiple languages, and these might slip through. Also be aware that some commercial outings can appear in the text, making them different from other reviews and influencing your models. While this doesn't pose a major issue in most cases, we just wanted to mention it briefly. Additional Information ---------------------- ### Dataset Curators This dataset was created by Benjamin van der Burgh, who was working at Leiden Institute of Advanced Computer Science (LIACS) at the time. ### Licensing Information The dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Please use the following citation when making use of this dataset in your work. ### Contributions Thanks to @benjaminvdb for adding this dataset.
[ "### Dataset Summary\n\n\nThe DBRD (pronounced *dee-bird*) dataset contains over 110k book reviews of which 22k have associated binary sentiment polarity labels. It is intended as a benchmark for sentiment classification in Dutch and was created due to a lack of annotated datasets in Dutch that are suitable for this task.", "### Supported Tasks and Leaderboards\n\n\n* 'text-generation': The dataset can be used to train a model for sequence modeling, more specifically language modeling.\n* 'text-classification': The dataset can be used to train a model for text classification, more specifically sentiment classification, using the provided positive/negative sentiment polarity labels.", "### Languages\n\n\nNon-Dutch reviews were filtered out using langdetect, and all reviews should therefore be in Dutch (nl). They are written by reviewers on Hebban, a Dutch website for book reviews.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe dataset contains three subsets: train, test, and unsupervised. The 'train' and 'test' sets contain labels, while the 'unsupervised' set doesn't (the label value is -1 for each instance in 'unsupervised'). Here's an example of a positive review, indicated with a label value of '1'.", "### Data Fields\n\n\n* 'label': either 0 (negative) or 1 (positive) in the supervised sets 'train' and 'test'. These are always -1 for the unsupervised set.\n* 'text': book review as a utf-8 encoded string.", "### Data Splits\n\n\nThe 'train' and 'test' sets were constructed by extracting all non-neutral reviews because we want to assign either a positive or negative polarity label to each instance. Furthermore, the positive (pos) and negative (neg) labels were balanced in both train and test sets. The remainder was added to the unsupervised set.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis dataset was created due to a lack of annotated Dutch text that is suitable for sentiment classification. Non-Dutch texts were therefore removed, but other than that, no curation was done.", "### Source Data\n\n\nThe book reviews were taken from Hebban, a Dutch platform for book reviews.", "#### Initial Data Collection and Normalization\n\n\nThe source code of the scraper and preprocessing process can be found in the DBRD GitHub repository.", "#### Who are the source language producers?\n\n\nThe reviews are written by users of Hebban and are of varying quality. Some are short, others long, and many contain spelling mistakes and other errors.", "### Annotations\n\n\nEach book review was accompanied by a 1 to 5-star rating. The annotations are produced by mapping the user-provided ratings to either a positive or negative label. 1 and 2-star ratings are given the negative label '0' and 4 and 5-star ratings the positive label '1'. Reviews with a rating of 3 stars are considered neutral and left out of the 'train'/'test' sets and added to the unsupervised set.", "#### Annotation process\n\n\nUsers of Hebban were unaware that their reviews would be used in the creation of this dataset.", "#### Who are the annotators?\n\n\nThe annotators are the Hebban users who wrote the book reviews associated with the annotation. 
Anyone can register on Hebban and it's impossible to know the demographics of this group.", "### Personal and Sensitive Information\n\n\nThe book reviews and ratings are publicly available on Hebban and no personal or otherwise sensitive information is contained in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nWhile predicting sentiment of book reviews in itself is not that interesting, the value of this dataset lies in its usage for benchmarking models. The dataset contains some challenges that are common to outings on the internet, such as spelling mistakes and other errors. It is therefore very useful for validating models for their real-world performance. These datasets are abundant for English but are harder to find for Dutch, making them a valuable resource for ML tasks in this language.", "### Discussion of Biases", "### Other Known Limitations\n\n\nReviews on Hebban are usually written in Dutch, but some have been written in English and possibly in other languages. While we've done our best to filter out non-Dutch texts, it's hard to do this without errors. For example, some reviews are in multiple languages, and these might slip through. Also be aware that some commercial outings can appear in the text, making them different from other reviews and influencing your models. While this doesn't pose a major issue in most cases, we just wanted to mention it briefly.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was created by Benjamin van der Burgh, who was working at Leiden Institute of Advanced Computer Science (LIACS) at the time.", "### Licensing Information\n\n\nThe dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\n\n\nPlease use the following citation when making use of this dataset in your work.", "### Contributions\n\n\nThanks to @benjaminvdb for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_categories-text-classification #task_ids-language-modeling #task_ids-masked-language-modeling #task_ids-sentiment-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Dutch #license-cc-by-nc-sa-4.0 #arxiv-1910.00896 #region-us \n", "### Dataset Summary\n\n\nThe DBRD (pronounced *dee-bird*) dataset contains over 110k book reviews of which 22k have associated binary sentiment polarity labels. It is intended as a benchmark for sentiment classification in Dutch and was created due to a lack of annotated datasets in Dutch that are suitable for this task.", "### Supported Tasks and Leaderboards\n\n\n* 'text-generation': The dataset can be used to train a model for sequence modeling, more specifically language modeling.\n* 'text-classification': The dataset can be used to train a model for text classification, more specifically sentiment classification, using the provided positive/negative sentiment polarity labels.", "### Languages\n\n\nNon-Dutch reviews were filtered out using langdetect, and all reviews should therefore be in Dutch (nl). They are written by reviewers on Hebban, a Dutch website for book reviews.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe dataset contains three subsets: train, test, and unsupervised. The 'train' and 'test' sets contain labels, while the 'unsupervised' set doesn't (the label value is -1 for each instance in 'unsupervised'). Here's an example of a positive review, indicated with a label value of '1'.", "### Data Fields\n\n\n* 'label': either 0 (negative) or 1 (positive) in the supervised sets 'train' and 'test'. These are always -1 for the unsupervised set.\n* 'text': book review as a utf-8 encoded string.", "### Data Splits\n\n\nThe 'train' and 'test' sets were constructed by extracting all non-neutral reviews because we want to assign either a positive or negative polarity label to each instance. Furthermore, the positive (pos) and negative (neg) labels were balanced in both train and test sets. The remainder was added to the unsupervised set.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis dataset was created due to a lack of annotated Dutch text that is suitable for sentiment classification. Non-Dutch texts were therefore removed, but other than that, no curation was done.", "### Source Data\n\n\nThe book reviews were taken from Hebban, a Dutch platform for book reviews.", "#### Initial Data Collection and Normalization\n\n\nThe source code of the scraper and preprocessing process can be found in the DBRD GitHub repository.", "#### Who are the source language producers?\n\n\nThe reviews are written by users of Hebban and are of varying quality. Some are short, others long, and many contain spelling mistakes and other errors.", "### Annotations\n\n\nEach book review was accompanied by a 1 to 5-star rating. The annotations are produced by mapping the user-provided ratings to either a positive or negative label. 1 and 2-star ratings are given the negative label '0' and 4 and 5-star ratings the positive label '1'. 
Reviews with a rating of 3 stars are considered neutral and left out of the 'train'/'test' sets and added to the unsupervised set.", "#### Annotation process\n\n\nUsers of Hebban were unaware that their reviews would be used in the creation of this dataset.", "#### Who are the annotators?\n\n\nThe annotators are the Hebban users who wrote the book reviews associated with the annotation. Anyone can register on Hebban and it's impossible to know the demographics of this group.", "### Personal and Sensitive Information\n\n\nThe book reviews and ratings are publicly available on Hebban and no personal or otherwise sensitive information is contained in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nWhile predicting sentiment of book reviews in itself is not that interesting, the value of this dataset lies in its usage for benchmarking models. The dataset contains some challenges that are common to outings on the internet, such as spelling mistakes and other errors. It is therefore very useful for validating models for their real-world performance. These datasets are abundant for English but are harder to find for Dutch, making them a valuable resource for ML tasks in this language.", "### Discussion of Biases", "### Other Known Limitations\n\n\nReviews on Hebban are usually written in Dutch, but some have been written in English and possibly in other languages. While we've done our best to filter out non-Dutch texts, it's hard to do this without errors. For example, some reviews are in multiple languages, and these might slip through. Also be aware that some commercial outings can appear in the text, making them different from other reviews and influencing your models. While this doesn't pose a major issue in most cases, we just wanted to mention it briefly.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was created by Benjamin van der Burgh, who was working at Leiden Institute of Advanced Computer Science (LIACS) at the time.", "### Licensing Information\n\n\nThe dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\n\n\nPlease use the following citation when making use of this dataset in your work.", "### Contributions\n\n\nThanks to @benjaminvdb for adding this dataset." ]
[ 147, 78, 84, 54, 91, 67, 92, 52, 20, 36, 45, 110, 27, 51, 46, 113, 8, 134, 36, 45, 19 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_categories-text-classification #task_ids-language-modeling #task_ids-masked-language-modeling #task_ids-sentiment-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Dutch #license-cc-by-nc-sa-4.0 #arxiv-1910.00896 #region-us \n### Dataset Summary\n\n\nThe DBRD (pronounced *dee-bird*) dataset contains over 110k book reviews of which 22k have associated binary sentiment polarity labels. It is intended as a benchmark for sentiment classification in Dutch and was created due to a lack of annotated datasets in Dutch that are suitable for this task.### Supported Tasks and Leaderboards\n\n\n* 'text-generation': The dataset can be used to train a model for sequence modeling, more specifically language modeling.\n* 'text-classification': The dataset can be used to train a model for text classification, more specifically sentiment classification, using the provided positive/negative sentiment polarity labels.### Languages\n\n\nNon-Dutch reviews were filtered out using langdetect, and all reviews should therefore be in Dutch (nl). They are written by reviewers on Hebban, a Dutch website for book reviews.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThe dataset contains three subsets: train, test, and unsupervised. The 'train' and 'test' sets contain labels, while the 'unsupervised' set doesn't (the label value is -1 for each instance in 'unsupervised'). Here's an example of a positive review, indicated with a label value of '1'.", "passage: ### Data Fields\n\n\n* 'label': either 0 (negative) or 1 (positive) in the supervised sets 'train' and 'test'. These are always -1 for the unsupervised set.\n* 'text': book review as a utf-8 encoded string.### Data Splits\n\n\nThe 'train' and 'test' sets were constructed by extracting all non-neutral reviews because we want to assign either a positive or negative polarity label to each instance. Furthermore, the positive (pos) and negative (neg) labels were balanced in both train and test sets. The remainder was added to the unsupervised set.\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThis dataset was created due to a lack of annotated Dutch text that is suitable for sentiment classification. Non-Dutch texts were therefore removed, but other than that, no curation was done.### Source Data\n\n\nThe book reviews were taken from Hebban, a Dutch platform for book reviews.#### Initial Data Collection and Normalization\n\n\nThe source code of the scraper and preprocessing process can be found in the DBRD GitHub repository.#### Who are the source language producers?\n\n\nThe reviews are written by users of Hebban and are of varying quality. Some are short, others long, and many contain spelling mistakes and other errors.### Annotations\n\n\nEach book review was accompanied by a 1 to 5-star rating. The annotations are produced by mapping the user-provided ratings to either a positive or negative label. 1 and 2-star ratings are given the negative label '0' and 4 and 5-star ratings the positive label '1'. 
Reviews with a rating of 3 stars are considered neutral and left out of the 'train'/'test' sets and added to the unsupervised set.#### Annotation process\n\n\nUsers of Hebban were unaware that their reviews would be used in the creation of this dataset.#### Who are the annotators?\n\n\nThe annotators are the Hebban users who wrote the book reviews associated with the annotation. Anyone can register on Hebban and it's impossible to know the demographics of this group.### Personal and Sensitive Information\n\n\nThe book reviews and ratings are publicly available on Hebban and no personal or otherwise sensitive information is contained in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------" ]
de8a685ff1624e1d7f57f6cb3b635518fe7b8113
# Dataset Card for Deal or No Deal Negotiator ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Dataset Repository](https://github.com/facebookresearch/end-to-end-negotiator) - **Paper:** [Deal or No Deal? End-to-End Learning for Negotiation Dialogues](https://arxiv.org/abs/1706.05125) ### Dataset Summary A large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other’s reward functions must reach an agreement (or a deal) via natural language dialogue. ### Supported Tasks and Leaderboards Train end-to-end models for negotiation ### Languages The text in the dataset is in English ## Dataset Structure ### Data Instances {'dialogue': 'YOU: i love basketball and reading <eos> THEM: no . i want the hat and the balls <eos> YOU: both balls ? <eos> THEM: yeah or 1 ball and 1 book <eos> YOU: ok i want the hat and you can have the rest <eos> THEM: okay deal ill take the books and the balls you can have only the hat <eos> YOU: ok <eos> THEM: <selection>', 'input': {'count': [3, 1, 2], 'value': [0, 8, 1]}, 'output': 'item0=0 item1=1 item2=0 item0=3 item1=0 item2=2', 'partner_input': {'count': [3, 1, 2], 'value': [1, 3, 2]}} ### Data Fields `dialogue`: The dialogue between the agents. \ `input`: The input of the first agent. \ `partner_input`: The input of the other agent. \ `count`: The count of the three available items. \ `value`: The value of the three available items. \ `output`: Describes how many of each of the three item types are assigned to each agent. ### Data Splits | | train | validation | test | |------------|------:|-----------:|-----:| | dialogues | 10095 | 1087 | 1052 | | self_play | 8172 | NA | NA | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? Human workers using Amazon Mechanical Turk. They were paid $0.15 per dialogue, with a $0.05 bonus for maximal scores. Only workers based in the United States with a 95% approval rating and at least 5000 previous HITs were used. 
### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The project is licensed under CC-by-NC ### Citation Information ``` @article{lewis2017deal, title={Deal or no deal? end-to-end learning for negotiation dialogues}, author={Lewis, Mike and Yarats, Denis and Dauphin, Yann N and Parikh, Devi and Batra, Dhruv}, journal={arXiv preprint arXiv:1706.05125}, year={2017} } ``` ### Contributions Thanks to [@moussaKam](https://github.com/moussaKam) for adding this dataset.
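To make the `input`, `partner_input`, and `output` fields above concrete, the sketch below parses the example instance and computes each agent's score as the value-weighted count of items it receives. Treating the first three `item*` entries of `output` as the first agent's allocation and the last three as the partner's is an assumption inferred from the example, not something this card states explicitly.

```
# Illustrative parsing of a single instance; no external dependencies required.
def parse_output(output):
    # "item0=0 item1=1 item2=0 item0=3 item1=0 item2=2" -> ([0, 1, 0], [3, 0, 2])
    counts = [int(token.split("=")[1]) for token in output.split()]
    return counts[:3], counts[3:]  # assumed: first agent's allocation, then partner's

def score(values, allocation):
    # Each agent's reward is the value-weighted sum of the items it receives.
    return sum(v * c for v, c in zip(values, allocation))

example = {
    "input": {"count": [3, 1, 2], "value": [0, 8, 1]},
    "partner_input": {"count": [3, 1, 2], "value": [1, 3, 2]},
    "output": "item0=0 item1=1 item2=0 item0=3 item1=0 item2=2",
}

you_alloc, them_alloc = parse_output(example["output"])
print(score(example["input"]["value"], you_alloc))           # 8: the single hat, valued 8
print(score(example["partner_input"]["value"], them_alloc))  # 3*1 + 0*3 + 2*2 = 7
```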
deal_or_no_dialog
[ "task_categories:conversational", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1706.05125", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["conversational"], "task_ids": [], "paperswithcode_id": "negotiation-dialogues-dataset", "pretty_name": "Deal or No Deal Negotiator", "dataset_info": [{"config_name": "dialogues", "features": [{"name": "input", "sequence": [{"name": "count", "dtype": "int32"}, {"name": "value", "dtype": "int32"}]}, {"name": "dialogue", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "partner_input", "sequence": [{"name": "count", "dtype": "int32"}, {"name": "value", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 3860624, "num_examples": 10095}, {"name": "test", "num_bytes": 396258, "num_examples": 1052}, {"name": "validation", "num_bytes": 418491, "num_examples": 1087}], "download_size": 5239072, "dataset_size": 4675373}, {"config_name": "self_play", "features": [{"name": "input", "sequence": [{"name": "count", "dtype": "int32"}, {"name": "value", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 261512, "num_examples": 8172}], "download_size": 98304, "dataset_size": 261512}]}
2024-01-18T11:02:35+00:00
[ "1706.05125" ]
[ "en" ]
TAGS #task_categories-conversational #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1706.05125 #region-us
Dataset Card for Deal or No Deal Negotiator =========================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Repository: Dataset Repository * Paper: Deal or No Deal? End-to-End Learning for Negotiation Dialogues ### Dataset Summary A large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other’s reward functions must reach an agreement (or a deal) via natural language dialogue. ### Supported Tasks and Leaderboards Train end-to-end models for negotiation ### Languages The text in the dataset is in English Dataset Structure ----------------- ### Data Instances {'dialogue': 'YOU: i love basketball and reading THEM: no . i want the hat and the balls YOU: both balls ? THEM: yeah or 1 ball and 1 book YOU: ok i want the hat and you can have the rest THEM: okay deal ill take the books and the balls you can have only the hat YOU: ok THEM: ', 'input': {'count': [3, 1, 2], 'value': [0, 8, 1]}, 'output': 'item0=0 item1=1 item2=0 item0=3 item1=0 item2=2', 'partner\_input': {'count': [3, 1, 2], 'value': [1, 3, 2]}} ### Data Fields 'dialogue': The dialogue between the agents. 'input': The input of the firt agent. 'partner\_input': The input of the other agent. 'count': The count of the three available items. 'value': The value of the three available items. 'output': Describes how many of each of the three item typesare assigned to each agent ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? Human workers using Amazon Mechanical Turk. They were paid $0.15 per dialogue, with a $0.05 bonus for maximal scores. Only workers based in the United States with a 95% approval rating and at least 5000 previous HITs were used. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The project is licenced under CC-by-NC ### Contributions Thanks to @moussaKam for adding this dataset.
[ "### Dataset Summary\n\n\nA large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other’s reward functions must reach an agreement (or a deal) via natural language dialogue.", "### Supported Tasks and Leaderboards\n\n\nTrain end-to-end models for negotiation", "### Languages\n\n\nThe text in the dataset is in English\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n{'dialogue': 'YOU: i love basketball and reading THEM: no . i want the hat and the balls YOU: both balls ? THEM: yeah or 1 ball and 1 book YOU: ok i want the hat and you can have the rest THEM: okay deal ill take the books and the balls you can have only the hat YOU: ok THEM: ',\n'input': {'count': [3, 1, 2], 'value': [0, 8, 1]},\n'output': 'item0=0 item1=1 item2=0 item0=3 item1=0 item2=2',\n'partner\\_input': {'count': [3, 1, 2], 'value': [1, 3, 2]}}", "### Data Fields\n\n\n'dialogue': The dialogue between the agents. \n\n'input': The input of the firt agent. \n\n'partner\\_input': The input of the other agent. \n\n'count': The count of the three available items. \n\n'value': The value of the three available items. \n\n'output': Describes how many of each of the three item typesare assigned to each agent", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\n\nHuman workers using Amazon Mechanical Turk. They were paid $0.15 per dialogue, with a $0.05 bonus for maximal scores. Only workers based in the United States with a 95% approval rating and at least 5000 previous HITs were used.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe project is licenced under CC-by-NC", "### Contributions\n\n\nThanks to @moussaKam for adding this dataset." ]
[ "TAGS\n#task_categories-conversational #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1706.05125 #region-us \n", "### Dataset Summary\n\n\nA large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other’s reward functions must reach an agreement (or a deal) via natural language dialogue.", "### Supported Tasks and Leaderboards\n\n\nTrain end-to-end models for negotiation", "### Languages\n\n\nThe text in the dataset is in English\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n{'dialogue': 'YOU: i love basketball and reading THEM: no . i want the hat and the balls YOU: both balls ? THEM: yeah or 1 ball and 1 book YOU: ok i want the hat and you can have the rest THEM: okay deal ill take the books and the balls you can have only the hat YOU: ok THEM: ',\n'input': {'count': [3, 1, 2], 'value': [0, 8, 1]},\n'output': 'item0=0 item1=1 item2=0 item0=3 item1=0 item2=2',\n'partner\\_input': {'count': [3, 1, 2], 'value': [1, 3, 2]}}", "### Data Fields\n\n\n'dialogue': The dialogue between the agents. \n\n'input': The input of the firt agent. \n\n'partner\\_input': The input of the other agent. \n\n'count': The count of the three available items. \n\n'value': The value of the three available items. \n\n'output': Describes how many of each of the three item typesare assigned to each agent", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\n\nHuman workers using Amazon Mechanical Turk. They were paid $0.15 per dialogue, with a $0.05 bonus for maximal scores. Only workers based in the United States with a 95% approval rating and at least 5000 previous HITs were used.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe project is licenced under CC-by-NC", "### Contributions\n\n\nThanks to @moussaKam for adding this dataset." ]
[ 89, 53, 21, 20, 183, 93, 11, 7, 4, 10, 10, 5, 5, 61, 18, 7, 8, 14, 6, 17, 17 ]
[ "passage: TAGS\n#task_categories-conversational #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1706.05125 #region-us \n### Dataset Summary\n\n\nA large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other’s reward functions must reach an agreement (or a deal) via natural language dialogue.### Supported Tasks and Leaderboards\n\n\nTrain end-to-end models for negotiation### Languages\n\n\nThe text in the dataset is in English\n\n\nDataset Structure\n-----------------### Data Instances\n\n\n{'dialogue': 'YOU: i love basketball and reading THEM: no . i want the hat and the balls YOU: both balls ? THEM: yeah or 1 ball and 1 book YOU: ok i want the hat and you can have the rest THEM: okay deal ill take the books and the balls you can have only the hat YOU: ok THEM: ',\n'input': {'count': [3, 1, 2], 'value': [0, 8, 1]},\n'output': 'item0=0 item1=1 item2=0 item0=3 item1=0 item2=2',\n'partner\\_input': {'count': [3, 1, 2], 'value': [1, 3, 2]}}### Data Fields\n\n\n'dialogue': The dialogue between the agents. \n\n'input': The input of the firt agent. \n\n'partner\\_input': The input of the other agent. \n\n'count': The count of the three available items. \n\n'value': The value of the three available items. \n\n'output': Describes how many of each of the three item typesare assigned to each agent### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations" ]
260a238f51072b02cf9a8cd649f721073e8422c5
# Dataset Card for "definite_pronoun_resolution" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.hlt.utdallas.edu/~vince/data/emnlp12/](https://www.hlt.utdallas.edu/~vince/data/emnlp12/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 0.23 MB - **Size of the generated dataset:** 0.24 MB - **Total amount of disk used:** 0.47 MB ### Dataset Summary Composed by 30 students from one of the author's undergraduate classes. These sentence pairs cover topics ranging from real events (e.g., Iran's plan to attack the Saudi ambassador to the U.S.) to events/characters in movies (e.g., Batman) and purely imaginary situations, largely reflecting the pop culture as perceived by the American kids born in the early 90s. Each annotated example spans four lines: the first line contains the sentence, the second line contains the target pronoun, the third line contains the two candidate antecedents, and the fourth line contains the correct antecedent. If the target pronoun appears more than once in the sentence, its first occurrence is the one to be resolved. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 0.23 MB - **Size of the generated dataset:** 0.24 MB - **Total amount of disk used:** 0.47 MB An example of 'train' looks as follows. ``` { "candidates": ["coreference resolution", "chunking"], "label": 0, "pronoun": "it", "sentence": "There is currently more work on coreference resolution than on chunking because it is a problem that is still far from being solved." } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `sentence`: a `string` feature. - `pronoun`: a `string` feature. - `candidates`: a `list` of `string` features. 
- `label`: a classification label, with possible values including `0` (0), `1` (1). ### Data Splits | name |train|test| |----------|----:|---:| |plain_text| 1322| 564| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{rahman2012resolving, title={Resolving complex cases of definite pronouns: the winograd schema challenge}, author={Rahman, Altaf and Ng, Vincent}, booktitle={Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning}, pages={777--789}, year={2012}, organization={Association for Computational Linguistics} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
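As a usage sketch (not part of the original card), the `plain_text` configuration described above could be loaded with the Hugging Face `datasets` library roughly as follows; the assumption that `label` indexes the correct candidate antecedent follows the field descriptions above and should be verified against the data.

```python
# Illustrative sketch: inspecting one definite_pronoun_resolution example.
from datasets import load_dataset

dpr = load_dataset("definite_pronoun_resolution", "plain_text")

example = dpr["train"][0]
# Assumption: `label` (0 or 1) points at the correct antecedent in `candidates`.
antecedent = example["candidates"][example["label"]]
print(example["sentence"])
print(example["pronoun"], "->", antecedent)
```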
definite_pronoun_resolution
[ "task_categories:token-classification", "task_ids:word-sense-disambiguation", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["word-sense-disambiguation"], "paperswithcode_id": "definite-pronoun-resolution-dataset", "pretty_name": "Definite Pronoun Resolution Dataset", "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "pronoun", "dtype": "string"}, {"name": "candidates", "sequence": "string", "length": 2}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "config_name": "plain_text", "splits": [{"name": "test", "num_bytes": 71691, "num_examples": 564}, {"name": "train", "num_bytes": 171511, "num_examples": 1322}], "download_size": 227452, "dataset_size": 243202}}
2024-01-18T11:02:36+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-word-sense-disambiguation #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #region-us
Dataset Card for "definite\_pronoun\_resolution" ================================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 0.23 MB * Size of the generated dataset: 0.24 MB * Total amount of disk used: 0.47 MB ### Dataset Summary Composed by 30 students from one of the author's undergraduate classes. These sentence pairs cover topics ranging from real events (e.g., Iran's plan to attack the Saudi ambassador to the U.S.) to events/characters in movies (e.g., Batman) and purely imaginary situations, largely reflecting the pop culture as perceived by the American kids born in the early 90s. Each annotated example spans four lines: the first line contains the sentence, the second line contains the target pronoun, the third line contains the two candidate antecedents, and the fourth line contains the correct antecedent. If the target pronoun appears more than once in the sentence, its first occurrence is the one to be resolved. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### plain\_text * Size of downloaded dataset files: 0.23 MB * Size of the generated dataset: 0.24 MB * Total amount of disk used: 0.47 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### plain\_text * 'sentence': a 'string' feature. * 'pronoun': a 'string' feature. * 'candidates': a 'list' of 'string' features. * 'label': a classification label, with possible values including '0' (0), '1' (1). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @lewtun, @patrickvonplaten for adding this dataset.
[ "### Dataset Summary\n\n\nComposed by 30 students from one of the author's undergraduate classes. These\nsentence pairs cover topics ranging from real events (e.g., Iran's plan to\nattack the Saudi ambassador to the U.S.) to events/characters in movies (e.g.,\nBatman) and purely imaginary situations, largely reflecting the pop culture as\nperceived by the American kids born in the early 90s. Each annotated example\nspans four lines: the first line contains the sentence, the second line contains\nthe target pronoun, the third line contains the two candidate antecedents, and\nthe fourth line contains the correct antecedent. If the target pronoun appears\nmore than once in the sentence, its first occurrence is the one to be resolved.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 0.23 MB\n* Size of the generated dataset: 0.24 MB\n* Total amount of disk used: 0.47 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'sentence': a 'string' feature.\n* 'pronoun': a 'string' feature.\n* 'candidates': a 'list' of 'string' features.\n* 'label': a classification label, with possible values including '0' (0), '1' (1).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @patrickvonplaten for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-word-sense-disambiguation #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #region-us \n", "### Dataset Summary\n\n\nComposed by 30 students from one of the author's undergraduate classes. These\nsentence pairs cover topics ranging from real events (e.g., Iran's plan to\nattack the Saudi ambassador to the U.S.) to events/characters in movies (e.g.,\nBatman) and purely imaginary situations, largely reflecting the pop culture as\nperceived by the American kids born in the early 90s. Each annotated example\nspans four lines: the first line contains the sentence, the second line contains\nthe target pronoun, the third line contains the two candidate antecedents, and\nthe fourth line contains the correct antecedent. If the target pronoun appears\nmore than once in the sentence, its first occurrence is the one to be resolved.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 0.23 MB\n* Size of the generated dataset: 0.24 MB\n* Total amount of disk used: 0.47 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'sentence': a 'string' feature.\n* 'pronoun': a 'string' feature.\n* 'candidates': a 'list' of 'string' features.\n* 'label': a classification label, with possible values including '0' (0), '1' (1).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @patrickvonplaten for adding this dataset." ]
[ 95, 182, 10, 11, 6, 53, 17, 72, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 28 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-word-sense-disambiguation #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #region-us \n### Dataset Summary\n\n\nComposed by 30 students from one of the author's undergraduate classes. These\nsentence pairs cover topics ranging from real events (e.g., Iran's plan to\nattack the Saudi ambassador to the U.S.) to events/characters in movies (e.g.,\nBatman) and purely imaginary situations, largely reflecting the pop culture as\nperceived by the American kids born in the early 90s. Each annotated example\nspans four lines: the first line contains the sentence, the second line contains\nthe target pronoun, the third line contains the two candidate antecedents, and\nthe fourth line contains the correct antecedent. If the target pronoun appears\nmore than once in the sentence, its first occurrence is the one to be resolved.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### plain\\_text\n\n\n* Size of downloaded dataset files: 0.23 MB\n* Size of the generated dataset: 0.24 MB\n* Total amount of disk used: 0.47 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### plain\\_text\n\n\n* 'sentence': a 'string' feature.\n* 'pronoun': a 'string' feature.\n* 'candidates': a 'list' of 'string' features.\n* 'label': a classification label, with possible values including '0' (0), '1' (1).### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?" ]
59af6924a4ca20faae2c4702f3e767a1ace1e897
# Dataset Card for Dengue Dataset in Filipino ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Dengue Dataset in Filipino homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks) - **Repository:** [Dengue Dataset in Filipino repository](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks) - **Paper:** [IEEE paper](https://ieeexplore.ieee.org/document/8459963) - **Leaderboard:** - **Point of Contact:** [Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph) ### Dataset Summary Benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular. ## Dataset Structure ### Data Instances Sample data: ``` { "text": "Tapos ang dami pang lamok.", "absent": "0", "dengue": "0", "health": "0", "mosquito": "1", "sick": "0" } ``` ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph) ### Licensing Information [More Information Needed] ### Citation Information @INPROCEEDINGS{8459963, author={E. D. {Livelo} and C. {Cheng}}, booktitle={2018 IEEE International Conference on Agents (ICA)}, title={Intelligent Dengue Infoveillance Using Gated Recurrent Neural Learning and Cross-Label Frequencies}, year={2018}, volume={}, number={}, pages={2-7}, doi={10.1109/AGENTS.2018.8459963}} } ### Contributions Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
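A minimal loading sketch (not part of the original card), assuming the Hub id `dengue_filipino` and the five label columns shown in the sample above; it simply counts how many training tweets are tagged positive for each class.

```python
# Illustrative sketch: counting positive labels per class in the training split.
from datasets import load_dataset

dengue = load_dataset("dengue_filipino")
label_columns = ["absent", "dengue", "health", "mosquito", "sick"]

train = dengue["train"]
# Each label column holds 0/1 class ids, so summing gives the positive count.
positives = {name: sum(train[name]) for name in label_columns}
print(positives)
```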
dengue_filipino
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:tl", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "machine-generated"], "language_creators": ["crowdsourced"], "language": ["tl"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "paperswithcode_id": "dengue", "pretty_name": "Dengue Dataset in Filipino", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "absent", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "dengue", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "health", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "mosquito", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "sick", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 428549, "num_examples": 4015}, {"name": "test", "num_bytes": 57364, "num_examples": 500}, {"name": "validation", "num_bytes": 54380, "num_examples": 500}], "download_size": 156014, "dataset_size": 540293}}
2024-02-01T12:38:50+00:00
[]
[ "tl" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Tagalog #license-unknown #region-us
# Dataset Card for Dengue Dataset in Filipino ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Dengue Dataset in Filipino homepage - Repository: Dengue Dataset in Filipino repository - Paper: IEEE paper - Leaderboard: - Point of Contact: Jan Christian Cruz ### Dataset Summary Benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets. ### Supported Tasks and Leaderboards ### Languages The dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular. ## Dataset Structure ### Data Instances Sample data: ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators Jan Christian Cruz ### Licensing Information @INPROCEEDINGS{8459963, author={E. D. {Livelo} and C. {Cheng}}, booktitle={2018 IEEE International Conference on Agents (ICA)}, title={Intelligent Dengue Infoveillance Using Gated Recurrent Neural Learning and Cross-Label Frequencies}, year={2018}, volume={}, number={}, pages={2-7}, doi={10.1109/AGENTS.2018.8459963}} } ### Contributions Thanks to @anaerobeth for adding this dataset.
[ "# Dataset Card for Dengue Dataset in Filipino", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Dengue Dataset in Filipino homepage\n- Repository: Dengue Dataset in Filipino repository\n- Paper: IEEE paper\n- Leaderboard:\n- Point of Contact: Jan Christian Cruz", "### Dataset Summary\n\nBenchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular.", "## Dataset Structure", "### Data Instances\n\nSample data:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nJan Christian Cruz", "### Licensing Information\n\n\n\n\n\n @INPROCEEDINGS{8459963,\n author={E. D. {Livelo} and C. {Cheng}},\n booktitle={2018 IEEE International Conference on Agents (ICA)},\n title={Intelligent Dengue Infoveillance Using Gated Recurrent Neural Learning and Cross-Label Frequencies},\n year={2018},\n volume={},\n number={},\n pages={2-7},\n doi={10.1109/AGENTS.2018.8459963}}\n }", "### Contributions\n\nThanks to @anaerobeth for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Tagalog #license-unknown #region-us \n", "# Dataset Card for Dengue Dataset in Filipino", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Dengue Dataset in Filipino homepage\n- Repository: Dengue Dataset in Filipino repository\n- Paper: IEEE paper\n- Leaderboard:\n- Point of Contact: Jan Christian Cruz", "### Dataset Summary\n\nBenchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular.", "## Dataset Structure", "### Data Instances\n\nSample data:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nJan Christian Cruz", "### Licensing Information\n\n\n\n\n\n @INPROCEEDINGS{8459963,\n author={E. D. {Livelo} and C. {Cheng}},\n booktitle={2018 IEEE International Conference on Agents (ICA)},\n title={Intelligent Dengue Infoveillance Using Gated Recurrent Neural Learning and Cross-Label Frequencies},\n year={2018},\n volume={},\n number={},\n pages={2-7},\n doi={10.1109/AGENTS.2018.8459963}}\n }", "### Contributions\n\nThanks to @anaerobeth for adding this dataset." ]
[ 107, 11, 120, 47, 63, 10, 29, 6, 10, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 9, 128, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Tagalog #license-unknown #region-us \n# Dataset Card for Dengue Dataset in Filipino## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Dengue Dataset in Filipino homepage\n- Repository: Dengue Dataset in Filipino repository\n- Paper: IEEE paper\n- Leaderboard:\n- Point of Contact: Jan Christian Cruz### Dataset Summary\n\nBenchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets.### Supported Tasks and Leaderboards### Languages\n\nThe dataset is primarily in Filipino, with the addition of some English words commonly used in Filipino vernacular.## Dataset Structure### Data Instances\n\nSample data:### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations" ]
aea09f967ca3b4e52931b380361a3f73ca67f2dd
# Dataset Card for [DialogRE] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [DialogRE Homepage](https://dataset.org/dialogre/) - **Repository:** [DialogRE Repository](https://github.com/nlpdata/dialogre) - **Paper:** [Arxiv](https://arxiv.org/abs/2004.08056v1) - **Point of Contact:** [dialogre@dataset.org](mailto:dialogre@dataset.org) ### Dataset Summary The DialogRE dataset is the first human-annotated dialogue-based relation extraction (RE) dataset, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. DialogRE can also act as a platform for studying cross-sentence RE as most facts span multiple sentences. Specifically, the dataset annotates all occurrences of 36 possible relation types that exist between pairs of arguments in the 1,788 dialogues originating from the complete transcripts of Friends (in English). ### Supported Tasks and Leaderboards * `other-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists of the prediction of the relation between two arguments that appear in a dialogue. Success on this task is typically measured by achieving a *high* [F1 Score](https://huggingface.co/metrics/f1). ### Languages The dialogues in the dataset are in English, originating from the transcripts of Friends. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances A typical data point consists of a dialogue between speakers as a list of sentences. This is followed by the annotations of the relations between the entities in the dialog. An example from the DialogRE train set looks as follows: ``` {'dialog': ["Speaker 1: It's been an hour and not one of my classmates has shown up! I tell you, when I actually die some people are gonna get seriously haunted!", 'Speaker 2: There you go! Someone came!', "Speaker 1: Ok, ok! I'm gonna go hide! Oh, this is so exciting, my first mourner!", 'Speaker 3: Hi, glad you could come.', 'Speaker 2: Please, come in.', "Speaker 4: Hi, you're Chandler Bing, right? I'm Tom Gordon, I was in your class.", 'Speaker 2: Oh yes, yes... let me... take your coat.', "Speaker 4: Thanks... uh... I'm so sorry about Ross, it's...", 'Speaker 2: At least he died doing what he loved... watching blimps.', 'Speaker 1: Who is he?', 'Speaker 2: Some guy, Tom Gordon.', "Speaker 1: I don't remember him, but then again I touched so many lives.", 'Speaker 3: So, did you know Ross well?', "Speaker 4: Oh, actually I barely knew him. Yeah, I came because I heard Chandler's news. 
D'you know if he's seeing anyone?", 'Speaker 3: Yes, he is. Me.', 'Speaker 4: What? You... You... Oh! Can I ask you a personal question? Ho-how do you shave your beard so close?', "Speaker 2: Ok Tommy, that's enough mourning for you! Here we go, bye bye!!", 'Speaker 4: Hey, listen. Call me.', 'Speaker 2: Ok!'], 'relation_data': {'r': [['per:alternate_names'], ['per:alumni'], ['per:alternate_names'], ['per:alumni', 'per:positive_impression'], ['per:alternate_names'], ['unanswerable']], 'rid': [[30], [4], [30], [4, 1], [30], [37]], 't': [[''], [''], [''], ['', 'call me'], [''], ['']], 'x': ['Speaker 2', 'Speaker 2', 'Speaker 4', 'Speaker 4', 'Speaker 4', 'Speaker 1'], 'x_type': ['PER', 'PER', 'PER', 'PER', 'PER', 'PER'], 'y': ['Chandler Bing', 'Speaker 4', 'Tom Gordon', 'Speaker 2', 'Tommy', 'Tommy'], 'y_type': ['PER', 'PER', 'PER', 'PER', 'PER', 'PER']}} ``` ### Data Fields * `dialog` * List of dialog spoken between the speakers * List of annotations per dialog per argument * `x` : First entity * `y` : Second entity * `x_type` : Type of the first entity * `y_type`: Type of the second entity * `r` : List of relations * `rid`: List of relation IDs * `t`: List of relation Trigger words ### Data Splits The data is split into a training, validation and test set as per the original dataset split. | | train | validation | test | | --------------------- |-------:|------------:|------:| | Input dialog examples | 1073 | 358 | 357 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information DialogRE dataset is intended for non-commercial research purpose only ### Citation Information ``` @inproceedings{yu2020dialogue, title={Dialogue-Based Relation Extraction}, author={Yu, Dian and Sun, Kai and Cardie, Claire and Yu, Dong}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020}, url={https://arxiv.org/abs/2004.08056v1} } ``` ### Contributions Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset.
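A minimal, illustrative sketch (not part of the original card) of loading DialogRE with the Hugging Face `datasets` library and iterating over the annotated argument pairs of one dialogue; field names follow the Data Fields section above, and `relation_data` is assumed to arrive as parallel lists, as in the example instance shown earlier.

```python
# Illustrative sketch: printing the relation annotations of one DialogRE dialogue.
from datasets import load_dataset

dialog_re = load_dataset("dialog_re")
example = dialog_re["train"][0]

print("\n".join(example["dialog"][:3]))  # first few turns of the dialogue

rel = example["relation_data"]  # parallel lists: x, y, r, rid, t, ...
for x, y, relations, triggers in zip(rel["x"], rel["y"], rel["r"], rel["t"]):
    print(f"{x} -> {y}: {relations} (triggers: {triggers})")
```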
dialog_re
[ "task_categories:other", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:other", "relation-extraction", "arxiv:2004.08056", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["other", "text-generation", "fill-mask"], "task_ids": ["dialogue-modeling"], "paperswithcode_id": "dialogre", "pretty_name": "DialogRE", "tags": ["relation-extraction"], "dataset_info": {"features": [{"name": "dialog", "sequence": "string"}, {"name": "relation_data", "sequence": [{"name": "x", "dtype": "string"}, {"name": "y", "dtype": "string"}, {"name": "x_type", "dtype": "string"}, {"name": "y_type", "dtype": "string"}, {"name": "r", "sequence": "string"}, {"name": "rid", "sequence": "int32"}, {"name": "t", "sequence": "string"}]}], "config_name": "dialog_re", "splits": [{"name": "train", "num_bytes": 1520940, "num_examples": 1073}, {"name": "test", "num_bytes": 472306, "num_examples": 357}, {"name": "validation", "num_bytes": 490580, "num_examples": 358}], "download_size": 3816234, "dataset_size": 2483826}}
2024-01-18T11:02:38+00:00
[ "2004.08056" ]
[ "en" ]
TAGS #task_categories-other #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-other #relation-extraction #arxiv-2004.08056 #region-us
Dataset Card for [DialogRE] =========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: DialogRE Homepage * Repository: DialogRE Repository * Paper: Arxiv * Point of Contact: dialogre@URL ### Dataset Summary The DialogRE dataset is the first human-annotated dialogue-based relation extraction (RE) dataset, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. DialogRE can also act as a platform for studying cross-sentence RE as most facts span multiple sentences. Specifically, the dataset annotate all occurrences of 36 possible relation types that exist between pairs of arguments in the 1,788 dialogues originating from the complete transcripts of Friends (in English). ### Supported Tasks and Leaderboards * 'other-other-relation-extraction': The dataset can be used to train a model for Relation Extraction, which consists of the prediction of relation between two arguments that appear in a dialogue. Success on this task is typically measured by achieving a *high* F1 Score. ### Languages The dialogues in the dataset is in English originating from the transcripts of Friends. The associated BCP-47 code is 'en'. Dataset Structure ----------------- ### Data Instances A typical data point consists of a dialogue between speakers as a list of sentences. This is followed by the annotations of the relations between the entities in the dialog. An example from the DialogRE train set looks as follows: ### Data Fields * 'dialog' + List of dialog spoken between the speakers * List of annotations per dialog per argument + 'x' : First entity + 'y' : Second entity + 'x\_type' : Type of the first entity + 'y\_type': Type of the second entity + 'r' : List of relations + 'rid': List of relation IDs + 't': List of relation Trigger words ### Data Splits The data is split into a training, validation and test set as per the original dataset split. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information DialogRE dataset is intended for non-commercial research purpose only ### Contributions Thanks to @vineeths96 for adding this dataset.
[ "### Dataset Summary\n\n\nThe DialogRE dataset is the first human-annotated dialogue-based relation extraction (RE) dataset, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. DialogRE can also act as a platform for studying cross-sentence RE as most facts span multiple sentences. Specifically, the dataset annotate all occurrences of 36 possible relation types that exist between pairs of arguments in the 1,788 dialogues originating from the complete transcripts of Friends (in English).", "### Supported Tasks and Leaderboards\n\n\n* 'other-other-relation-extraction': The dataset can be used to train a model for Relation Extraction, which consists of the prediction of relation between two arguments that appear in a dialogue. Success on this task is typically measured by achieving a *high* F1 Score.", "### Languages\n\n\nThe dialogues in the dataset is in English originating from the transcripts of Friends. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point consists of a dialogue between speakers as a list of sentences. This is followed by the annotations of the relations between the entities in the dialog.\n\n\nAn example from the DialogRE train set looks as follows:", "### Data Fields\n\n\n* 'dialog'\n\n\n\t+ List of dialog spoken between the speakers\n* List of annotations per dialog per argument\n\n\n\t+ 'x' : First entity\n\t+ 'y' : Second entity\n\t+ 'x\\_type' : Type of the first entity\n\t+ 'y\\_type': Type of the second entity\n\t+ 'r' : List of relations\n\t+ 'rid': List of relation IDs\n\t+ 't': List of relation Trigger words", "### Data Splits\n\n\nThe data is split into a training, validation and test set as per the original dataset split.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nDialogRE dataset is intended for non-commercial research purpose only", "### Contributions\n\n\nThanks to @vineeths96 for adding this dataset." ]
[ "TAGS\n#task_categories-other #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-other #relation-extraction #arxiv-2004.08056 #region-us \n", "### Dataset Summary\n\n\nThe DialogRE dataset is the first human-annotated dialogue-based relation extraction (RE) dataset, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. DialogRE can also act as a platform for studying cross-sentence RE as most facts span multiple sentences. Specifically, the dataset annotate all occurrences of 36 possible relation types that exist between pairs of arguments in the 1,788 dialogues originating from the complete transcripts of Friends (in English).", "### Supported Tasks and Leaderboards\n\n\n* 'other-other-relation-extraction': The dataset can be used to train a model for Relation Extraction, which consists of the prediction of relation between two arguments that appear in a dialogue. Success on this task is typically measured by achieving a *high* F1 Score.", "### Languages\n\n\nThe dialogues in the dataset is in English originating from the transcripts of Friends. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point consists of a dialogue between speakers as a list of sentences. This is followed by the annotations of the relations between the entities in the dialog.\n\n\nAn example from the DialogRE train set looks as follows:", "### Data Fields\n\n\n* 'dialog'\n\n\n\t+ List of dialog spoken between the speakers\n* List of annotations per dialog per argument\n\n\n\t+ 'x' : First entity\n\t+ 'y' : Second entity\n\t+ 'x\\_type' : Type of the first entity\n\t+ 'y\\_type': Type of the second entity\n\t+ 'r' : List of relations\n\t+ 'rid': List of relation IDs\n\t+ 't': List of relation Trigger words", "### Data Splits\n\n\nThe data is split into a training, validation and test set as per the original dataset split.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nDialogRE dataset is intended for non-commercial research purpose only", "### Contributions\n\n\nThanks to @vineeths96 for adding this dataset." ]
[ 123, 123, 77, 42, 57, 103, 32, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 21, 18 ]
[ "passage: TAGS\n#task_categories-other #task_categories-text-generation #task_categories-fill-mask #task_ids-dialogue-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-other #relation-extraction #arxiv-2004.08056 #region-us \n### Dataset Summary\n\n\nThe DialogRE dataset is the first human-annotated dialogue-based relation extraction (RE) dataset, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. DialogRE can also act as a platform for studying cross-sentence RE as most facts span multiple sentences. Specifically, the dataset annotate all occurrences of 36 possible relation types that exist between pairs of arguments in the 1,788 dialogues originating from the complete transcripts of Friends (in English).### Supported Tasks and Leaderboards\n\n\n* 'other-other-relation-extraction': The dataset can be used to train a model for Relation Extraction, which consists of the prediction of relation between two arguments that appear in a dialogue. Success on this task is typically measured by achieving a *high* F1 Score.### Languages\n\n\nThe dialogues in the dataset is in English originating from the transcripts of Friends. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA typical data point consists of a dialogue between speakers as a list of sentences. This is followed by the annotations of the relations between the entities in the dialog.\n\n\nAn example from the DialogRE train set looks as follows:" ]
037e5d881c20f5d54b7116b58130ab26f5eeaca0
# Dataset Card for HateOffensive ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage** : https://sites.google.com/view/qanta/projects/diplomacy - **Repository** : https://github.com/DenisPeskov/2020_acl_diplomacy - **Paper** : http://users.umiacs.umd.edu/~jbg/docs/2020_acl_diplomacy.pdf - **Leaderboard** : - **Point of Contact** : ### Dataset Summary This dataset contains pairwise conversations annotated by the sender and the receiver for deception (and conversely truthfulness). The 17,289 messages are gathered from 12 games. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances ``` { "messages": ["Greetings Sultan!\n\nAs your neighbor I would like to propose an alliance! What are your views on the board so far?", "I think an alliance would be great! Perhaps a dmz in the Black Sea would be a good idea to solidify this alliance?\n\nAs for my views on the board, my first moves will be Western into the Balkans and Mediterranean Sea.", "Sounds good lets call a dmz in the black sea", "What's our move this year?", "I've been away from the game for a while", "Not sure yet, what are your thoughts?", "Well I'm pretty worried about Germany attacking me (and Austria to a lesser extent) so im headed west. It looks like Italy's landing a army in Syr this fall unless you can stop it", "That sounds good to me. I'll move to defend against Italy while you move west. If it's not too much too ask, I'd like to request that you withdraw your fleet from bla.", "Oh sorry missed the msg to move out of bl sea ill do that this turn. I did bring my army down into Armenia, To help you expel the Italian. It looks like Austria and Italy are working together. If we have a chance in the region you should probably use smy to protect con. We can't afford to lose con.", "I'll defend con from both ank and smy.", "Hey sorry for stabbing you earlier, it was an especially hard choice since Turkey is usually my country of choice. 
It's cool we got to do this study huh?"],
"sender_labels": [false, true, false, true, true, true, true, true, true, true, true],
"receiver_labels": [true, true, true, true, true, true, true, true, true, true, "NOANNOTATION"],
"speakers": ["russia", "turkey", "russia", "russia", "russia", "turkey", "russia", "turkey", "russia", "turkey", "russia"],
"receivers": ["turkey", "russia", "turkey", "turkey", "turkey", "russia", "turkey", "russia", "turkey", "russia", "turkey"],
"absolute_message_index": [78, 107, 145, 370, 371, 374, 415, 420, 495, 497, 717],
"relative_message_index": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
"seasons": ["Spring", "Spring", "Spring", "Spring", "Spring", "Spring", "Fall", "Fall", "Spring", "Spring", "Fall"],
"years": ["1901", "1901", "1901", "1902", "1902", "1902", "1902", "1902", "1903", "1903", "1905"],
"game_score": ["4", "3", "4", "5", "5", "4", "5", "4", "5", "3", "7"],
"game_score_delta": ["1", "-1", "1", "1", "1", "-1", "1", "-1", "2", "-2", "7"],
"players": ["russia", "turkey"],
"game_id": 10
}
```

### Data Fields
- speakers: the sender of the message (string format. Seven possible values: russia, turkey, england, austria, germany, france, italy)
- receivers: the receiver of the message (string format. Seven possible values: russia, turkey, england, austria, germany, france, italy)
- messages: the raw message string (string format; ranges in length from a single word to several paragraphs)
- sender_labels: indicates whether the sender of the message marked it as truthful (true) or deceptive (false). This is used for our ACTUAL_LIE calculation (true/false, which can be bool or string format)
- receiver_labels: indicates whether the receiver of the message perceived it as truthful (true) or deceptive (false). In <10% of the cases, no annotation was received. This is used for our SUSPECTED_LIE calculation (string format. true/false/"NOANNOTATION")
- game_score: the current game score---supply centers---of the sender (string format that can range from 0 to 18)
- game_score_delta: the current game score---supply centers---of the sender minus the game score of the recipient (string format that ranges from -18 to 18)
- absolute_message_index: the index of the message in the entire game, across all dialogs (int format)
- relative_message_index: the index of the message in the current dialog (int format)
- seasons: the season in Diplomacy, associated with the year (string format. Spring, Fall, Winter)
- years: the year in Diplomacy, associated with the season (string format. 1901 through 1918)
- game_id: which of the 12 games the dialog comes from (int format ranging from 1 to 12)

### Data Splits
Train, Test and Validation splits

## Dataset Creation

### Curation Rationale
[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization
[More Information Needed]

#### Who are the source language producers?
[More Information Needed]

### Annotations

#### Annotation process
[More Information Needed]

#### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown ### Citation Information @inproceedings{Peskov:Cheng:Elgohary:Barrow:Danescu-Niculescu-Mizil:Boyd-Graber-2020, Title = {It Takes Two to Lie: One to Lie and One to Listen}, Author = {Denis Peskov and Benny Cheng and Ahmed Elgohary and Joe Barrow and Cristian Danescu-Niculescu-Mizil and Jordan Boyd-Graber}, Booktitle = {Association for Computational Linguistics}, Year = {2020}, Location = {Seattle}, } ### Contributions Thanks to [@MisbahKhan789](https://github.com/MisbahKhan789) for adding this dataset.
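For reference, the following is a minimal sketch of how a record with the fields described in this card could be loaded and inspected with the `datasets` library. It assumes the card corresponds to the `diplomacy_detection` dataset identifier given below and that the label and speaker columns are encoded as class-label sequences, as declared in the dataset metadata; it is an illustrative sketch, not part of the original card.

```python
from datasets import load_dataset

# Sketch only: assumes the dataset is available on the Hugging Face Hub
# under the id "diplomacy_detection" and follows the schema in the card above.
ds = load_dataset("diplomacy_detection", split="train")

dialog = ds[0]  # one game dialog: parallel lists of messages and annotations

# speakers and sender_labels are stored as integer class ids, so decode them
# back to their string names via the ClassLabel features.
speaker_feature = ds.features["speakers"].feature
label_feature = ds.features["sender_labels"].feature

for msg, speaker_id, label_id in zip(
    dialog["messages"], dialog["speakers"], dialog["sender_labels"]
):
    speaker = speaker_feature.int2str(speaker_id)
    truthful = label_feature.int2str(label_id)  # "true" or "false"
    print(f"[{speaker} | truthful={truthful}] {msg[:60]}")
```

Because `receiver_labels` carries a third class for missing annotations, the same `int2str` pattern applies to that column as well.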
diplomacy_detection
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification"], "pretty_name": "HateOffensive", "dataset_info": {"features": [{"name": "messages", "sequence": "string"}, {"name": "sender_labels", "sequence": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "receiver_labels", "sequence": {"class_label": {"names": {"0": "false", "1": "true", "2": "noannotation"}}}}, {"name": "speakers", "sequence": {"class_label": {"names": {"0": "italy", "1": "turkey", "2": "russia", "3": "england", "4": "austria", "5": "germany", "6": "france"}}}}, {"name": "receivers", "sequence": {"class_label": {"names": {"0": "italy", "1": "turkey", "2": "russia", "3": "england", "4": "austria", "5": "germany", "6": "france"}}}}, {"name": "absolute_message_index", "sequence": "int64"}, {"name": "relative_message_index", "sequence": "int64"}, {"name": "seasons", "sequence": {"class_label": {"names": {"0": "spring", "1": "fall", "2": "winter", "3": "Spring", "4": "Fall", "5": "Winter"}}}}, {"name": "years", "sequence": {"class_label": {"names": {"0": "1901", "1": "1902", "2": "1903", "3": "1904", "4": "1905", "5": "1906", "6": "1907", "7": "1908", "8": "1909", "9": "1910", "10": "1911", "11": "1912", "12": "1913", "13": "1914", "14": "1915", "15": "1916", "16": "1917", "17": "1918"}}}}, {"name": "game_score", "sequence": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9", "10": "10", "11": "11", "12": "12", "13": "13", "14": "14", "15": "15", "16": "16", "17": "17", "18": "18"}}}}, {"name": "game_score_delta", "sequence": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9", "10": "10", "11": "11", "12": "12", "13": "13", "14": "14", "15": "15", "16": "16", "17": "17", "18": "18", "19": "-1", "20": "-2", "21": "-3", "22": "-4", "23": "-5", "24": "-6", "25": "-7", "26": "-8", "27": "-9", "28": "-10", "29": "-11", "30": "-12", "31": "-13", "32": "-14", "33": "-15", "34": "-16", "35": "-17", "36": "-18"}}}}, {"name": "players", "sequence": {"class_label": {"names": {"0": "italy", "1": "turkey", "2": "russia", "3": "england", "4": "austria", "5": "germany", "6": "france"}}}}, {"name": "game_id", "dtype": "int64"}], "splits": [{"name": "validation", "num_bytes": 254344, "num_examples": 21}, {"name": "train", "num_bytes": 2539778, "num_examples": 189}, {"name": "test", "num_bytes": 506191, "num_examples": 42}], "download_size": 3208706, "dataset_size": 3300313}}
2024-01-18T11:02:40+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-intent-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-unknown #region-us
# Dataset Card for HateOffensive ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage : URL - Repository : URL - Paper : URL - Leaderboard : - Point of Contact : ### Dataset Summary This dataset contains pairwise conversations annotated by the sender and the receiver for deception (and conversely truthfulness). The 17,289 messages are gathered from 12 games. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances ### Data Fields - speakers: the sender of the message (string format. Seven possible values: russia, turkey, england, austria, germany, france, italy) - receivers: the receiver of the message (string format. Seven possible values: russia, turkey, england, austria, germany, france, italy) - messages: the raw message string (string format. ranges in length from one word to paragraphs in length) - sender_labels: indicates if the sender of the message selected that the message is truthful, true, or deceptive, false. This is used for our ACTUAL_LIE calculation (true/false which can be bool or string format) - receiver_labels: indicates if the receiver of the message selected that the message is perceived as truthful, true, or deceptive, false. In <10% of the cases, no annotation was received. This is used for our SUSPECTED_LIE calculation (string format. true/false/"NOANNOTATION" ) - game_score: the current game score---supply centers---of the sender (string format that ranges can range from 0 to 18) - game_score_delta: the current game score---supply centers---of the sender minus the game score of the recipient (string format that ranges from -18 to 18) - absolute_message_index: the index the message is in the entire game, across all dialogs (int format) - relative_message_index: the index of the message in the current dialog (int format) - seasons: the season in Diplomacy, associated with the year (string format. Spring, Fall, Winter) - years: the year in Diplomacy, associated with the season (string format. 1901 through 1918) - game_id: which of the 12 games the dialog comes from (int format ranging from 1 to 12) ### Data Splits Train, Test and Validation splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Unknown @inproceedings{Peskov:Cheng:Elgohary:Barrow:Danescu-Niculescu-Mizil:Boyd-Graber-2020, Title = {It Takes Two to Lie: One to Lie and One to Listen}, Author = {Denis Peskov and Benny Cheng and Ahmed Elgohary and Joe Barrow and Cristian Danescu-Niculescu-Mizil and Jordan Boyd-Graber}, Booktitle = {Association for Computational Linguistics}, Year = {2020}, Location = {Seattle}, } ### Contributions Thanks to @MisbahKhan789 for adding this dataset.
[ "# Dataset Card for HateOffensive", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage : URL\n- Repository : URL\n- Paper : URL\n- Leaderboard : \n- Point of Contact :", "### Dataset Summary\nThis dataset contains pairwise conversations annotated by the sender and the receiver for deception (and conversely truthfulness). The 17,289 messages are gathered from 12 games.", "### Supported Tasks and Leaderboards", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n- speakers: the sender of the message (string format. Seven possible values: russia, turkey, england, austria, germany, france, italy)\n- receivers: the receiver of the message (string format. Seven possible values: russia, turkey, england, austria, germany, france, italy)\n- messages: the raw message string (string format. ranges in length from one word to paragraphs in length)\n- sender_labels: indicates if the sender of the message selected that the message is truthful, true, or deceptive, false. This is used for our ACTUAL_LIE calculation (true/false which can be bool or string format)\n- receiver_labels: indicates if the receiver of the message selected that the message is perceived as truthful, true, or deceptive, false. In <10% of the cases, no annotation was received. This is used for our SUSPECTED_LIE calculation (string format. true/false/\"NOANNOTATION\" )\n- game_score: the current game score---supply centers---of the sender (string format that ranges can range from 0 to 18)\n- game_score_delta: the current game score---supply centers---of the sender minus the game score of the recipient (string format that ranges from -18 to 18)\n- absolute_message_index: the index the message is in the entire game, across all dialogs (int format)\n- relative_message_index: the index of the message in the current dialog (int format)\n- seasons: the season in Diplomacy, associated with the year (string format. Spring, Fall, Winter)\n- years: the year in Diplomacy, associated with the season (string format. 
1901 through 1918)\n- game_id: which of the 12 games the dialog comes from (int format ranging from 1 to 12)", "### Data Splits\nTrain, Test and Validation splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nUnknown\n\n\n@inproceedings{Peskov:Cheng:Elgohary:Barrow:Danescu-Niculescu-Mizil:Boyd-Graber-2020,\nTitle = {It Takes Two to Lie: One to Lie and One to Listen},\nAuthor = {Denis Peskov and Benny Cheng and Ahmed Elgohary and Joe Barrow and Cristian Danescu-Niculescu-Mizil and Jordan Boyd-Graber},\nBooktitle = {Association for Computational Linguistics},\nYear = {2020},\nLocation = {Seattle},\n}", "### Contributions\n\nThanks to @MisbahKhan789 for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-intent-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-unknown #region-us \n", "# Dataset Card for HateOffensive", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage : URL\n- Repository : URL\n- Paper : URL\n- Leaderboard : \n- Point of Contact :", "### Dataset Summary\nThis dataset contains pairwise conversations annotated by the sender and the receiver for deception (and conversely truthfulness). The 17,289 messages are gathered from 12 games.", "### Supported Tasks and Leaderboards", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n- speakers: the sender of the message (string format. Seven possible values: russia, turkey, england, austria, germany, france, italy)\n- receivers: the receiver of the message (string format. Seven possible values: russia, turkey, england, austria, germany, france, italy)\n- messages: the raw message string (string format. ranges in length from one word to paragraphs in length)\n- sender_labels: indicates if the sender of the message selected that the message is truthful, true, or deceptive, false. This is used for our ACTUAL_LIE calculation (true/false which can be bool or string format)\n- receiver_labels: indicates if the receiver of the message selected that the message is perceived as truthful, true, or deceptive, false. In <10% of the cases, no annotation was received. This is used for our SUSPECTED_LIE calculation (string format. true/false/\"NOANNOTATION\" )\n- game_score: the current game score---supply centers---of the sender (string format that ranges can range from 0 to 18)\n- game_score_delta: the current game score---supply centers---of the sender minus the game score of the recipient (string format that ranges from -18 to 18)\n- absolute_message_index: the index the message is in the entire game, across all dialogs (int format)\n- relative_message_index: the index of the message in the current dialog (int format)\n- seasons: the season in Diplomacy, associated with the year (string format. Spring, Fall, Winter)\n- years: the year in Diplomacy, associated with the season (string format. 
1901 through 1918)\n- game_id: which of the 12 games the dialog comes from (int format ranging from 1 to 12)", "### Data Splits\nTrain, Test and Validation splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nUnknown\n\n\n@inproceedings{Peskov:Cheng:Elgohary:Barrow:Danescu-Niculescu-Mizil:Boyd-Graber-2020,\nTitle = {It Takes Two to Lie: One to Lie and One to Listen},\nAuthor = {Denis Peskov and Benny Cheng and Ahmed Elgohary and Joe Barrow and Cristian Danescu-Niculescu-Mizil and Jordan Boyd-Graber},\nBooktitle = {Association for Computational Linguistics},\nYear = {2020},\nLocation = {Seattle},\n}", "### Contributions\n\nThanks to @MisbahKhan789 for adding this dataset." ]
[ 83, 10, 120, 27, 49, 10, 5, 6, 6, 430, 14, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 138, 19 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-intent-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-unknown #region-us \n# Dataset Card for HateOffensive## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n- Homepage : URL\n- Repository : URL\n- Paper : URL\n- Leaderboard : \n- Point of Contact :### Dataset Summary\nThis dataset contains pairwise conversations annotated by the sender and the receiver for deception (and conversely truthfulness). The 17,289 messages are gathered from 12 games.### Supported Tasks and Leaderboards### Languages\nEnglish## Dataset Structure### Data Instances" ]
e5cf51239ba33985542bddaf35a6549c1397b537
# Dataset Card for Disaster Response Messages

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [HomePage](https://appen.com/datasets/combined-disaster-response-data/)
- **Repository:** [Repo to Download the Dataset](https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Darshan Gandhi](mailto:darshangandhi1151@gmail.com)

### Dataset Summary

This dataset contains 30,000 messages drawn from events including an earthquake in Haiti in 2010, an earthquake in Chile in 2010, floods in Pakistan in 2010, super-storm Sandy in the U.S.A. in 2012, and news articles spanning a large number of years and 100s of different disasters. The data has been encoded with 36 different categories related to disaster response and has been stripped of messages with sensitive information in their entirety. Upon release, this is the featured dataset of a new Udacity course on Data Science and the AI4ALL summer school and is especially useful for text analytics and natural language processing (NLP) tasks and models. The input data in this job contains thousands of untranslated disaster-related messages and their English translations. In the “Data” tab above, you’ll find the annotated data, with 40 class labels for intent and content.

### Supported Tasks and Leaderboards

The input data in this job contains thousands of untranslated disaster-related messages and their English translations. In the dataset, you’ll find the annotated data, with 40 class labels for intent and content. This dataset contains the original message in its original language, the English translation, and dozens of classes for message content. These classes are noted in column titles with a simple binary 1= yes, 0=no.

### Languages

The dataset is a multilingual dataset which has the messages in the original language and also its translated English form.

## Dataset Structure

### Data Instances

The dataset consists of a message in English and also its original language form. In addition, there are 40 labels which help to understand more about the exact essence of the message.

Example of a Disaster Response:

{ 'split': 'train', 'message': 'Weather update - a cold front from Cuba that could pass over Haiti', 'original': 'Un front froid se retrouve sur Cuba ce matin. Il pourrait traverser Haiti demain.
Des averses de pluie isolee sont encore prevues sur notre region ce soi', 'genre': 'direct', 'related': 1, 'PII': 0, 'request': 0, 'offer': 0, 'aid_related': 0, 'medical_help': 0, 'medical_products': 0, 'search_and_rescue': 0, 'security': 0, 'military': 0, 'child_alone': 0, 'water': 0, 'food': 0, 'shelter': 0, 'clothing': 0, 'money': 0, 'missing_people': 0, 'refugees': 0, 'death': 0, 'other_aid': 0, 'infrastructure_related': 0, 'transport': 0, 'buildings': 0, 'electricity': 0, 'tools': 0, 'hospitals': 0, 'shops': 0, 'aid_centers': 0, 'other_infrastructure': 0, 'weather_related': 0, 'floods': 0, 'storm': 0, 'fire': 0, 'earthquake': 0, 'cold': 0, 'other_weather': 0, 'direct_report': 0}

### Data Fields

*split: Train, Test split</br>
*message: English text of actual messages related to disaster </br>
*original: Text of column 3 in native language as originally written</br>
*genre: Type of message, including direct messages, social posting, and news stories or bulletins</br>
*related: Is the message disaster related? 1= yes, 0=no, 2=maybe</br>
*PII: Does the message contain PII? 1= yes, 0=no </br>
*request: Does the message contain a request? 1= yes, 0=no </br>
*offer: Does the message contain an offer? 1= yes, 0=no </br>
*aid_related: Is the message aid related? 1= yes, 0=no </br>
*medical_help: Does the message concern medical help? 1= yes, 0=no </br>
*medical_products: Does the message concern medical products? 1= yes, 0=no </br>
*search_and_rescue: Does the message concern search and rescue? 1= yes, 0=no </br>
*security: Does the message concern security? 1= yes, 0=no </br>
*military: Does the message concern military? 1= yes, 0=no </br>
*child_alone: Does the message mention a child alone? 1= yes, 0=no</br>
*water: Does the message concern water? 1= yes, 0=no</br>
*food: Does the message concern food? 1= yes, 0=no </br>
*shelter: Does the message concern shelter? 1= yes, 0=no </br>
*clothing: Does the message concern clothing? 1= yes, 0=no </br>
*money: Does the message concern money? 1= yes, 0=no </br>
*missing_people: Does the message indicate missing people? 1= yes, 0=no</br>
*refugees: Does the message concern refugees? 1= yes, 0=no</br>
*death: Does the message imply death? 1= yes, 0=no </br>
*other_aid: Is there any other aid needed? 1=yes, 0=no </br>
*infrastructure_related: Does the message concern infrastructure? 1= yes, 0=no </br>
*transport: Does the message concern transport? 1= yes, 0=no </br>
*buildings: Does the message concern buildings? 1= yes, 0=no </br>
*electricity: Does the message concern electricity? 1= yes, 0=no </br>
*tools: Does the message concern tools? 1= yes, 0=no </br>
*hospitals: Does the message concern hospitals? 1= yes, 0=no </br>
*shops: Does the message concern shops? 1= yes, 0=no </br>
*aid_centers: Does the message concern aid centers? 1= yes, 0=no </br>
*other_infrastructure: Does the message concern other infrastructure? 1= yes, 0=no </br>
*weather_related: Does the message concern weather? 1= yes, 0=no</br>
*floods: Does the message indicate there was a flood? 1= yes, 0=no</br>
*storm: Does the message indicate there was a storm? 1= yes, 0=no </br>
*fire: Does the message indicate there was a fire? 1= yes, 0=no</br>
*earthquake: Does the message indicate there was an earthquake? 1= yes, 0=no</br>
*cold: Does the message indicate there was cold weather? 1= yes, 0=no</br>
*other_weather: Does the message indicate there were other weather issues? 1= yes, 0=no</br>
*direct_report: Does the message show a direct report?
1= yes, 0=no

### Data Splits

|train|test|validation|
|:----:|:-----------:|:----:|
|21046|2629|2573|

## Dataset Creation

### Curation Rationale

The dataset was built to understand the sentiments of the citizens, what the emergency was about, and what kind of help they were seeking.

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

The dataset is a great resource for understanding the sentiments of citizens around the globe during a disaster and how they respond. It also helps governments understand their citizens better, which can eventually help them draft better policies.

### Discussion of Biases

Since the messages have been translated into English, they may not convey the exact meaning the individual intended when they posted the message.

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was initially created by [Appen](https://appen.com/)

### Licensing Information

[More Information Needed]

### Citation Information

[Multilingual Disaster Response Messages](https://appen.com/datasets/combined-disaster-response-data/)

### Contributions

Thanks to [@darshan-gandhi](https://github.com/darshan-gandhi) for adding this dataset.
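As a minimal usage sketch (assuming the card above is published on the Hugging Face Hub under the `disaster_response_messages` identifier shown below, and that the `datasets` library is installed), the binary category columns described under Data Fields can be used directly for filtering:

```python
from datasets import load_dataset

# Sketch only: the dataset id and column names are taken from the card above.
ds = load_dataset("disaster_response_messages", split="train")

# Category columns are 0/1 class labels (related also allows 2 = "maybe"),
# so plain integer comparisons are enough to slice the data.
water_requests = ds.filter(lambda ex: ex["water"] == 1 and ex["request"] == 1)

print(f"{len(water_requests)} training messages request water")
print(water_requests[0]["message"])
print(water_requests[0]["original"])
```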
disaster_response_messages
[ "task_categories:text2text-generation", "task_categories:text-classification", "task_ids:intent-classification", "task_ids:sentiment-classification", "task_ids:text-simplification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:es", "language:fr", "language:ht", "language:ur", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en", "es", "fr", "ht", "ur"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "text-classification"], "task_ids": ["intent-classification", "sentiment-classification", "text-simplification"], "pretty_name": "Disaster Response Messages", "dataset_info": {"features": [{"name": "split", "dtype": "string"}, {"name": "message", "dtype": "string"}, {"name": "original", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "related", "dtype": {"class_label": {"names": {"0": "false", "1": "true", "2": "maybe"}}}}, {"name": "PII", "dtype": "int8"}, {"name": "request", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "offer", "dtype": "int8"}, {"name": "aid_related", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "medical_help", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "medical_products", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "search_and_rescue", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "security", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "military", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "child_alone", "dtype": "int8"}, {"name": "water", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "food", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "shelter", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "clothing", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "money", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "missing_people", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "refugees", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "death", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "other_aid", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "infrastructure_related", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "transport", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "buildings", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "electricity", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "tools", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "hospitals", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "shops", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "aid_centers", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "other_infrastructure", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "weather_related", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "floods", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "storm", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "fire", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "earthquake", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, 
{"name": "cold", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "other_weather", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "direct_report", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}], "splits": [{"name": "train", "num_bytes": 10060799, "num_examples": 21046}, {"name": "test", "num_bytes": 1253810, "num_examples": 2629}, {"name": "validation", "num_bytes": 1266874, "num_examples": 2573}], "download_size": 7201807, "dataset_size": 12581483}}
2024-01-18T11:02:41+00:00
[]
[ "en", "es", "fr", "ht", "ur" ]
TAGS #task_categories-text2text-generation #task_categories-text-classification #task_ids-intent-classification #task_ids-sentiment-classification #task_ids-text-simplification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-English #language-Spanish #language-French #language-Haitian #language-Urdu #license-unknown #region-us
Dataset Card for Disaster Response Messages =========================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: HomePage * Repository: Repo to Download the Dataset * Paper: * Leaderboard: * Point of Contact: Darshan Gandhi ### Dataset Summary This dataset contains 30,000 messages drawn from events including an earthquake in Haiti in 2010, an earthquake in Chile in 2010, floods in Pakistan in 2010, super-storm Sandy in the U.S.A. in 2012, and news articles spanning a large number of years and 100s of different disasters. The data has been encoded with 36 different categories related to disaster response and has been stripped of messages with sensitive information in their entirety. Upon release, this is the featured dataset of a new Udacity course on Data Science and the AI4ALL summer school and is especially utile for text analytics and natural language processing (NLP) tasks and models.The input data in this job contains thousands of untranslated disaster-related messages and their English translations. In the “Data” tab above, you’ll find the annotated data, with 40 class labels for intent and content. ### Supported Tasks and Leaderboards The input data in this job contains thousands of untranslated disaster-related messages and their English translations. In the dataset, you’ll find the annotated data, with 40 class labels for intent and content. This dataset contains the original message in its original language, the English translation, and dozens of classes for message content. These classes are noted in column titles with a simple binary 1= yes, 0=no. ### Languages The dataset is a multilingual dataset which has the messages in the original language and also it's translated English form. Dataset Structure ----------------- ### Data Instances The dataset consists of a message in English and also it's original language form. Adding on, there are 40 labels which help to understand more about the exact essence of the message. Example of a Disaster Response : { 'split': 'train', 'message': 'Weather update - a cold front from Cuba that could pass over Haiti', 'original': 'Un front froid se retrouve sur Cuba ce matin. Il pourrait traverser Haiti demain. 
Des averses de pluie isolee sont encore prevues sur notre region ce soi', 'genre': 'direct', 'related': 1, 'PII': 0, 'request': 0, 'offer': 0, 'aid\_related': 0, 'medical\_help': 0, 'medical\_products': 0, 'search\_and\_rescue': 0, 'security': 0, 'military': 0, 'child\_alone': 0, 'water': 0, 'food': 0, 'shelter': 0, 'clothing': 0, 'money': 0, 'missing\_people': 0, 'refugees': 0, 'death': 0, 'other\_aid': 0, 'infrastructure\_related': 0, 'transport': 0, 'buildings': 0, 'electricity': 0, 'tools': 0, 'hospitals': 0, 'shops': 0, 'aid\_centers': 0, 'other\_infrastructure': 0, 'weather\_related': 0, 'floods': 0, 'storm': 0, 'fire': 0, 'earthquake': 0, 'cold': 0, 'other\_weather': 0, 'direct\_report': 0} ### Data Fields \*split: Train, Test split \*message: English text of actual messages related to disaster \*original: Text of column 3 in native language as originally written \*genre: Type of message, including direct messages, social posting, and news stories or bulletins \*related: Is the message disaster related? 1= yes, 0=no, 2=maybe \*PII: Does the message contain PII? 1= yes, 0=no \*request: Does the message contain a request? 1= yes, 0=no \*offer: Does the message contain an offer? 1= yes, 0=no \*aid\_related: Is the message aid related? 1= yes, 0=no \*medical\_help: Does the message concern medical help? 1= yes, 0=no \*medical\_products: Does the message concern medical products? 1= yes, 0=no \*search\_and\_rescue: Does the message concern search and rescue? 1= yes, 0=no \*security: Does the message concern security? 1= yes, 0=no \*military: Does the message concern military? 1= yes, 0=no \*child\_alone: Does the message mention a child alone? 1= yes, 0=no \*water: Does the message concern water? 1= yes, 0=no \*food: Does the message concern food? 1= yes, 0=no \*shelter: Does the message concern shelter? 1= yes, 0=no \*clothing: Does the message concern clothing? 1= yes, 0=no \*money: Does the message concern money? 1= yes, 0=no \*missing\_people: Does the message indicate missing people? 1= yes, 0=no \*refugees: Does the message concern refugess? 1= yes, 0=no \*death: Does the message imply death? 1= yes, 0=no \*other\_aid: Is there any other aid needed? 1=yes, 0=no \*infrastructure\_related: Does the message concern infrastructure? 1= yes, 0=no \*transport: Does the message concern transport? 1= yes, 0=no \*buildings: Does the message concern buildings? 1= yes, 0=no \*electricity: Does the message concern electricity? 1= yes, 0=no \*tools: Does the message concern tools? 1= yes, 0=no \*hospitals: Does the message concern clothing? 1= yes, 0=no \*shops: Does the message concern clothing? 1= yes, 0=no \*aid\_centers:Does the message concern clothing? 1= yes, 0=no \*other\_infrastructure:Does the message concern clothing? 1= yes, 0=no \*weather\_related: Does the message concern weather? 1= yes, 0=no \*floods: Does the message indicate there was a flood? 1= yes, 0=no \*storm: Does the message indicate there was a storm? 1= yes, 0=no \*fire: Does the message indicate there was a fire? 1= yes, 0=no \*earthquake: Does the message indicate there was an earthquake? 1= yes, 0=no \*cold: Does the message indicate there was a cold? 1= yes, 0=no \*other\_weather: Does the message indicate there was other weather issues? 1= yes, 0=no \*direct\_report: Does the show a direct report? 
1= yes, 0=no ### Data Splits Dataset Creation ---------------- ### Curation Rationale The dataset was built to understand about the sentiments of the citizens and also more about want was the emergency about and what kind of help they were seeking ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset The dataset has a great usecase of understand more about the sentiments of the citizens around the globe during a disaster and how their responses are. Also, it helps the government to understand their citizens better and would eventually help to draft better policies accordingly. ### Discussion of Biases The messages since have been translated in English may not be able to judically imply the exact significance of the individual when they would have posted the message ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators The dataset was initially created by Appen ### Licensing Information Multilingual Disaster Response Messages ### Contributions Thanks to @darshan-gandhi for adding this dataset.
[ "### Dataset Summary\n\n\nThis dataset contains 30,000 messages drawn from events including an earthquake in Haiti in 2010, an earthquake in Chile in 2010, floods in Pakistan in 2010, super-storm Sandy in the U.S.A. in 2012, and news articles spanning a large number of years and 100s of different disasters. The data has been encoded with 36 different categories related to disaster response and has been stripped of messages with sensitive information in their entirety. Upon release, this is the featured dataset of a new Udacity course on Data Science and the AI4ALL summer school and is especially utile for text analytics and natural language processing (NLP) tasks and models.The input data in this job contains thousands of untranslated disaster-related messages and their English translations. In the “Data” tab above, you’ll find the annotated data, with 40 class labels for intent and content.", "### Supported Tasks and Leaderboards\n\n\nThe input data in this job contains thousands of untranslated disaster-related messages and their English translations. In the dataset, you’ll find the annotated data, with 40 class labels for intent and content. This dataset contains the original message in its original language, the English translation, and dozens of classes for message content. These classes are noted in column titles with a simple binary 1= yes, 0=no.", "### Languages\n\n\nThe dataset is a multilingual dataset which has the messages in the original language and also it's translated English form.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe dataset consists of a message in English and also it's original language form. Adding on, there are 40 labels which help to understand more about the exact essence of the message.\n\n\nExample of a Disaster Response : { 'split': 'train', 'message': 'Weather update - a cold front from Cuba that could pass over Haiti', 'original': 'Un front froid se retrouve sur Cuba ce matin. Il pourrait traverser Haiti demain. Des averses de pluie isolee sont encore prevues sur notre region ce soi', 'genre': 'direct', 'related': 1, 'PII': 0, 'request': 0, 'offer': 0, 'aid\\_related': 0, 'medical\\_help': 0, 'medical\\_products': 0, 'search\\_and\\_rescue': 0, 'security': 0, 'military': 0, 'child\\_alone': 0, 'water': 0, 'food': 0, 'shelter': 0, 'clothing': 0, 'money': 0, 'missing\\_people': 0, 'refugees': 0, 'death': 0, 'other\\_aid': 0, 'infrastructure\\_related': 0, 'transport': 0, 'buildings': 0, 'electricity': 0, 'tools': 0, 'hospitals': 0, 'shops': 0, 'aid\\_centers': 0, 'other\\_infrastructure': 0, 'weather\\_related': 0, 'floods': 0, 'storm': 0, 'fire': 0, 'earthquake': 0, 'cold': 0, 'other\\_weather': 0, 'direct\\_report': 0}", "### Data Fields\n\n\n\\*split: Train, Test split\n\\*message: English text of actual messages related to disaster \n\\*original: Text of column 3 in native language as originally written\n\\*genre: Type of message, including direct messages, social posting, and news stories or bulletins\n\\*related: Is the message disaster related? 1= yes, 0=no, 2=maybe\n\\*PII: Does the message contain PII? 1= yes, 0=no \n\\*request: Does the message contain a request? 1= yes, 0=no \n\\*offer: Does the message contain an offer? 1= yes, 0=no \n\\*aid\\_related: Is the message aid related? 1= yes, 0=no \n\\*medical\\_help: Does the message concern medical help? 1= yes, 0=no \n\\*medical\\_products: Does the message concern medical products? 1= yes, 0=no \n\\*search\\_and\\_rescue: Does the message concern search and rescue? 
1= yes, 0=no \n\\*security: Does the message concern security? 1= yes, 0=no \n\\*military: Does the message concern military? 1= yes, 0=no \n\\*child\\_alone: Does the message mention a child alone? 1= yes, 0=no\n\\*water: Does the message concern water? 1= yes, 0=no\n\\*food: Does the message concern food? 1= yes, 0=no \n\\*shelter: Does the message concern shelter? 1= yes, 0=no \n\\*clothing: Does the message concern clothing? 1= yes, 0=no \n\\*money: Does the message concern money? 1= yes, 0=no \n\\*missing\\_people: Does the message indicate missing people? 1= yes, 0=no\n\\*refugees: Does the message concern refugess? 1= yes, 0=no\n\\*death: Does the message imply death? 1= yes, 0=no \n\\*other\\_aid: Is there any other aid needed? 1=yes, 0=no \n\\*infrastructure\\_related: Does the message concern infrastructure? 1= yes, 0=no \n\\*transport: Does the message concern transport? 1= yes, 0=no \n\\*buildings: Does the message concern buildings? 1= yes, 0=no \n\\*electricity: Does the message concern electricity? 1= yes, 0=no \n\\*tools: Does the message concern tools? 1= yes, 0=no \n\\*hospitals: Does the message concern clothing? 1= yes, 0=no \n\\*shops: Does the message concern clothing? 1= yes, 0=no \n\\*aid\\_centers:Does the message concern clothing? 1= yes, 0=no \n\\*other\\_infrastructure:Does the message concern clothing? 1= yes, 0=no \n\\*weather\\_related: Does the message concern weather? 1= yes, 0=no\n\\*floods: Does the message indicate there was a flood? 1= yes, 0=no\n\\*storm: Does the message indicate there was a storm? 1= yes, 0=no \n\\*fire: Does the message indicate there was a fire? 1= yes, 0=no\n\\*earthquake: Does the message indicate there was an earthquake? 1= yes, 0=no\n\\*cold: Does the message indicate there was a cold? 1= yes, 0=no\n\\*other\\_weather: Does the message indicate there was other weather issues? 1= yes, 0=no\n\\*direct\\_report: Does the show a direct report? 1= yes, 0=no", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was built to understand about the sentiments of the citizens and also more about want was the emergency about and what kind of help they were seeking", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe dataset has a great usecase of understand more about the sentiments of the citizens around the globe during a disaster and how their responses are. Also, it helps the government to understand their citizens better and would eventually help to draft better policies accordingly.", "### Discussion of Biases\n\n\nThe messages since have been translated in English may not be able to judically imply the exact significance of the individual when they would have posted the message", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Appen", "### Licensing Information\n\n\nMultilingual Disaster Response Messages", "### Contributions\n\n\nThanks to @darshan-gandhi for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #task_categories-text-classification #task_ids-intent-classification #task_ids-sentiment-classification #task_ids-text-simplification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-English #language-Spanish #language-French #language-Haitian #language-Urdu #license-unknown #region-us \n", "### Dataset Summary\n\n\nThis dataset contains 30,000 messages drawn from events including an earthquake in Haiti in 2010, an earthquake in Chile in 2010, floods in Pakistan in 2010, super-storm Sandy in the U.S.A. in 2012, and news articles spanning a large number of years and 100s of different disasters. The data has been encoded with 36 different categories related to disaster response and has been stripped of messages with sensitive information in their entirety. Upon release, this is the featured dataset of a new Udacity course on Data Science and the AI4ALL summer school and is especially utile for text analytics and natural language processing (NLP) tasks and models.The input data in this job contains thousands of untranslated disaster-related messages and their English translations. In the “Data” tab above, you’ll find the annotated data, with 40 class labels for intent and content.", "### Supported Tasks and Leaderboards\n\n\nThe input data in this job contains thousands of untranslated disaster-related messages and their English translations. In the dataset, you’ll find the annotated data, with 40 class labels for intent and content. This dataset contains the original message in its original language, the English translation, and dozens of classes for message content. These classes are noted in column titles with a simple binary 1= yes, 0=no.", "### Languages\n\n\nThe dataset is a multilingual dataset which has the messages in the original language and also it's translated English form.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe dataset consists of a message in English and also it's original language form. Adding on, there are 40 labels which help to understand more about the exact essence of the message.\n\n\nExample of a Disaster Response : { 'split': 'train', 'message': 'Weather update - a cold front from Cuba that could pass over Haiti', 'original': 'Un front froid se retrouve sur Cuba ce matin. Il pourrait traverser Haiti demain. Des averses de pluie isolee sont encore prevues sur notre region ce soi', 'genre': 'direct', 'related': 1, 'PII': 0, 'request': 0, 'offer': 0, 'aid\\_related': 0, 'medical\\_help': 0, 'medical\\_products': 0, 'search\\_and\\_rescue': 0, 'security': 0, 'military': 0, 'child\\_alone': 0, 'water': 0, 'food': 0, 'shelter': 0, 'clothing': 0, 'money': 0, 'missing\\_people': 0, 'refugees': 0, 'death': 0, 'other\\_aid': 0, 'infrastructure\\_related': 0, 'transport': 0, 'buildings': 0, 'electricity': 0, 'tools': 0, 'hospitals': 0, 'shops': 0, 'aid\\_centers': 0, 'other\\_infrastructure': 0, 'weather\\_related': 0, 'floods': 0, 'storm': 0, 'fire': 0, 'earthquake': 0, 'cold': 0, 'other\\_weather': 0, 'direct\\_report': 0}", "### Data Fields\n\n\n\\*split: Train, Test split\n\\*message: English text of actual messages related to disaster \n\\*original: Text of column 3 in native language as originally written\n\\*genre: Type of message, including direct messages, social posting, and news stories or bulletins\n\\*related: Is the message disaster related? 
1= yes, 0=no, 2=maybe\n\\*PII: Does the message contain PII? 1= yes, 0=no \n\\*request: Does the message contain a request? 1= yes, 0=no \n\\*offer: Does the message contain an offer? 1= yes, 0=no \n\\*aid\\_related: Is the message aid related? 1= yes, 0=no \n\\*medical\\_help: Does the message concern medical help? 1= yes, 0=no \n\\*medical\\_products: Does the message concern medical products? 1= yes, 0=no \n\\*search\\_and\\_rescue: Does the message concern search and rescue? 1= yes, 0=no \n\\*security: Does the message concern security? 1= yes, 0=no \n\\*military: Does the message concern military? 1= yes, 0=no \n\\*child\\_alone: Does the message mention a child alone? 1= yes, 0=no\n\\*water: Does the message concern water? 1= yes, 0=no\n\\*food: Does the message concern food? 1= yes, 0=no \n\\*shelter: Does the message concern shelter? 1= yes, 0=no \n\\*clothing: Does the message concern clothing? 1= yes, 0=no \n\\*money: Does the message concern money? 1= yes, 0=no \n\\*missing\\_people: Does the message indicate missing people? 1= yes, 0=no\n\\*refugees: Does the message concern refugess? 1= yes, 0=no\n\\*death: Does the message imply death? 1= yes, 0=no \n\\*other\\_aid: Is there any other aid needed? 1=yes, 0=no \n\\*infrastructure\\_related: Does the message concern infrastructure? 1= yes, 0=no \n\\*transport: Does the message concern transport? 1= yes, 0=no \n\\*buildings: Does the message concern buildings? 1= yes, 0=no \n\\*electricity: Does the message concern electricity? 1= yes, 0=no \n\\*tools: Does the message concern tools? 1= yes, 0=no \n\\*hospitals: Does the message concern clothing? 1= yes, 0=no \n\\*shops: Does the message concern clothing? 1= yes, 0=no \n\\*aid\\_centers:Does the message concern clothing? 1= yes, 0=no \n\\*other\\_infrastructure:Does the message concern clothing? 1= yes, 0=no \n\\*weather\\_related: Does the message concern weather? 1= yes, 0=no\n\\*floods: Does the message indicate there was a flood? 1= yes, 0=no\n\\*storm: Does the message indicate there was a storm? 1= yes, 0=no \n\\*fire: Does the message indicate there was a fire? 1= yes, 0=no\n\\*earthquake: Does the message indicate there was an earthquake? 1= yes, 0=no\n\\*cold: Does the message indicate there was a cold? 1= yes, 0=no\n\\*other\\_weather: Does the message indicate there was other weather issues? 1= yes, 0=no\n\\*direct\\_report: Does the show a direct report? 1= yes, 0=no", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was built to understand about the sentiments of the citizens and also more about want was the emergency about and what kind of help they were seeking", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe dataset has a great usecase of understand more about the sentiments of the citizens around the globe during a disaster and how their responses are. 
Also, it helps governments understand their citizens better, which would eventually help them draft better policies.", "### Discussion of Biases\n\n\nSince the messages have been translated into English, they may not fully convey the exact meaning intended by the individuals who originally posted them", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Appen", "### Licensing Information\n\n\nMultilingual Disaster Response Messages", "### Contributions\n\n\nThanks to @darshan-gandhi for adding this dataset." ]
[ 148, 210, 110, 40, 443, 868, 11, 39, 4, 10, 10, 5, 5, 9, 18, 64, 41, 14, 16, 14, 19 ]
[ "passage: TAGS\n#task_categories-text2text-generation #task_categories-text-classification #task_ids-intent-classification #task_ids-sentiment-classification #task_ids-text-simplification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-English #language-Spanish #language-French #language-Haitian #language-Urdu #license-unknown #region-us \n### Dataset Summary\n\n\nThis dataset contains 30,000 messages drawn from events including an earthquake in Haiti in 2010, an earthquake in Chile in 2010, floods in Pakistan in 2010, super-storm Sandy in the U.S.A. in 2012, and news articles spanning a large number of years and 100s of different disasters. The data has been encoded with 36 different categories related to disaster response and has been stripped of messages with sensitive information in their entirety. Upon release, this is the featured dataset of a new Udacity course on Data Science and the AI4ALL summer school and is especially utile for text analytics and natural language processing (NLP) tasks and models.The input data in this job contains thousands of untranslated disaster-related messages and their English translations. In the “Data” tab above, you’ll find the annotated data, with 40 class labels for intent and content.### Supported Tasks and Leaderboards\n\n\nThe input data in this job contains thousands of untranslated disaster-related messages and their English translations. In the dataset, you’ll find the annotated data, with 40 class labels for intent and content. This dataset contains the original message in its original language, the English translation, and dozens of classes for message content. These classes are noted in column titles with a simple binary 1= yes, 0=no.", "passage: ### Languages\n\n\nThe dataset is a multilingual dataset which has the messages in the original language and also it's translated English form.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThe dataset consists of a message in English and also it's original language form. Adding on, there are 40 labels which help to understand more about the exact essence of the message.\n\n\nExample of a Disaster Response : { 'split': 'train', 'message': 'Weather update - a cold front from Cuba that could pass over Haiti', 'original': 'Un front froid se retrouve sur Cuba ce matin. Il pourrait traverser Haiti demain. Des averses de pluie isolee sont encore prevues sur notre region ce soi', 'genre': 'direct', 'related': 1, 'PII': 0, 'request': 0, 'offer': 0, 'aid\\_related': 0, 'medical\\_help': 0, 'medical\\_products': 0, 'search\\_and\\_rescue': 0, 'security': 0, 'military': 0, 'child\\_alone': 0, 'water': 0, 'food': 0, 'shelter': 0, 'clothing': 0, 'money': 0, 'missing\\_people': 0, 'refugees': 0, 'death': 0, 'other\\_aid': 0, 'infrastructure\\_related': 0, 'transport': 0, 'buildings': 0, 'electricity': 0, 'tools': 0, 'hospitals': 0, 'shops': 0, 'aid\\_centers': 0, 'other\\_infrastructure': 0, 'weather\\_related': 0, 'floods': 0, 'storm': 0, 'fire': 0, 'earthquake': 0, 'cold': 0, 'other\\_weather': 0, 'direct\\_report': 0}" ]
62abc43632002177ac8460f36e7206621d51d2eb
# Dataset Card for "discofuse" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/google-research-datasets/discofuse - **Paper:** [DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion](https://arxiv.org/abs/1902.10526) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 6.04 GB - **Size of the generated dataset:** 21.55 GB - **Total amount of disk used:** 27.59 GB ### Dataset Summary DiscoFuse is a large scale dataset for discourse-based sentence fusion. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### discofuse-sport - **Size of downloaded dataset files:** 4.33 GB - **Size of the generated dataset:** 15.04 GB - **Total amount of disk used:** 19.36 GB An example of 'train' looks as follows. ``` { "coherent_first_sentence": "Four LPr and three LC2000r HP Netservers handle customer management and web server functions .", "coherent_second_sentence": "Finally , an HP Netserver LT6000r hosts i2 Demand Planner and i2 Collaboration Planner .", "connective_string": "finally ,", "discourse_type": "PAIR_CONN", "has_coref_type_nominal": 0.0, "has_coref_type_pronoun": 0.0, "incoherent_first_sentence": "Four LPr and three LC2000r HP Netservers handle customer management and web server functions .", "incoherent_second_sentence": "An HP Netserver LT6000r hosts i2 Demand Planner and i2 Collaboration Planner ." } ``` #### discofuse-wikipedia - **Size of downloaded dataset files:** 1.72 GB - **Size of the generated dataset:** 6.51 GB - **Total amount of disk used:** 8.23 GB An example of 'validation' looks as follows. 
``` { "coherent_first_sentence": "Four LPr and three LC2000r HP Netservers handle customer management and web server functions .", "coherent_second_sentence": "Finally , an HP Netserver LT6000r hosts i2 Demand Planner and i2 Collaboration Planner .", "connective_string": "finally ,", "discourse_type": "PAIR_CONN", "has_coref_type_nominal": 0.0, "has_coref_type_pronoun": 0.0, "incoherent_first_sentence": "Four LPr and three LC2000r HP Netservers handle customer management and web server functions .", "incoherent_second_sentence": "An HP Netserver LT6000r hosts i2 Demand Planner and i2 Collaboration Planner ." } ``` ### Data Fields The data fields are the same among all splits. #### discofuse-sport - `connective_string`: a `string` feature. - `discourse_type`: a `string` feature. - `coherent_second_sentence`: a `string` feature. - `has_coref_type_pronoun`: a `float32` feature. - `incoherent_first_sentence`: a `string` feature. - `incoherent_second_sentence`: a `string` feature. - `has_coref_type_nominal`: a `float32` feature. - `coherent_first_sentence`: a `string` feature. #### discofuse-wikipedia - `connective_string`: a `string` feature. - `discourse_type`: a `string` feature. - `coherent_second_sentence`: a `string` feature. - `has_coref_type_pronoun`: a `float32` feature. - `incoherent_first_sentence`: a `string` feature. - `incoherent_second_sentence`: a `string` feature. - `has_coref_type_nominal`: a `float32` feature. - `coherent_first_sentence`: a `string` feature. ### Data Splits | name | train |validation| test | |-------------------|-------:|---------:|-----:| |discofuse-sport |43291020| 440902|445521| |discofuse-wikipedia|16310585| 168081|163657| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The data is licensed under [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license. ### Citation Information ``` @InProceedings{GevaEtAl2019, title = {DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion}, author = {Geva, Mor and Malmi, Eric and Szpektor, Idan and Berant, Jonathan}, booktitle = {Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics}, note = {arXiv preprint arXiv:1902.10526}, year = {2019} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
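The card above describes the two configurations and their shared field layout; as an illustrative sketch (not part of the official card), the snippet below shows how they might be loaded with the Hugging Face `datasets` library, assuming the dataset id `discofuse` and the config names `discofuse-sport` / `discofuse-wikipedia` listed in this record's metadata.

```python
# Hedged sketch: load DiscoFuse and inspect one sentence-fusion example.
from datasets import load_dataset

sport = load_dataset("discofuse", "discofuse-sport", split="validation")
example = sport[0]
print(example["incoherent_first_sentence"])   # first input sentence
print(example["incoherent_second_sentence"])  # second input sentence
print(example["coherent_first_sentence"])     # coherent (fused) counterpart, part 1
print(example["coherent_second_sentence"])    # coherent counterpart, part 2
print(example["discourse_type"], example["connective_string"])
```

The same call with `"discofuse-wikipedia"` selects the Wikipedia portion; both configurations expose identical columns, so downstream code can treat them interchangeably.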
discofuse
[ "task_categories:text2text-generation", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "sentence-fusion", "arxiv:1902.10526", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "discofuse", "pretty_name": "DiscoFuse", "tags": ["sentence-fusion"], "dataset_info": [{"config_name": "discofuse-sport", "features": [{"name": "connective_string", "dtype": "string"}, {"name": "discourse_type", "dtype": "string"}, {"name": "coherent_second_sentence", "dtype": "string"}, {"name": "has_coref_type_pronoun", "dtype": "float32"}, {"name": "incoherent_first_sentence", "dtype": "string"}, {"name": "incoherent_second_sentence", "dtype": "string"}, {"name": "has_coref_type_nominal", "dtype": "float32"}, {"name": "coherent_first_sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14736176073, "num_examples": 43291020}, {"name": "test", "num_bytes": 151655243, "num_examples": 445521}, {"name": "validation", "num_bytes": 150206657, "num_examples": 440902}], "download_size": 9422142544, "dataset_size": 15038037973}, {"config_name": "discofuse-wikipedia", "features": [{"name": "connective_string", "dtype": "string"}, {"name": "discourse_type", "dtype": "string"}, {"name": "coherent_second_sentence", "dtype": "string"}, {"name": "has_coref_type_pronoun", "dtype": "float32"}, {"name": "incoherent_first_sentence", "dtype": "string"}, {"name": "incoherent_second_sentence", "dtype": "string"}, {"name": "has_coref_type_nominal", "dtype": "float32"}, {"name": "coherent_first_sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6377885028, "num_examples": 16310585}, {"name": "test", "num_bytes": 64007750, "num_examples": 163657}, {"name": "validation", "num_bytes": 65681627, "num_examples": 168081}], "download_size": 3929336540, "dataset_size": 6507574405}], "configs": [{"config_name": "discofuse-sport", "data_files": [{"split": "train", "path": "discofuse-sport/train-*"}, {"split": "test", "path": "discofuse-sport/test-*"}, {"split": "validation", "path": "discofuse-sport/validation-*"}]}, {"config_name": "discofuse-wikipedia", "data_files": [{"split": "train", "path": "discofuse-wikipedia/train-*"}, {"split": "test", "path": "discofuse-wikipedia/test-*"}, {"split": "validation", "path": "discofuse-wikipedia/validation-*"}]}]}
2024-01-06T09:17:22+00:00
[ "1902.10526" ]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-cc-by-sa-3.0 #sentence-fusion #arxiv-1902.10526 #region-us
Dataset Card for "discofuse" ============================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Repository: URL * Paper: DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion * Point of Contact: * Size of downloaded dataset files: 6.04 GB * Size of the generated dataset: 21.55 GB * Total amount of disk used: 27.59 GB ### Dataset Summary DiscoFuse is a large scale dataset for discourse-based sentence fusion. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### discofuse-sport * Size of downloaded dataset files: 4.33 GB * Size of the generated dataset: 15.04 GB * Total amount of disk used: 19.36 GB An example of 'train' looks as follows. #### discofuse-wikipedia * Size of downloaded dataset files: 1.72 GB * Size of the generated dataset: 6.51 GB * Total amount of disk used: 8.23 GB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### discofuse-sport * 'connective\_string': a 'string' feature. * 'discourse\_type': a 'string' feature. * 'coherent\_second\_sentence': a 'string' feature. * 'has\_coref\_type\_pronoun': a 'float32' feature. * 'incoherent\_first\_sentence': a 'string' feature. * 'incoherent\_second\_sentence': a 'string' feature. * 'has\_coref\_type\_nominal': a 'float32' feature. * 'coherent\_first\_sentence': a 'string' feature. #### discofuse-wikipedia * 'connective\_string': a 'string' feature. * 'discourse\_type': a 'string' feature. * 'coherent\_second\_sentence': a 'string' feature. * 'has\_coref\_type\_pronoun': a 'float32' feature. * 'incoherent\_first\_sentence': a 'string' feature. * 'incoherent\_second\_sentence': a 'string' feature. * 'has\_coref\_type\_nominal': a 'float32' feature. * 'coherent\_first\_sentence': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The data is licensed under Creative Commons Attribution-ShareAlike 3.0 license. ### Contributions Thanks to @thomwolf, @patrickvonplaten, @mariamabarham, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nDiscoFuse is a large scale dataset for discourse-based sentence fusion.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### discofuse-sport\n\n\n* Size of downloaded dataset files: 4.33 GB\n* Size of the generated dataset: 15.04 GB\n* Total amount of disk used: 19.36 GB\n\n\nAn example of 'train' looks as follows.", "#### discofuse-wikipedia\n\n\n* Size of downloaded dataset files: 1.72 GB\n* Size of the generated dataset: 6.51 GB\n* Total amount of disk used: 8.23 GB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### discofuse-sport\n\n\n* 'connective\\_string': a 'string' feature.\n* 'discourse\\_type': a 'string' feature.\n* 'coherent\\_second\\_sentence': a 'string' feature.\n* 'has\\_coref\\_type\\_pronoun': a 'float32' feature.\n* 'incoherent\\_first\\_sentence': a 'string' feature.\n* 'incoherent\\_second\\_sentence': a 'string' feature.\n* 'has\\_coref\\_type\\_nominal': a 'float32' feature.\n* 'coherent\\_first\\_sentence': a 'string' feature.", "#### discofuse-wikipedia\n\n\n* 'connective\\_string': a 'string' feature.\n* 'discourse\\_type': a 'string' feature.\n* 'coherent\\_second\\_sentence': a 'string' feature.\n* 'has\\_coref\\_type\\_pronoun': a 'float32' feature.\n* 'incoherent\\_first\\_sentence': a 'string' feature.\n* 'incoherent\\_second\\_sentence': a 'string' feature.\n* 'has\\_coref\\_type\\_nominal': a 'float32' feature.\n* 'coherent\\_first\\_sentence': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe data is licensed under Creative Commons Attribution-ShareAlike 3.0 license.", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @mariamabarham, @lewtun for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-cc-by-sa-3.0 #sentence-fusion #arxiv-1902.10526 #region-us \n", "### Dataset Summary\n\n\nDiscoFuse is a large scale dataset for discourse-based sentence fusion.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### discofuse-sport\n\n\n* Size of downloaded dataset files: 4.33 GB\n* Size of the generated dataset: 15.04 GB\n* Total amount of disk used: 19.36 GB\n\n\nAn example of 'train' looks as follows.", "#### discofuse-wikipedia\n\n\n* Size of downloaded dataset files: 1.72 GB\n* Size of the generated dataset: 6.51 GB\n* Total amount of disk used: 8.23 GB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### discofuse-sport\n\n\n* 'connective\\_string': a 'string' feature.\n* 'discourse\\_type': a 'string' feature.\n* 'coherent\\_second\\_sentence': a 'string' feature.\n* 'has\\_coref\\_type\\_pronoun': a 'float32' feature.\n* 'incoherent\\_first\\_sentence': a 'string' feature.\n* 'incoherent\\_second\\_sentence': a 'string' feature.\n* 'has\\_coref\\_type\\_nominal': a 'float32' feature.\n* 'coherent\\_first\\_sentence': a 'string' feature.", "#### discofuse-wikipedia\n\n\n* 'connective\\_string': a 'string' feature.\n* 'discourse\\_type': a 'string' feature.\n* 'coherent\\_second\\_sentence': a 'string' feature.\n* 'has\\_coref\\_type\\_pronoun': a 'float32' feature.\n* 'incoherent\\_first\\_sentence': a 'string' feature.\n* 'incoherent\\_second\\_sentence': a 'string' feature.\n* 'has\\_coref\\_type\\_nominal': a 'float32' feature.\n* 'coherent\\_first\\_sentence': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe data is licensed under Creative Commons Attribution-ShareAlike 3.0 license.", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @mariamabarham, @lewtun for adding this dataset." ]
[ 96, 24, 10, 11, 6, 53, 54, 17, 171, 171, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 20, 34 ]
[ "passage: TAGS\n#task_categories-text2text-generation #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-cc-by-sa-3.0 #sentence-fusion #arxiv-1902.10526 #region-us \n### Dataset Summary\n\n\nDiscoFuse is a large scale dataset for discourse-based sentence fusion.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### discofuse-sport\n\n\n* Size of downloaded dataset files: 4.33 GB\n* Size of the generated dataset: 15.04 GB\n* Total amount of disk used: 19.36 GB\n\n\nAn example of 'train' looks as follows.#### discofuse-wikipedia\n\n\n* Size of downloaded dataset files: 1.72 GB\n* Size of the generated dataset: 6.51 GB\n* Total amount of disk used: 8.23 GB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### discofuse-sport\n\n\n* 'connective\\_string': a 'string' feature.\n* 'discourse\\_type': a 'string' feature.\n* 'coherent\\_second\\_sentence': a 'string' feature.\n* 'has\\_coref\\_type\\_pronoun': a 'float32' feature.\n* 'incoherent\\_first\\_sentence': a 'string' feature.\n* 'incoherent\\_second\\_sentence': a 'string' feature.\n* 'has\\_coref\\_type\\_nominal': a 'float32' feature.\n* 'coherent\\_first\\_sentence': a 'string' feature." ]
cbc53ef87663ab99defcc0de21e000556c78ab22
# Dataset Card for Discovery ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/sileod/Discovery - **Repository:** https://github.com/sileod/Discovery - **Paper:** https://www.aclweb.org/anthology/N19-1351/ - **Leaderboard:** - **Point of Contact:** damien.sileo at inria.fr ### Dataset Summary Discourse marker prediction with 174 markers ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure input : sentence1, sentence2, label: marker originally between sentence1 and sentence2 ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits Train/Val/Test ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data Aranea english web corpus #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations Self supervised (see paper) #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{sileo-etal-2019-mining, title = "Mining Discourse Markers for Unsupervised Sentence Representation Learning", author = "Sileo, Damien and Van De Cruys, Tim and Pradel, Camille and Muller, Philippe", booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", month = jun, year = "2019", address = "Minneapolis, Minnesota", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/N19-1351", pages = "3477--3486", abstract = "Current state of the art systems in NLP heavily rely on manually annotated datasets, which are expensive to construct. Very little work adequately exploits unannotated data {--} such as discourse markers between sentences {--} mainly because of data sparseness and ineffective extraction methods. 
In the present work, we propose a method to automatically discover sentence pairs with relevant discourse markers, and apply it to massive amounts of data. Our resulting dataset contains 174 discourse markers with at least 10k examples each, even for rare markers such as {``}coincidentally{''} or {``}amazingly{''}. We use the resulting data as supervision for learning transferable sentence embeddings. In addition, we show that even though sentence representation learning through prediction of discourse marker yields state of the art results across different transfer tasks, it{'}s not clear that our models made use of the semantic relation between sentences, thus leaving room for further improvements.", } ``` ### Contributions Thanks to [@sileod](https://github.com/sileod) for adding this dataset.
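Since the card leaves the usage details sparse, here is a minimal, hedged sketch (not from the original card) of how the data might be loaded with the Hugging Face `datasets` library, assuming the dataset id `discovery` and the config names `discovery` / `discoverysmall` given in this record's metadata.

```python
# Hedged sketch: load the small Discovery config and decode a marker label.
from datasets import load_dataset

ds = load_dataset("discovery", "discoverysmall", split="train")
row = ds[0]
print(row["sentence1"])
print(row["sentence2"])
# "label" is an integer index over the 174 discourse markers; ClassLabel maps it back.
print(ds.features["label"].int2str(row["label"]))
```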
discovery
[ "task_categories:text-classification", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:apache-2.0", "discourse-marker-prediction", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "paperswithcode_id": "discovery", "pretty_name": "Discovery", "config_names": ["discovery", "discoverysmall"], "tags": ["discourse-marker-prediction"], "dataset_info": [{"config_name": "discovery", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "[no-conn]", "1": "absolutely,", "2": "accordingly", "3": "actually,", "4": "additionally", "5": "admittedly,", "6": "afterward", "7": "again,", "8": "already,", "9": "also,", "10": "alternately,", "11": "alternatively", "12": "although,", "13": "altogether,", "14": "amazingly,", "15": "and", "16": "anyway,", "17": "apparently,", "18": "arguably,", "19": "as_a_result,", "20": "basically,", "21": "because_of_that", "22": "because_of_this", "23": "besides,", "24": "but", "25": "by_comparison,", "26": "by_contrast,", "27": "by_doing_this,", "28": "by_then", "29": "certainly,", "30": "clearly,", "31": "coincidentally,", "32": "collectively,", "33": "consequently", "34": "conversely", "35": "curiously,", "36": "currently,", "37": "elsewhere,", "38": "especially,", "39": "essentially,", "40": "eventually,", "41": "evidently,", "42": "finally,", "43": "first,", "44": "firstly,", "45": "for_example", "46": "for_instance", "47": "fortunately,", "48": "frankly,", "49": "frequently,", "50": "further,", "51": "furthermore", "52": "generally,", "53": "gradually,", "54": "happily,", "55": "hence,", "56": "here,", "57": "historically,", "58": "honestly,", "59": "hopefully,", "60": "however", "61": "ideally,", "62": "immediately,", "63": "importantly,", "64": "in_contrast,", "65": "in_fact,", "66": "in_other_words", "67": "in_particular,", "68": "in_short,", "69": "in_sum,", "70": "in_the_end,", "71": "in_the_meantime,", "72": "in_turn,", "73": "incidentally,", "74": "increasingly,", "75": "indeed,", "76": "inevitably,", "77": "initially,", "78": "instead,", "79": "interestingly,", "80": "ironically,", "81": "lastly,", "82": "lately,", "83": "later,", "84": "likewise,", "85": "locally,", "86": "luckily,", "87": "maybe,", "88": "meaning,", "89": "meantime,", "90": "meanwhile,", "91": "moreover", "92": "mostly,", "93": "namely,", "94": "nationally,", "95": "naturally,", "96": "nevertheless", "97": "next,", "98": "nonetheless", "99": "normally,", "100": "notably,", "101": "now,", "102": "obviously,", "103": "occasionally,", "104": "oddly,", "105": "often,", "106": "on_the_contrary,", "107": "on_the_other_hand", "108": "once,", "109": "only,", "110": "optionally,", "111": "or,", "112": "originally,", "113": "otherwise,", "114": "overall,", "115": "particularly,", "116": "perhaps,", "117": "personally,", "118": "plus,", "119": "preferably,", "120": "presently,", "121": "presumably,", "122": "previously,", "123": "probably,", "124": "rather,", "125": "realistically,", "126": "really,", "127": "recently,", "128": "regardless,", "129": "remarkably,", "130": "sadly,", "131": "second,", "132": "secondly,", "133": "separately,", "134": "seriously,", "135": "significantly,", "136": "similarly,", "137": "simultaneously", "138": "slowly,", "139": "so,", "140": "sometimes,", "141": "soon,", "142": "specifically,", "143": "still,", "144": "strangely,", "145": "subsequently,", "146": 
"suddenly,", "147": "supposedly,", "148": "surely,", "149": "surprisingly,", "150": "technically,", "151": "thankfully,", "152": "then,", "153": "theoretically,", "154": "thereafter,", "155": "thereby,", "156": "therefore", "157": "third,", "158": "thirdly,", "159": "this,", "160": "though,", "161": "thus,", "162": "together,", "163": "traditionally,", "164": "truly,", "165": "truthfully,", "166": "typically,", "167": "ultimately,", "168": "undoubtedly,", "169": "unfortunately,", "170": "unsurprisingly,", "171": "usually,", "172": "well,", "173": "yet,"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 334809726, "num_examples": 1566000}, {"name": "validation", "num_bytes": 18607661, "num_examples": 87000}, {"name": "test", "num_bytes": 18615474, "num_examples": 87000}], "download_size": 146233621, "dataset_size": 372032861}, {"config_name": "discoverysmall", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "[no-conn]", "1": "absolutely,", "2": "accordingly", "3": "actually,", "4": "additionally", "5": "admittedly,", "6": "afterward", "7": "again,", "8": "already,", "9": "also,", "10": "alternately,", "11": "alternatively", "12": "although,", "13": "altogether,", "14": "amazingly,", "15": "and", "16": "anyway,", "17": "apparently,", "18": "arguably,", "19": "as_a_result,", "20": "basically,", "21": "because_of_that", "22": "because_of_this", "23": "besides,", "24": "but", "25": "by_comparison,", "26": "by_contrast,", "27": "by_doing_this,", "28": "by_then", "29": "certainly,", "30": "clearly,", "31": "coincidentally,", "32": "collectively,", "33": "consequently", "34": "conversely", "35": "curiously,", "36": "currently,", "37": "elsewhere,", "38": "especially,", "39": "essentially,", "40": "eventually,", "41": "evidently,", "42": "finally,", "43": "first,", "44": "firstly,", "45": "for_example", "46": "for_instance", "47": "fortunately,", "48": "frankly,", "49": "frequently,", "50": "further,", "51": "furthermore", "52": "generally,", "53": "gradually,", "54": "happily,", "55": "hence,", "56": "here,", "57": "historically,", "58": "honestly,", "59": "hopefully,", "60": "however", "61": "ideally,", "62": "immediately,", "63": "importantly,", "64": "in_contrast,", "65": "in_fact,", "66": "in_other_words", "67": "in_particular,", "68": "in_short,", "69": "in_sum,", "70": "in_the_end,", "71": "in_the_meantime,", "72": "in_turn,", "73": "incidentally,", "74": "increasingly,", "75": "indeed,", "76": "inevitably,", "77": "initially,", "78": "instead,", "79": "interestingly,", "80": "ironically,", "81": "lastly,", "82": "lately,", "83": "later,", "84": "likewise,", "85": "locally,", "86": "luckily,", "87": "maybe,", "88": "meaning,", "89": "meantime,", "90": "meanwhile,", "91": "moreover", "92": "mostly,", "93": "namely,", "94": "nationally,", "95": "naturally,", "96": "nevertheless", "97": "next,", "98": "nonetheless", "99": "normally,", "100": "notably,", "101": "now,", "102": "obviously,", "103": "occasionally,", "104": "oddly,", "105": "often,", "106": "on_the_contrary,", "107": "on_the_other_hand", "108": "once,", "109": "only,", "110": "optionally,", "111": "or,", "112": "originally,", "113": "otherwise,", "114": "overall,", "115": "particularly,", "116": "perhaps,", "117": "personally,", "118": "plus,", "119": "preferably,", "120": "presently,", "121": "presumably,", "122": "previously,", "123": "probably,", "124": "rather,", "125": "realistically,", 
"126": "really,", "127": "recently,", "128": "regardless,", "129": "remarkably,", "130": "sadly,", "131": "second,", "132": "secondly,", "133": "separately,", "134": "seriously,", "135": "significantly,", "136": "similarly,", "137": "simultaneously", "138": "slowly,", "139": "so,", "140": "sometimes,", "141": "soon,", "142": "specifically,", "143": "still,", "144": "strangely,", "145": "subsequently,", "146": "suddenly,", "147": "supposedly,", "148": "surely,", "149": "surprisingly,", "150": "technically,", "151": "thankfully,", "152": "then,", "153": "theoretically,", "154": "thereafter,", "155": "thereby,", "156": "therefore", "157": "third,", "158": "thirdly,", "159": "this,", "160": "though,", "161": "thus,", "162": "together,", "163": "traditionally,", "164": "truly,", "165": "truthfully,", "166": "typically,", "167": "ultimately,", "168": "undoubtedly,", "169": "unfortunately,", "170": "unsurprisingly,", "171": "usually,", "172": "well,", "173": "yet,"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 3355192, "num_examples": 15662}, {"name": "validation", "num_bytes": 185296, "num_examples": 871}, {"name": "test", "num_bytes": 187471, "num_examples": 869}], "download_size": 146233621, "dataset_size": 3727959}], "train-eval-index": [{"config": "discovery", "task": "text-classification", "task_id": "multi-class-classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}, {"config": "discoverysmall", "task": "text-classification", "task_id": "multi-class-classification", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"sentence1": "text1", "sentence2": "text2", "label": "target"}}]}
2024-01-18T11:02:42+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #discourse-marker-prediction #region-us
# Dataset Card for Discovery ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: URL at URL ### Dataset Summary Discourse marker prediction with 174 markers ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure input : sentence1, sentence2, label: marker originally between sentence1 and sentence2 ### Data Instances ### Data Fields ### Data Splits Train/Val/Test ## Dataset Creation ### Curation Rationale ### Source Data Aranea english web corpus #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations Self supervised (see paper) #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @sileod for adding this dataset.
[ "# Dataset Card for Discovery", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: URL at URL", "### Dataset Summary\n\nDiscourse marker prediction with 174 markers", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure\n\ninput : sentence1, sentence2, \nlabel: marker originally between sentence1 and sentence2", "### Data Instances", "### Data Fields", "### Data Splits\n\nTrain/Val/Test", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\nAranea english web corpus", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations\n\nSelf supervised (see paper)", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @sileod for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #discourse-marker-prediction #region-us \n", "# Dataset Card for Discovery", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: URL at URL", "### Dataset Summary\n\nDiscourse marker prediction with 174 markers", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure\n\ninput : sentence1, sentence2, \nlabel: marker originally between sentence1 and sentence2", "### Data Instances", "### Data Fields", "### Data Splits\n\nTrain/Val/Test", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\nAranea english web corpus", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations\n\nSelf supervised (see paper)", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @sileod for adding this dataset." ]
[ 98, 6, 120, 30, 16, 10, 5, 25, 6, 5, 10, 5, 7, 10, 10, 10, 13, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #discourse-marker-prediction #region-us \n# Dataset Card for Discovery## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: URL at URL### Dataset Summary\n\nDiscourse marker prediction with 174 markers### Supported Tasks and Leaderboards### Languages\n\nEnglish## Dataset Structure\n\ninput : sentence1, sentence2, \nlabel: marker originally between sentence1 and sentence2### Data Instances### Data Fields### Data Splits\n\nTrain/Val/Test## Dataset Creation### Curation Rationale### Source Data\n\nAranea english web corpus#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations\n\nSelf supervised (see paper)#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @sileod for adding this dataset." ]
33ba9ca625621ca4c085cd80d59d26fe865c96f8
# Dataset Card for DISFL-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Disfl-QA](https://github.com/google-research-datasets/disfl-qa) - **Paper:** [Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering](https://arxiv.org/pdf/2106.04016.pdf) - **Point of Contact:** [disfl-qa team](disfl-qa@google.com) ### Dataset Summary Disfl-QA is a targeted dataset for contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages. Disfl-QA builds upon the SQuAD-v2 ([Rajpurkar et al., 2018](https://www.aclweb.org/anthology/P18-2124/)) dataset, where each question in the dev set is annotated to add a contextual disfluency using the paragraph as a source of distractors. The final dataset consists of ~12k (disfluent question, answer) pairs. Over 90\% of the disfluencies are corrections or restarts, making it a much harder test set for disfluency correction. Disfl-QA aims to fill a major gap between speech and NLP research community. The authors hope the dataset can serve as a benchmark dataset for testing robustness of models against disfluent inputs. The experiments reveal that the state-of-the-art models are brittle when subjected to disfluent inputs from Disfl-QA. Detailed experiments and analyses can be found in the [paper](https://arxiv.org/pdf/2106.04016.pdf). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is in English only. ## Dataset Structure ### Data Instances This example was too long and was cropped: ``` { "answers": { "answer_start": [94, 87, 94, 94], "text": ["10th and 11th centuries", "in the 10th and 11th centuries", "10th and 11th centuries", "10th and 11th centuries"] }, "context": "\"The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave thei...", "id": "56ddde6b9a695914005b9629", "original question": "When were the Normans in Normandy?", "disfluent question": "From which countries no tell me when were the Normans in Normandy?", "title": "Normans" } ``` ### Data Fields - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `original question`: Original question from SQuAD-v2 (a `string` feature) - `disfluent question`: Disfluent question from Disfl-QA (a `string` feature) - `answers`: a dictionary feature containing: - `text`: a `string` feature. 
- `answer_start`: a `int32` feature. ### Data Splits Disfl-QA consists of ~12k disfluent questions with the following train/dev/test splits: | File | Questions | |-----|-----| |train.json | 7182 | |dev.json | 1000 | |test.json | 3643 | ## Dataset Creation ### Curation Rationale The research in NLP and speech community has been impeded by the lack of curated datasets containing such disfluencies. The datasets available today are mostly conversational in nature, and span a limited number of very specific domains (e.g., telephone conversations, court proceedings). Furthermore, only a small fraction of the utterances in these datasets contain disfluencies, with a limited and skewed distribution of disfluencies types. In the most popular dataset in the literature, the SWITCHBOARD corpus (Godfrey et al., 1992), only 5.9% of the words are disfluencies (Charniak and Johnson, 2001), of which > 50% are repetitions (Shriberg, 1996), which has been shown to be the relatively simpler form of disfluencies (Zayats et al., 2014; Jamshid Lou et al., 2018; Zayats et al., 2019). To fill this gap, the authors presented DISFL-QA, the first dataset containing contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages. ### Source Data #### Initial Data Collection and Normalization DISFL-QA is constructed by asking human raters to insert disfluencies in questions from SQUAD-v2, a popular question answering dataset, using the passage and remaining questions as context. These contextual disfluencies lend naturalness to DISFL-QA, and challenge models relying on shallow matching between question and context to predict an answer. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process Each question associated with the paragraph is sent for a human annotation task to add a contextual disfluency using the paragraph as a source of distractors. Finally, to ensure the quality of the dataset, a subsequent round of human evaluation with an option to re-annotate is conducted. #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Disfl-QA dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). ### Citation Information ``` @inproceedings{gupta-etal-2021-disflqa, title = "{Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering}", author = "Gupta, Aditya and Xu, Jiacheng and Upadhyay, Shyam and Yang, Diyi and Faruqui, Manaal", booktitle = "Findings of ACL", year = "2021" } ``` ### Contributions Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
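As a brief, hedged illustration (not part of the original card), the fields described above could be read with the Hugging Face `datasets` library roughly as follows, assuming the dataset id `disfl_qa` used in this record and the train/test/validation splits listed in its metadata.

```python
# Hedged sketch: compare the fluent and disfluent forms of one Disfl-QA question.
from datasets import load_dataset

dev = load_dataset("disfl_qa", split="validation")
row = dev[0]
print(row["original question"])    # fluent question from SQuAD-v2
print(row["disfluent question"])   # same question with an inserted contextual disfluency
print(row["answers"]["text"])      # gold answer span(s) from the passage
```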
disfl_qa
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2106.04016", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa", "open-domain-qa"], "pretty_name": "DISFL-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering", "dataset_info": {"features": [{"name": "squad_v2_id", "dtype": "string"}, {"name": "original question", "dtype": "string"}, {"name": "disfluent question", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 7712523, "num_examples": 7182}, {"name": "test", "num_bytes": 3865097, "num_examples": 3643}, {"name": "validation", "num_bytes": 1072731, "num_examples": 1000}], "download_size": 48935038, "dataset_size": 12650351}}
2024-01-18T11:02:43+00:00
[ "2106.04016" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-extractive-qa #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2106.04016 #region-us
Dataset Card for DISFL-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering =================================================================================================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Disfl-QA * Paper: Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering * Point of Contact: disfl-qa team ### Dataset Summary Disfl-QA is a targeted dataset for contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages. Disfl-QA builds upon the SQuAD-v2 (Rajpurkar et al., 2018) dataset, where each question in the dev set is annotated to add a contextual disfluency using the paragraph as a source of distractors. The final dataset consists of ~12k (disfluent question, answer) pairs. Over 90% of the disfluencies are corrections or restarts, making it a much harder test set for disfluency correction. Disfl-QA aims to fill a major gap between speech and NLP research community. The authors hope the dataset can serve as a benchmark dataset for testing robustness of models against disfluent inputs. The expriments reveal that the state-of-the-art models are brittle when subjected to disfluent inputs from Disfl-QA. Detailed experiments and analyses can be found in the paper. ### Supported Tasks and Leaderboards ### Languages The dataset is in English only. Dataset Structure ----------------- ### Data Instances This example was too long and was cropped: ### Data Fields * 'id': a 'string' feature. * 'title': a 'string' feature. * 'context': a 'string' feature. * 'original question': Original question from SQuAD-v2 (a 'string' feature) * 'disfluent question': Disfluent question from Disfl-QA (a 'string' feature) * 'answers': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. ### Data Splits Disfl-QA consists of ~12k disfluent questions with the following train/dev/test splits: Dataset Creation ---------------- ### Curation Rationale The research in NLP and speech community has been impeded by the lack of curated datasets containing such disfluencies. The datasets available today are mostly conversational in nature, and span a limited number of very specific domains (e.g., telephone conversations, court proceedings). Furthermore, only a small fraction of the utterances in these datasets contain disfluencies, with a limited and skewed distribution of disfluencies types. In the most popular dataset in the literature, the SWITCHBOARD corpus (Godfrey et al., 1992), only 5.9% of the words are disfluencies (Charniak and Johnson, 2001), of which > 50% are repetitions (Shriberg, 1996), which has been shown to be the relatively simpler form of disfluencies (Zayats et al., 2014; Jamshid Lou et al., 2018; Zayats et al., 2019). To fill this gap, the authors presented DISFL-QA, the first dataset containing contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages. 
### Source Data #### Initial Data Collection and Normalization DISFL-QA is constructed by asking human raters to insert disfluencies in questions from SQUAD-v2, a popular question answering dataset, using the passage and remaining questions as context. These contextual disfluencies lend naturalness to DISFL-QA, and challenge models relying on shallow matching between question and context to predict an answer. #### Who are the source language producers? ### Annotations #### Annotation process Each question associated with the paragraph is sent for a human annotation task to add a contextual disfluency using the paragraph as a source of distractors. Finally, to ensure the quality of the dataset, a subsequent round of human evaluation with an option to re-annotate is conducted. #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Disfl-QA dataset is licensed under CC BY 4.0. ### Contributions Thanks to @bhavitvyamalik for adding this dataset.
fc8ede49f0d67fcb21dbcbc1bde387a6ef7a126b
# Dataset Card for doc2dial ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://doc2dial.github.io - **Repository:** [Needs More Information] - **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.652.pdf - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Doc2dial is dataset of goal-oriented dialogues that are grounded in the associated documents. It includes over 4500 annotated conversations with an average of 14 turns that are grounded in over 450 documents from four domains. Compared to the prior document-grounded dialogue datasets this dataset covers a variety of dialogue scenes in information-seeking conversations. ### Supported Tasks and Leaderboards > Supported Task: [Shared Task](https://doc2dial.github.io/workshop2021/shared.html) hosted by DialDoc21 at ACL. > Leaderboard: [LINK](https://eval.ai/web/challenges/challenge-page/793) ### Languages English ## Dataset Structure ### Data Instances Sample data instance for `dialogue_domain` : ``` { "dial_id": "9f44c1539efe6f7e79b02eb1b413aa43", "doc_id": "Top 5 DMV Mistakes and How to Avoid Them#3_0", "domain": "dmv", "turns": [ { "da": "query_condition", "references": [ { "sp_id": "4", "label": "precondition" } ], "role": "user", "turn_id": 1, "utterance": "Hello, I forgot o update my address, can you help me with that?" }, { "da": "response_solution", "references": [ { "sp_id": "6", "label": "solution" }, { "sp_id": "7", "label": "solution" }, { "sp_id": "4", "label": "references" } ], "role": "agent", "turn_id": 2, "utterance": "hi, you have to report any change of address to DMV within 10 days after moving. You should do this both for the address associated with your license and all the addresses associated with all your vehicles." }, { "da": "query_solution", "references": [ { "sp_id": "56", "label": "solution" }, { "sp_id": "48", "label": "references" } ], "role": "user", "turn_id": 3, "utterance": "Can I do my DMV transactions online?" }, { "da": "respond_solution", "references": [ { "sp_id": "56", "label": "solution" }, { "sp_id": "48", "label": "references" } ], "role": "agent", "turn_id": 4, "utterance": "Yes, you can sign up for MyDMV for all the online transactions needed." }, { "da": "query_condition", "references": [ { "sp_id": "48", "label": "precondition" } ], "role": "user", "turn_id": 5, "utterance": "Thanks, and in case I forget to bring all of the documentation needed to the DMV office, what can I do?" 
}, { "da": "respond_solution", "references": [ { "sp_id": "49", "label": "solution" }, { "sp_id": "50", "label": "solution" }, { "sp_id": "52", "label": "solution" }, { "sp_id": "48", "label": "references" } ], "role": "agent", "turn_id": 6, "utterance": "This happens often with our customers so that's why our website and MyDMV are so useful for our customers. Just check if you can make your transaction online so you don't have to go to the DMV Office." }, { "da": "query_solution", "references": [ { "sp_id": "6", "label": "solution" }, { "sp_id": "7", "label": "solution" }, { "sp_id": "4", "label": "references" } ], "role": "user", "turn_id": 7, "utterance": "Ok, and can you tell me again where should I report my new address?" }, { "da": "respond_solution", "references": [ { "sp_id": "6", "label": "solution" }, { "sp_id": "7", "label": "solution" }, { "sp_id": "4", "label": "references" } ], "role": "agent", "turn_id": 8, "utterance": "Sure. Any change of address must be reported to the DMV, that's for the address associated with your license and any of your vehicles." }, { "da": "query_condition", "references": [ { "sp_id": "40", "label": "precondition" } ], "role": "user", "turn_id": 9, "utterance": "Can you tell me more about Traffic points and their cost?" }, { "da": "respond_solution", "references": [ { "sp_id": "41", "label": "solution" }, { "sp_id": "43", "label": "solution" }, { "sp_id": "40", "label": "references" } ], "role": "agent", "turn_id": 10, "utterance": "Traffic points is the system used by DMV to track dangerous drivers. The cost of the traffic points is independent of the DRA, so you get a separate charge based on the total points you accumulate." } ] } ``` Sample data instance for `document_domain` : ``` { "doc_id": "Benefits Planner: Retirement | Online Calculator (WEP Version)#1_0", "domain": "ssa", "doc_html_raw": "<main class=\"content\" id=\"content\" role=\"main\">\n\n<section>\n\n<div>\n<h2>\nBenefits Planner: Retirement\n</h2>\n</div>\n</section>\n\n\n<section>\n\n<div>\n\n<div>\n\n\n</div>\n\n<article>\n<section>\n\n<h3>Online Calculator (WEP Version)</h3>\n<p>The calculator shown below allows you to estimate your Social Security benefit.\nHowever, for the most accurate estimates, <a>use the Detailed Calculator</a>.</p>\n<p>You need to enter all your past earnings\n, which are shown on your <a>online </a>.</p>\n\n<p>Please Note:</p>\n<ul class=\"browser-default\">\n<li>The Online Calculator is updated periodically<span>*</span> with new benefit increases and other benefit amounts. Therefore, it is likely that your benefit estimates in the future will differ from those calculated today.</li>\n<li>The Online Calculator works on PCs and Macs with Javascript enabled.</li>\n<li>Some browsers may not allow you to print the table below. </li>\n</ul>\n<p></p>\n\n<div>\nThe Online Calculator temporarily stores information on your local computer while your browser is open. 
To protect your personal information, you should close your browser after you have finished your estimate.\n</div>\n<p></p>\n\n<div>\n<p>Note: If your birthday is on January 1st, we figure your benefit as if your birthday was in the previous year.</p>\n<p>If you qualify for benefits as a Survivor, your <a>full retirement age for survivors benefits</a> may be different.</p></div>\n\n<div>\n</div></section></article></div></section></main>", "doc_html_ts": "<main><section><div><h2 sent_id=\"1\" text_id=\"1\">Benefits Planner: Retirement</h2></div></section><section><div><article><section><h3 sent_id=\"2\" text_id=\"2\">Online Calculator (WEP Version)</h3><div tag_id=\"1\"><u sent_id=\"3\" tag_id=\"1\"><u sent_id=\"3\" tag_id=\"1\" text_id=\"3\">The calculator shown below allows you to estimate your Social Security benefit .</u></u><u sent_id=\"4\" tag_id=\"1\"><u sent_id=\"4\" tag_id=\"1\" text_id=\"4\">However ,</u><u sent_id=\"4\" tag_id=\"1\" text_id=\"5\">for the most accurate estimates ,</u><u sent_id=\"4\" tag_id=\"1\" text_id=\"6\">use the Detailed Calculator .</u></u></div><div tag_id=\"2\"><u sent_id=\"5\" tag_id=\"2\"><u sent_id=\"5\" tag_id=\"2\" text_id=\"7\">You need to enter all your past earnings , which are shown on your online .</u></u></div><div tag_id=\"3\"><u sent_id=\"6\" tag_id=\"3\"><u sent_id=\"6\" tag_id=\"3\" text_id=\"8\">Please Note:</u></u></div><ul class=\"browser-default\" tag_id=\"3\"><li tag_id=\"3\"><div tag_id=\"3\"><u sent_id=\"9\" tag_id=\"3\"><u sent_id=\"9\" tag_id=\"3\" text_id=\"9\">The Online Calculator is updated periodically * with new benefit increases and other benefit amounts .</u></u><u sent_id=\"10\" tag_id=\"3\"><u sent_id=\"10\" tag_id=\"3\" text_id=\"10\">Therefore ,</u><u sent_id=\"10\" tag_id=\"3\" text_id=\"11\">it is likely that your benefit estimates in the future will differ from those calculated today .</u></u></div></li><li tag_id=\"3\"><u sent_id=\"11\" tag_id=\"3\"><u sent_id=\"11\" tag_id=\"3\" text_id=\"12\">The Online Calculator works on PCs and Macs with Javascript enabled .</u></u></li><li tag_id=\"3\"><u sent_id=\"12\" tag_id=\"3\"><u sent_id=\"12\" tag_id=\"3\" text_id=\"13\">Some browsers may not allow you to print the table below .</u></u></li></ul><div>The Online Calculator temporarily stores information on your local computer while your browser is open. To protect your personal information, you should close your browser after you have finished your estimate.</div><div><div tag_id=\"4\"><u sent_id=\"13\" tag_id=\"4\"><u sent_id=\"13\" tag_id=\"4\" text_id=\"14\">Note:</u></u><u sent_id=\"14\" tag_id=\"4\"><u sent_id=\"14\" tag_id=\"4\" text_id=\"15\">If your birthday is on January 1st ,</u><u sent_id=\"14\" tag_id=\"4\" text_id=\"16\">we figure your benefit as if your birthday was in the previous year .</u></u></div><div tag_id=\"5\"><u sent_id=\"15\" tag_id=\"5\"><u sent_id=\"15\" tag_id=\"5\" text_id=\"17\">If you qualify for benefits as a Survivor ,</u><u sent_id=\"15\" tag_id=\"5\" text_id=\"18\">your full retirement age for survivors benefits may be different .</u></u></div></div></section></article></div></section></main>", "doc_text": "\n\nBenefits Planner: Retirement \n\n\nOnline Calculator (WEP Version) \nThe calculator shown below allows you to estimate your Social Security benefit. However , for the most accurate estimates , use the Detailed Calculator. You need to enter all your past earnings, which are shown on your online. 
Please Note: The Online Calculator is updated periodically * with new benefit increases and other benefit amounts. Therefore , it is likely that your benefit estimates in the future will differ from those calculated today. The Online Calculator works on PCs and Macs with Javascript enabled. Some browsers may not allow you to print the table below. Note: If your birthday is on January 1st , we figure your benefit as if your birthday was in the previous year. If you qualify for benefits as a Survivor , your full retirement age for survivors benefits may be different. ", "title": "Benefits Planner: Retirement | Online Calculator (WEP Version)#1", "spans": [ { "end_sec": 32, "end_sp": 32, "id_sec": "t_0", "id_sp": "1", "parent_titles": "[]", "start_sec": 0, "start_sp": 0, "tag": "h2", "text_sec": "\n\nBenefits Planner: Retirement \n", "text_sp": "\n\nBenefits Planner: Retirement \n", "title": "Benefits Planner: Retirement" }, { "end_sec": 67, "end_sp": 67, "id_sec": "t_1", "id_sp": "2", "parent_titles": "[{'id_sp': '1', 'text': 'Benefits Planner: Retirement', 'level': 'h2'}]", "start_sec": 32, "start_sp": 32, "tag": "h3", "text_sec": "\n\nOnline Calculator (WEP Version) \n", "text_sp": "\n\nOnline Calculator (WEP Version) \n", "title": "Online Calculator (WEP Version)" }, { "end_sec": 220, "end_sp": 147, "id_sec": "1", "id_sp": "3", "parent_titles": "[]", "start_sec": 67, "start_sp": 67, "tag": "u", "text_sec": "The calculator shown below allows you to estimate your Social Security benefit. However , for the most accurate estimates , use the Detailed Calculator. ", "text_sp": "The calculator shown below allows you to estimate your Social Security benefit. ", "title": "Online Calculator (WEP Version)" } ] } ``` Sample data instance for `doc2dial_rc` : ``` { "id": "78f72b08b43791a4a70363fe62b8de08_1", "is_impossible": false, "question": "Hello, I want to know about the retirement plan.", "answers": { "answer_start": [ 0 ], "text": [ "\n\nBenefits Planner: Retirement \n\n\nOnline Calculator (WEP Version) \n" ] }, "context": "\n\nBenefits Planner: Retirement \n\n\nOnline Calculator (WEP Version) \nThe calculator shown below allows you to estimate your Social Security benefit. However , for the most accurate estimates , use the Detailed Calculator. You need to enter all your past earnings, which are shown on your online. Please Note: The Online Calculator is updated periodically * with new benefit increases and other benefit amounts. Therefore , it is likely that your benefit estimates in the future will differ from those calculated today. The Online Calculator works on PCs and Macs with Javascript enabled. Some browsers may not allow you to print the table below. Note: If your birthday is on January 1st , we figure your benefit as if your birthday was in the previous year. If you qualify for benefits as a Survivor , your full retirement age for survivors benefits may be different. ", "title": "Benefits Planner: Retirement | Online Calculator (WEP Version)#1_0", "domain": "ssa" } ``` ### Data Fields For `document_domain`, - `doc_id`: the ID of a document; - `title`: the title of the document; - `domain`: the domain of the document; - `doc_text`: the text content of the document (without HTML markups); - `doc_html_ts`: the document content with HTML markups and the annotated spans that are indicated by `text_id` attribute, which corresponds to `id_sp`. - `doc_html_raw`: the document content with HTML markups and without span annotations. 
- `spans`: key-value pairs of all spans in the document, with `id_sp` as key. Each span includes the following, - `id_sp`: the id of a span as noted by `text_id` in `doc_html_ts`; - `start_sp`/ `end_sp`: the start/end position of the text span in `doc_text`; - `text_sp`: the text content of the span. - `id_sec`: the id of the (sub)section (e.g. `<p>`) or title (`<h2>`) that contains the span. - `start_sec` / `end_sec`: the start/end position of the (sub)section in `doc_text`. - `text_sec`: the text of the (sub)section. - `title`: the title of the (sub)section. - `parent_titles`: the parent titles of the `title`. For `dialogue_domain`: - `dial_id`: the ID of a dialogue; - `doc_id`: the ID of the associated document; - `domain`: domain of the document; - `turns`: a list of dialogue turns. Each turn includes, - `turn_id`: the time order of the turn; - `role`: either "agent" or "user"; - `da`: dialogue act; - `references`: the grounding span (`id_sp`) in the associated document. If a turn is an irrelevant turn, i.e., `da` ends with "ood", `reference` is empty. **Note** that spans with labels "*precondition*"/"*solution*" are the actual grounding spans. Spans with label "*reference*" are the related titles or contextual reference, which is used for the purpose of describing a dialogue scene better to crowd contributors. - `utterance`: the human-generated utterance based on the dialogue scene. For `doc2dial_rc`, this conforms to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) data format. For how to load Doc2Dial data for reading comprehension task, please refer [here](https://github.com/doc2dial/sharedtask-dialdoc2021). - `id`: the ID of a QA instance; - `question`: user query; - `answers`: the answers that are grounded in the associated document; - `answer_start`: the start position of the grounding span in the associated document (`context`); - `text`: the text content of the grounding span; - `title`: the title of the associated document; - `domain`: the domain of the associated document; - `context`: the text content of the associated document (without HTML markups). ### Data Splits Training & dev split for dialogue domain Training split only for document domain ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Song Feng, Hui Wan, Chulaka Gunasekara, Siva Sankalp Patel,Sachindra Joshi. Luis A. 
Lastras ### Licensing Information Creative Commons Attribution 3.0 Unported ### Citation Information @inproceedings{feng-etal-2020-doc2dial, title = "doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset", author = "Feng, Song and Wan, Hui and Gunasekara, Chulaka and Patel, Siva and Joshi, Sachindra and Lastras, Luis", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.652", } ### Contributions Thanks to [@songfeng](https://github.com/songfeng), [@KMFODA](https://github.com/KMFODA) for adding this dataset.
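Because the `doc2dial_rc` configuration follows the SQuAD format and the other two configurations are described field by field above, a short loading sketch may be helpful. It assumes the Hub id `doc2dial` and the configuration and split names given in this card, so treat it as illustrative rather than as the official loading recipe.

```
from datasets import load_dataset

# SQuAD-style reading comprehension view of the grounded dialogues.
rc = load_dataset("doc2dial", "doc2dial_rc", split="train")
sample = rc[0]
print(sample["question"])                    # user query
print(sample["answers"]["text"][0])          # text of the grounding span
print(sample["answers"]["answer_start"][0])  # offset into sample["context"]

# Dialogue view: every turn carries a dialogue act and grounding references.
dialogues = load_dataset("doc2dial", "dialogue_domain", split="train")
for turn in dialogues[0]["turns"][:2]:
    print(turn["role"], turn["da"], [ref["sp_id"] for ref in turn["references"]])
```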
doc2dial
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "paperswithcode_id": "doc2dial", "pretty_name": "doc2dial", "dataset_info": [{"config_name": "dialogue_domain", "features": [{"name": "dial_id", "dtype": "string"}, {"name": "doc_id", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "turns", "list": [{"name": "turn_id", "dtype": "int32"}, {"name": "role", "dtype": "string"}, {"name": "da", "dtype": "string"}, {"name": "references", "list": [{"name": "sp_id", "dtype": "string"}, {"name": "label", "dtype": "string"}]}, {"name": "utterance", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 6924209, "num_examples": 3474}, {"name": "validation", "num_bytes": 1315815, "num_examples": 661}], "download_size": 5879543, "dataset_size": 8240024}, {"config_name": "document_domain", "features": [{"name": "domain", "dtype": "string"}, {"name": "doc_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "doc_text", "dtype": "string"}, {"name": "spans", "list": [{"name": "id_sp", "dtype": "string"}, {"name": "tag", "dtype": "string"}, {"name": "start_sp", "dtype": "int32"}, {"name": "end_sp", "dtype": "int32"}, {"name": "text_sp", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "parent_titles", "dtype": "string"}, {"name": "id_sec", "dtype": "string"}, {"name": "start_sec", "dtype": "int32"}, {"name": "text_sec", "dtype": "string"}, {"name": "end_sec", "dtype": "int32"}]}, {"name": "doc_html_ts", "dtype": "string"}, {"name": "doc_html_raw", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 204874908, "num_examples": 3416}], "download_size": 5879543, "dataset_size": 204874908}, {"config_name": "doc2dial_rc", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "domain", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 22705288, "num_examples": 3972}, {"name": "train", "num_bytes": 114778994, "num_examples": 20431}], "download_size": 5879543, "dataset_size": 137484282}]}
2024-01-18T11:02:44+00:00
[]
[ "en" ]
7985b4e0371e6c61a756feb41b7b27becf71c666
# Dataset Card for DocRED ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [https://github.com/thunlp/DocRED](https://github.com/thunlp/DocRED) - **Paper:** [DocRED: A Large-Scale Document-Level Relation Extraction Dataset](https://arxiv.org/abs/1906.06127) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 21.00 MB - **Size of the generated dataset:** 20.12 MB - **Total amount of disk used:** 41.14 MB ### Dataset Summary Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features: - DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text. - DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document. - Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 21.00 MB - **Size of the generated dataset:** 20.12 MB - **Total amount of disk used:** 41.14 MB An example of 'train_annotated' looks as follows. ``` { "labels": { "evidence": [[0]], "head": [0], "relation_id": ["P1"], "relation_text": ["is_a"], "tail": [0] }, "sents": [["This", "is", "a", "sentence"], ["This", "is", "another", "sentence"]], "title": "Title of the document", "vertexSet": [[{ "name": "sentence", "pos": [3], "sent_id": 0, "type": "NN" }, { "name": "sentence", "pos": [3], "sent_id": 1, "type": "NN" }], [{ "name": "This", "pos": [0], "sent_id": 0, "type": "NN" }]] } ``` ### Data Fields The data fields are the same among all splits. 
#### default - `title`: a `string` feature. - `sents`: a dictionary feature containing: - `feature`: a `string` feature. - `name`: a `string` feature. - `sent_id`: a `int32` feature. - `pos`: a `list` of `int32` features. - `type`: a `string` feature. - `labels`: a dictionary feature containing: - `head`: a `int32` feature. - `tail`: a `int32` feature. - `relation_id`: a `string` feature. - `relation_text`: a `string` feature. - `evidence`: a `list` of `int32` features. ### Data Splits | name |train_annotated|train_distant|validation|test| |-------|--------------:|------------:|---------:|---:| |default| 3053| 101873| 998|1000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{yao-etal-2019-docred, title = "{D}oc{RED}: A Large-Scale Document-Level Relation Extraction Dataset", author = "Yao, Yuan and Ye, Deming and Li, Peng and Han, Xu and Lin, Yankai and Liu, Zhenghao and Liu, Zhiyuan and Huang, Lixin and Zhou, Jie and Sun, Maosong", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1074", doi = "10.18653/v1/P19-1074", pages = "764--777", } ``` ### Contributions Thanks to [@ghomasHudson](https://github.com/ghomasHudson), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
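To make the interplay between `labels` and `vertexSet` concrete, the sketch below resolves the `head`/`tail` indices of each annotated relation into entity names and prints the resulting triples. The Hub id `docred` and the `train_annotated` split name are taken from this card, but the snippet is a hedged illustration rather than official usage code.

```
from datasets import load_dataset

docred = load_dataset("docred", split="train_annotated")
doc = docred[0]

labels = doc["labels"]  # dict of parallel lists: head, tail, relation_id, relation_text, evidence
for head, tail, relation in zip(labels["head"], labels["tail"], labels["relation_text"]):
    # Each vertexSet entry is a cluster of mentions of one entity; use the first mention's name.
    head_name = doc["vertexSet"][head][0]["name"]
    tail_name = doc["vertexSet"][tail][0]["name"]
    print(f"{head_name} --{relation}--> {tail_name}")
```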
docred
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:mit", "arxiv:1906.06127", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["entity-linking-retrieval"], "paperswithcode_id": "docred", "pretty_name": "DocRED", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "sents", "sequence": {"sequence": "string"}}, {"name": "vertexSet", "list": {"list": [{"name": "name", "dtype": "string"}, {"name": "sent_id", "dtype": "int32"}, {"name": "pos", "sequence": "int32"}, {"name": "type", "dtype": "string"}]}}, {"name": "labels", "sequence": [{"name": "head", "dtype": "int32"}, {"name": "tail", "dtype": "int32"}, {"name": "relation_id", "dtype": "string"}, {"name": "relation_text", "dtype": "string"}, {"name": "evidence", "sequence": "int32"}]}], "splits": [{"name": "validation", "num_bytes": 3425030, "num_examples": 998}, {"name": "test", "num_bytes": 2843877, "num_examples": 1000}, {"name": "train_annotated", "num_bytes": 10413156, "num_examples": 3053}, {"name": "train_distant", "num_bytes": 346001876, "num_examples": 101873}], "download_size": 458040413, "dataset_size": 362683939}}
2023-06-14T13:07:55+00:00
[ "1906.06127" ]
[ "en" ]
TAGS #task_categories-text-retrieval #task_ids-entity-linking-retrieval #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-mit #arxiv-1906.06127 #region-us
Dataset Card for DocRED ======================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Repository: URL * Paper: DocRED: A Large-Scale Document-Level Relation Extraction Dataset * Point of Contact: * Size of downloaded dataset files: 21.00 MB * Size of the generated dataset: 20.12 MB * Total amount of disk used: 41.14 MB ### Dataset Summary Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features: - DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text. - DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document. - Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 21.00 MB * Size of the generated dataset: 20.12 MB * Total amount of disk used: 41.14 MB An example of 'train\_annotated' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'title': a 'string' feature. * 'sents': a dictionary feature containing: + 'feature': a 'string' feature. * 'name': a 'string' feature. * 'sent\_id': a 'int32' feature. * 'pos': a 'list' of 'int32' features. * 'type': a 'string' feature. * 'labels': a dictionary feature containing: + 'head': a 'int32' feature. + 'tail': a 'int32' feature. + 'relation\_id': a 'string' feature. + 'relation\_text': a 'string' feature. + 'evidence': a 'list' of 'int32' features. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @ghomasHudson, @thomwolf, @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nMultiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features:\n- DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text.\n- DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document.\n- Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 21.00 MB\n* Size of the generated dataset: 20.12 MB\n* Total amount of disk used: 41.14 MB\n\n\nAn example of 'train\\_annotated' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'title': a 'string' feature.\n* 'sents': a dictionary feature containing:\n\t+ 'feature': a 'string' feature.\n* 'name': a 'string' feature.\n* 'sent\\_id': a 'int32' feature.\n* 'pos': a 'list' of 'int32' features.\n* 'type': a 'string' feature.\n* 'labels': a dictionary feature containing:\n\t+ 'head': a 'int32' feature.\n\t+ 'tail': a 'int32' feature.\n\t+ 'relation\\_id': a 'string' feature.\n\t+ 'relation\\_text': a 'string' feature.\n\t+ 'evidence': a 'list' of 'int32' features.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ghomasHudson, @thomwolf, @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #task_ids-entity-linking-retrieval #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-mit #arxiv-1906.06127 #region-us \n", "### Dataset Summary\n\n\nMultiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features:\n- DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text.\n- DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document.\n- Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 21.00 MB\n* Size of the generated dataset: 20.12 MB\n* Total amount of disk used: 41.14 MB\n\n\nAn example of 'train\\_annotated' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'title': a 'string' feature.\n* 'sents': a dictionary feature containing:\n\t+ 'feature': a 'string' feature.\n* 'name': a 'string' feature.\n* 'sent\\_id': a 'int32' feature.\n* 'pos': a 'list' of 'int32' features.\n* 'type': a 'string' feature.\n* 'labels': a dictionary feature containing:\n\t+ 'head': a 'int32' feature.\n\t+ 'tail': a 'int32' feature.\n\t+ 'relation\\_id': a 'string' feature.\n\t+ 'relation\\_text': a 'string' feature.\n\t+ 'evidence': a 'list' of 'int32' features.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ghomasHudson, @thomwolf, @lhoestq for adding this dataset." ]
[ 102, 202, 10, 11, 6, 54, 17, 179, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 30 ]
[ "passage: TAGS\n#task_categories-text-retrieval #task_ids-entity-linking-retrieval #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-mit #arxiv-1906.06127 #region-us \n### Dataset Summary\n\n\nMultiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features:\n- DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text.\n- DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document.\n- Along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 21.00 MB\n* Size of the generated dataset: 20.12 MB\n* Total amount of disk used: 41.14 MB\n\n\nAn example of 'train\\_annotated' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits." ]
fa9b1b90b17acc6ec0570afd6e031b50d525ef50
# Dataset Card for "doqa" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/RevanthRameshkumar/CRD3](https://github.com/RevanthRameshkumar/CRD3) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 12.59 MB - **Size of the generated dataset:** 17.70 MB - **Total amount of disk used:** 30.28 MB ### Dataset Summary DoQA is a dataset for accessing Domain Specific FAQs via conversational QA that contains 2,437 information-seeking question/answer dialogues (10,917 questions in total) on three different domains: cooking, travel and movies. Note that we include in the generic concept of FAQs also Community Question Answering sites, as well as corporate information in intranets which is maintained in textual form similar to FAQs, often referred to as internal “knowledge bases”. These dialogues are created by crowd workers that play the following two roles: the user who asks questions about a given topic posted in Stack Exchange (https://stackexchange.com/), and the domain expert who replies to the questions by selecting a short span of text from the long textual reply in the original post. The expert can rephrase the selected span, in order to make it look more natural. The dataset covers unanswerable questions and some relevant dialogue acts. DoQA enables the development and evaluation of conversational QA systems that help users access the knowledge buried in domain specific FAQs. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### cooking - **Size of downloaded dataset files:** 4.19 MB - **Size of the generated dataset:** 11.31 MB - **Total amount of disk used:** 15.51 MB An example of 'train' looks as follows. 
``` This example was too long and was cropped: { "answers": { "answer_start": [852], "text": ["CANNOTANSWER"] }, "background": "\"So, over mixing batter forms gluten, which in turn hardens the cake. Fine.The problem is that I don't want lumps in the cakes, ...", "context": "\"Milk won't help you - it's mostly water, and gluten develops from flour (more accurately, specific proteins in flour) and water...", "followup": "n", "id": "C_64ce44d5f14347f488eb04b50387f022_q#2", "orig_answer": { "answer_start": [852], "text": ["CANNOTANSWER"] }, "question": "Ok. What can I add to make it more softer and avoid hardening?", "title": "What to add to the batter of the cake to avoid hardening when the gluten formation can't be avoided?", "yesno": "x" } ``` #### movies - **Size of downloaded dataset files:** 4.19 MB - **Size of the generated dataset:** 3.17 MB - **Total amount of disk used:** 7.36 MB An example of 'test' looks as follows. ``` This example was too long and was cropped: { "answers": { "answer_start": [852], "text": ["CANNOTANSWER"] }, "background": "\"So, over mixing batter forms gluten, which in turn hardens the cake. Fine.The problem is that I don't want lumps in the cakes, ...", "context": "\"Milk won't help you - it's mostly water, and gluten develops from flour (more accurately, specific proteins in flour) and water...", "followup": "n", "id": "C_64ce44d5f14347f488eb04b50387f022_q#2", "orig_answer": { "answer_start": [852], "text": ["CANNOTANSWER"] }, "question": "Ok. What can I add to make it more softer and avoid hardening?", "title": "What to add to the batter of the cake to avoid hardening when the gluten formation can't be avoided?", "yesno": "x" } ``` #### travel - **Size of downloaded dataset files:** 4.19 MB - **Size of the generated dataset:** 3.22 MB - **Total amount of disk used:** 7.41 MB An example of 'test' looks as follows. ``` This example was too long and was cropped: { "answers": { "answer_start": [852], "text": ["CANNOTANSWER"] }, "background": "\"So, over mixing batter forms gluten, which in turn hardens the cake. Fine.The problem is that I don't want lumps in the cakes, ...", "context": "\"Milk won't help you - it's mostly water, and gluten develops from flour (more accurately, specific proteins in flour) and water...", "followup": "n", "id": "C_64ce44d5f14347f488eb04b50387f022_q#2", "orig_answer": { "answer_start": [852], "text": ["CANNOTANSWER"] }, "question": "Ok. What can I add to make it more softer and avoid hardening?", "title": "What to add to the batter of the cake to avoid hardening when the gluten formation can't be avoided?", "yesno": "x" } ``` ### Data Fields The data fields are the same among all splits. #### cooking - `title`: a `string` feature. - `background`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `id`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. - `followup`: a `string` feature. - `yesno`: a `string` feature. - `orig_answer`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. #### movies - `title`: a `string` feature. - `background`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `id`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. - `followup`: a `string` feature. - `yesno`: a `string` feature. 
- `orig_answer`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. #### travel - `title`: a `string` feature. - `background`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `id`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. - `followup`: a `string` feature. - `yesno`: a `string` feature. - `orig_answer`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. ### Data Splits #### cooking | |train|validation|test| |-------|----:|---------:|---:| |cooking| 4612| 911|1797| #### movies | |test| |------|---:| |movies|1884| #### travel | |test| |------|---:| |travel|1713| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @misc{campos2020doqa, title={DoQA -- Accessing Domain-Specific FAQs via Conversational QA}, author={Jon Ander Campos and Arantxa Otegi and Aitor Soroa and Jan Deriu and Mark Cieliebak and Eneko Agirre}, year={2020}, eprint={2005.01328}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
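As a usage illustration for the field layout above, the sketch below loads the `cooking` configuration with the Hugging Face `datasets` library. The dataset id `doqa` and the config/split names come from this card; newer `datasets` versions may additionally require `trust_remote_code=True` for script-based datasets.

```python
from datasets import load_dataset

doqa_cooking = load_dataset("doqa", "cooking", split="train")

row = doqa_cooking[0]
print(row["title"])
print("Q:", row["question"])
# `answers` is a dict of aligned lists; "CANNOTANSWER" marks unanswerable turns.
print("A:", row["answers"]["text"][0],
      "(starts at char", row["answers"]["answer_start"][0], ")")
```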
doqa
[ "language:en", "arxiv:2005.01328", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "paperswithcode_id": "doqa", "pretty_name": "DoQA", "dataset_info": [{"config_name": "cooking", "features": [{"name": "title", "dtype": "string"}, {"name": "background", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "followup", "dtype": "string"}, {"name": "yesno", "dtype": "string"}, {"name": "orig_answer", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "test", "num_bytes": 2969064, "num_examples": 1797}, {"name": "validation", "num_bytes": 1461613, "num_examples": 911}, {"name": "train", "num_bytes": 6881681, "num_examples": 4612}], "download_size": 4197671, "dataset_size": 11312358}, {"config_name": "movies", "features": [{"name": "title", "dtype": "string"}, {"name": "background", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "followup", "dtype": "string"}, {"name": "yesno", "dtype": "string"}, {"name": "orig_answer", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "test", "num_bytes": 3166075, "num_examples": 1884}], "download_size": 4197671, "dataset_size": 3166075}, {"config_name": "travel", "features": [{"name": "title", "dtype": "string"}, {"name": "background", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "followup", "dtype": "string"}, {"name": "yesno", "dtype": "string"}, {"name": "orig_answer", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "test", "num_bytes": 3216374, "num_examples": 1713}], "download_size": 4197671, "dataset_size": 3216374}]}
2024-01-18T11:02:46+00:00
[ "2005.01328" ]
[ "en" ]
TAGS #language-English #arxiv-2005.01328 #region-us
Dataset Card for "doqa" ======================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 12.59 MB * Size of the generated dataset: 17.70 MB * Total amount of disk used: 30.28 MB ### Dataset Summary DoQA is a dataset for accessing Domain Specific FAQs via conversational QA that contains 2,437 information-seeking question/answer dialogues (10,917 questions in total) on three different domains: cooking, travel and movies. Note that we include in the generic concept of FAQs also Community Question Answering sites, as well as corporate information in intranets which is maintained in textual form similar to FAQs, often referred to as internal “knowledge bases”. These dialogues are created by crowd workers that play the following two roles: the user who asks questions about a given topic posted in Stack Exchange (URL and the domain expert who replies to the questions by selecting a short span of text from the long textual reply in the original post. The expert can rephrase the selected span, in order to make it look more natural. The dataset covers unanswerable questions and some relevant dialogue acts. DoQA enables the development and evaluation of conversational QA systems that help users access the knowledge buried in domain specific FAQs. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### cooking * Size of downloaded dataset files: 4.19 MB * Size of the generated dataset: 11.31 MB * Total amount of disk used: 15.51 MB An example of 'train' looks as follows. #### movies * Size of downloaded dataset files: 4.19 MB * Size of the generated dataset: 3.17 MB * Total amount of disk used: 7.36 MB An example of 'test' looks as follows. #### travel * Size of downloaded dataset files: 4.19 MB * Size of the generated dataset: 3.22 MB * Total amount of disk used: 7.41 MB An example of 'test' looks as follows. ### Data Fields The data fields are the same among all splits. #### cooking * 'title': a 'string' feature. * 'background': a 'string' feature. * 'context': a 'string' feature. * 'question': a 'string' feature. * 'id': a 'string' feature. * 'answers': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. * 'followup': a 'string' feature. * 'yesno': a 'string' feature. * 'orig\_answer': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. #### movies * 'title': a 'string' feature. * 'background': a 'string' feature. * 'context': a 'string' feature. * 'question': a 'string' feature. * 'id': a 'string' feature. * 'answers': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. * 'followup': a 'string' feature. * 'yesno': a 'string' feature. * 'orig\_answer': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. #### travel * 'title': a 'string' feature. 
* 'background': a 'string' feature. * 'context': a 'string' feature. * 'question': a 'string' feature. * 'id': a 'string' feature. * 'answers': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. * 'followup': a 'string' feature. * 'yesno': a 'string' feature. * 'orig\_answer': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. ### Data Splits #### cooking #### movies #### travel Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @mariamabarham, @thomwolf, @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nDoQA is a dataset for accessing Domain Specific FAQs via conversational QA that contains 2,437 information-seeking question/answer dialogues\n(10,917 questions in total) on three different domains: cooking, travel and movies. Note that we include in the generic concept of FAQs also\nCommunity Question Answering sites, as well as corporate information in intranets which is maintained in textual form similar to FAQs, often\nreferred to as internal “knowledge bases”.\n\n\nThese dialogues are created by crowd workers that play the following two roles: the user who asks questions about a given topic posted in Stack\nExchange (URL and the domain expert who replies to the questions by selecting a short span of text from the long textual\nreply in the original post. The expert can rephrase the selected span, in order to make it look more natural. The dataset covers unanswerable\nquestions and some relevant dialogue acts.\n\n\nDoQA enables the development and evaluation of conversational QA systems that help users access the knowledge buried in domain specific FAQs.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### cooking\n\n\n* Size of downloaded dataset files: 4.19 MB\n* Size of the generated dataset: 11.31 MB\n* Total amount of disk used: 15.51 MB\n\n\nAn example of 'train' looks as follows.", "#### movies\n\n\n* Size of downloaded dataset files: 4.19 MB\n* Size of the generated dataset: 3.17 MB\n* Total amount of disk used: 7.36 MB\n\n\nAn example of 'test' looks as follows.", "#### travel\n\n\n* Size of downloaded dataset files: 4.19 MB\n* Size of the generated dataset: 3.22 MB\n* Total amount of disk used: 7.41 MB\n\n\nAn example of 'test' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### cooking\n\n\n* 'title': a 'string' feature.\n* 'background': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'id': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.\n* 'followup': a 'string' feature.\n* 'yesno': a 'string' feature.\n* 'orig\\_answer': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "#### movies\n\n\n* 'title': a 'string' feature.\n* 'background': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'id': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.\n* 'followup': a 'string' feature.\n* 'yesno': a 'string' feature.\n* 'orig\\_answer': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "#### travel\n\n\n* 'title': a 'string' feature.\n* 'background': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'id': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.\n* 'followup': a 'string' feature.\n* 'yesno': a 'string' feature.\n* 'orig\\_answer': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits", "#### cooking", "#### movies", "#### travel\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data 
Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @mariamabarham, @thomwolf, @lhoestq for adding this dataset." ]
[ "TAGS\n#language-English #arxiv-2005.01328 #region-us \n", "### Dataset Summary\n\n\nDoQA is a dataset for accessing Domain Specific FAQs via conversational QA that contains 2,437 information-seeking question/answer dialogues\n(10,917 questions in total) on three different domains: cooking, travel and movies. Note that we include in the generic concept of FAQs also\nCommunity Question Answering sites, as well as corporate information in intranets which is maintained in textual form similar to FAQs, often\nreferred to as internal “knowledge bases”.\n\n\nThese dialogues are created by crowd workers that play the following two roles: the user who asks questions about a given topic posted in Stack\nExchange (URL and the domain expert who replies to the questions by selecting a short span of text from the long textual\nreply in the original post. The expert can rephrase the selected span, in order to make it look more natural. The dataset covers unanswerable\nquestions and some relevant dialogue acts.\n\n\nDoQA enables the development and evaluation of conversational QA systems that help users access the knowledge buried in domain specific FAQs.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### cooking\n\n\n* Size of downloaded dataset files: 4.19 MB\n* Size of the generated dataset: 11.31 MB\n* Total amount of disk used: 15.51 MB\n\n\nAn example of 'train' looks as follows.", "#### movies\n\n\n* Size of downloaded dataset files: 4.19 MB\n* Size of the generated dataset: 3.17 MB\n* Total amount of disk used: 7.36 MB\n\n\nAn example of 'test' looks as follows.", "#### travel\n\n\n* Size of downloaded dataset files: 4.19 MB\n* Size of the generated dataset: 3.22 MB\n* Total amount of disk used: 7.41 MB\n\n\nAn example of 'test' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### cooking\n\n\n* 'title': a 'string' feature.\n* 'background': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'id': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.\n* 'followup': a 'string' feature.\n* 'yesno': a 'string' feature.\n* 'orig\\_answer': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "#### movies\n\n\n* 'title': a 'string' feature.\n* 'background': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'id': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.\n* 'followup': a 'string' feature.\n* 'yesno': a 'string' feature.\n* 'orig\\_answer': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "#### travel\n\n\n* 'title': a 'string' feature.\n* 'background': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'id': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.\n* 'followup': a 'string' feature.\n* 'yesno': a 'string' feature.\n* 'orig\\_answer': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits", "#### cooking", "#### movies", "#### travel\n\n\n\nDataset Creation\n----------------", "### Curation 
Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @mariamabarham, @thomwolf, @lhoestq for adding this dataset." ]
[ 19, 236, 10, 11, 6, 49, 48, 48, 17, 171, 171, 171, 5, 3, 3, 9, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 28 ]
[ "passage: TAGS\n#language-English #arxiv-2005.01328 #region-us \n### Dataset Summary\n\n\nDoQA is a dataset for accessing Domain Specific FAQs via conversational QA that contains 2,437 information-seeking question/answer dialogues\n(10,917 questions in total) on three different domains: cooking, travel and movies. Note that we include in the generic concept of FAQs also\nCommunity Question Answering sites, as well as corporate information in intranets which is maintained in textual form similar to FAQs, often\nreferred to as internal “knowledge bases”.\n\n\nThese dialogues are created by crowd workers that play the following two roles: the user who asks questions about a given topic posted in Stack\nExchange (URL and the domain expert who replies to the questions by selecting a short span of text from the long textual\nreply in the original post. The expert can rephrase the selected span, in order to make it look more natural. The dataset covers unanswerable\nquestions and some relevant dialogue acts.\n\n\nDoQA enables the development and evaluation of conversational QA systems that help users access the knowledge buried in domain specific FAQs.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### cooking\n\n\n* Size of downloaded dataset files: 4.19 MB\n* Size of the generated dataset: 11.31 MB\n* Total amount of disk used: 15.51 MB\n\n\nAn example of 'train' looks as follows.#### movies\n\n\n* Size of downloaded dataset files: 4.19 MB\n* Size of the generated dataset: 3.17 MB\n* Total amount of disk used: 7.36 MB\n\n\nAn example of 'test' looks as follows.#### travel\n\n\n* Size of downloaded dataset files: 4.19 MB\n* Size of the generated dataset: 3.22 MB\n* Total amount of disk used: 7.41 MB\n\n\nAn example of 'test' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits." ]
78b128b6aa3ac08913a19a3c71064bf23206bdf6
# Dataset Card for DREAM ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Add homepage URL here if available (unless it's a GitHub repository)]() - **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]() - **Paper:** [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]() - **Leaderboard:** [If the dataset supports an active leaderboard, add link here]() - **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]() ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
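The card itself is still a template, but the feature schema listed in the metadata further down (dialogue, question, choice, answer) is enough to sketch how a multiple-choice example would be read with the Hugging Face `datasets` library. This assumes the dataset id `dream` and the `plain_text` configuration named in that metadata.

```python
from datasets import load_dataset

dream = load_dataset("dream", "plain_text", split="validation")

row = dream[0]
print("\n".join(row["dialogue"]))   # the dialogue is a list of turns
print("Q:", row["question"])
for choice in row["choice"]:
    marker = "*" if choice == row["answer"] else "-"
    print(marker, choice)
```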
dream
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "paperswithcode_id": "dream", "pretty_name": "DREAM", "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "dialogue_id", "dtype": "string"}, {"name": "dialogue", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "choice", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "config_name": "plain_text", "splits": [{"name": "train", "num_bytes": 4775235, "num_examples": 6116}, {"name": "validation", "num_bytes": 1539272, "num_examples": 2040}, {"name": "test", "num_bytes": 1556379, "num_examples": 2041}], "download_size": 5558190, "dataset_size": 7870886}}
2024-01-18T11:02:47+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us
# Dataset Card for DREAM ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]() - Repository: [If the dataset is hosted on github or has a github homepage, add URL here]() - Paper: [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]() - Leaderboard: [If the dataset supports an active leaderboard, add link here]() - Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]() ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @patil-suraj for adding this dataset.
[ "# Dataset Card for DREAM", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: [If the dataset is hosted on github or has a github homepage, add URL here]()\n- Paper: [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]()", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n", "# Dataset Card for DREAM", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: [If the dataset is hosted on github or has a github homepage, add URL here]()\n- Paper: [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]()", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ 94, 8, 120, 160, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n# Dataset Card for DREAM## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: [Add homepage URL here if available (unless it's a GitHub repository)]()\n- Repository: [If the dataset is hosted on github or has a github homepage, add URL here]()\n- Paper: [If the dataset was introduced by a paper or there was a paper written describing the dataset, add URL here (landing page for Arxiv paper preferred)]()\n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact: [If known, name and email of at least one person the reader can contact for questions about the dataset.]()### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset" ]
95cda593fae71b60b5b19f82de3fcf3298c1239c
# Dataset Card for "drop" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://allenai.org/data/drop - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** https://aclanthology.org/N19-1246/ - **Paper:** https://arxiv.org/abs/1903.00161 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 8.30 MB - **Size of the generated dataset:** 110.91 MB - **Total amount of disk used:** 119.21 MB ### Dataset Summary DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. . DROP is a crowdsourced, adversarially-created, 96k-question benchmark, in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was necessary for prior datasets. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 8.30 MB - **Size of the generated dataset:** 110.91 MB - **Total amount of disk used:** 119.21 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "answers_spans": { "spans": ["Chaz Schilens"] }, "passage": "\" Hoping to rebound from their loss to the Patriots, the Raiders stayed at home for a Week 16 duel with the Houston Texans. Oak...", "question": "Who scored the first touchdown of the game?" } ``` ### Data Fields The data fields are the same among all splits. #### default - `passage`: a `string` feature. - `question`: a `string` feature. - `answers_spans`: a dictionary feature containing: - `spans`: a `string` feature. 
### Data Splits | name |train|validation| |-------|----:|---------:| |default|77409| 9536| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{Dua2019DROP, author={Dheeru Dua and Yizhong Wang and Pradeep Dasigi and Gabriel Stanovsky and Sameer Singh and Matt Gardner}, title={ {DROP}: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs}, booktitle={Proc. of NAACL}, year={2019} } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
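A minimal usage sketch for the fields described above, assuming the dataset id `drop` on the Hugging Face Hub and the split names from the Data Splits table; note that a question may have several gold answer spans.

```python
from datasets import load_dataset

drop = load_dataset("drop", split="validation")

row = drop[0]
print("Q:", row["question"])
print(row["passage"][:200], "...")
# `answers_spans` is a dict of aligned lists (one entry per gold span).
print("gold spans:", row["answers_spans"]["spans"])
```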
drop
[ "task_categories:question-answering", "task_categories:text2text-generation", "task_ids:extractive-qa", "task_ids:abstractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "arxiv:1903.00161", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text2text-generation"], "task_ids": ["extractive-qa", "abstractive-qa"], "paperswithcode_id": "drop", "pretty_name": "DROP", "dataset_info": {"features": [{"name": "section_id", "dtype": "string"}, {"name": "query_id", "dtype": "string"}, {"name": "passage", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers_spans", "sequence": [{"name": "spans", "dtype": "string"}, {"name": "types", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 105572506, "num_examples": 77400}, {"name": "validation", "num_bytes": 11737755, "num_examples": 9535}], "download_size": 11538387, "dataset_size": 117310261}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-17T08:15:43+00:00
[ "1903.00161" ]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-text2text-generation #task_ids-extractive-qa #task_ids-abstractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #arxiv-1903.00161 #region-us
Dataset Card for "drop" ======================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: URL * Paper: URL * Point of Contact: * Size of downloaded dataset files: 8.30 MB * Size of the generated dataset: 110.91 MB * Total amount of disk used: 119.21 MB ### Dataset Summary DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. . DROP is a crowdsourced, adversarially-created, 96k-question benchmark, in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was necessary for prior datasets. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 8.30 MB * Size of the generated dataset: 110.91 MB * Total amount of disk used: 119.21 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'passage': a 'string' feature. * 'question': a 'string' feature. * 'answers\_spans': a dictionary feature containing: + 'spans': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @patrickvonplaten, @thomwolf, @mariamabarham, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nDROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs.\n. DROP is a crowdsourced, adversarially-created, 96k-question benchmark, in which a system must resolve references in a\nquestion, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or\nsorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was\nnecessary for prior datasets.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 8.30 MB\n* Size of the generated dataset: 110.91 MB\n* Total amount of disk used: 119.21 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'passage': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers\\_spans': a dictionary feature containing:\n\t+ 'spans': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf, @mariamabarham, @lewtun for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_categories-text2text-generation #task_ids-extractive-qa #task_ids-abstractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #arxiv-1903.00161 #region-us \n", "### Dataset Summary\n\n\nDROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs.\n. DROP is a crowdsourced, adversarially-created, 96k-question benchmark, in which a system must resolve references in a\nquestion, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or\nsorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was\nnecessary for prior datasets.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 8.30 MB\n* Size of the generated dataset: 110.91 MB\n* Total amount of disk used: 119.21 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'passage': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers\\_spans': a dictionary feature containing:\n\t+ 'spans': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf, @mariamabarham, @lewtun for adding this dataset." ]
[ 128, 117, 10, 11, 6, 51, 17, 57, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 34 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-text2text-generation #task_ids-extractive-qa #task_ids-abstractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #arxiv-1903.00161 #region-us \n### Dataset Summary\n\n\nDROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs.\n. DROP is a crowdsourced, adversarially-created, 96k-question benchmark, in which a system must resolve references in a\nquestion, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or\nsorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was\nnecessary for prior datasets.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 8.30 MB\n* Size of the generated dataset: 110.91 MB\n* Total amount of disk used: 119.21 MB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'passage': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers\\_spans': a dictionary feature containing:\n\t+ 'spans': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------" ]
2de12436b8945030c283bf4af83925d60efe10c6
# Dataset Card for duorc ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [DuoRC](https://duorc.github.io/) - **Repository:** [GitHub](https://github.com/duorc/duorc) - **Paper:** [arXiv](https://arxiv.org/abs/1804.07927) - **Leaderboard:** [DuoRC Leaderboard](https://duorc.github.io/#leaderboard) - **Point of Contact:** [Needs More Information] ### Dataset Summary The DuoRC dataset is an English language dataset of questions and answers gathered from crowdsourced AMT workers on Wikipedia and IMDb movie plots. The workers were given freedom to pick answer from the plots or synthesize their own answers. It contains two sub-datasets - SelfRC and ParaphraseRC. SelfRC dataset is built on Wikipedia movie plots solely. ParaphraseRC has questions written from Wikipedia movie plots and the answers are given based on corresponding IMDb movie plots. ### Supported Tasks and Leaderboards - `abstractive-qa` : The dataset can be used to train a model for Abstractive Question Answering. An abstractive question answering model is presented with a passage and a question and is expected to generate a multi-word answer. The model performance is measured by exact-match and F1 score, similar to [SQuAD V1.1](https://huggingface.co/metrics/squad) or [SQuAD V2](https://huggingface.co/metrics/squad_v2). A [BART-based model](https://huggingface.co/yjernite/bart_eli5) with a [dense retriever](https://huggingface.co/yjernite/retribert-base-uncased) may be used for this task. - `extractive-qa`: The dataset can be used to train a model for Extractive Question Answering. An extractive question answering model is presented with a passage and a question and is expected to predict the start and end of the answer span in the passage. The model performance is measured by exact-match and F1 score, similar to [SQuAD V1.1](https://huggingface.co/metrics/squad) or [SQuAD V2](https://huggingface.co/metrics/squad_v2). [BertForQuestionAnswering](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering) or any other similar model may be used for this task. ### Languages The text in the dataset is in English, as spoken by Wikipedia writers for movie plots. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances ``` {'answers': ['They arrived by train.'], 'no_answer': False, 'plot': "200 years in the future, Mars has been colonized by a high-tech company.\nMelanie Ballard (Natasha Henstridge) arrives by train to a Mars mining camp which has cut all communication links with the company headquarters. 
She's not alone, as she is with a group of fellow police officers. They find the mining camp deserted except for a person in the prison, Desolation Williams (Ice Cube), who seems to laugh about them because they are all going to die. They were supposed to take Desolation to headquarters, but decide to explore first to find out what happened.They find a man inside an encapsulated mining car, who tells them not to open it. However, they do and he tries to kill them. One of the cops witnesses strange men with deep scarred and heavily tattooed faces killing the remaining survivors. The cops realise they need to leave the place fast.Desolation explains that the miners opened a kind of Martian construction in the soil which unleashed red dust. Those who breathed that dust became violent psychopaths who started to build weapons and kill the uninfected. They changed genetically, becoming distorted but much stronger.The cops and Desolation leave the prison with difficulty, and devise a plan to kill all the genetically modified ex-miners on the way out. However, the plan goes awry, and only Melanie and Desolation reach headquarters alive. Melanie realises that her bosses won't ever believe her. However, the red dust eventually arrives to headquarters, and Melanie and Desolation need to fight once again.", 'plot_id': '/m/03vyhn', 'question': 'How did the police arrive at the Mars mining camp?', 'question_id': 'b440de7d-9c3f-841c-eaec-a14bdff950d1', 'title': 'Ghosts of Mars'} ``` ### Data Fields - `plot_id`: a `string` feature containing the movie plot ID. - `plot`: a `string` feature containing the movie plot text. - `title`: a `string` feature containing the movie title. - `question_id`: a `string` feature containing the question ID. - `question`: a `string` feature containing the question text. - `answers`: a `list` of `string` features containing the list of answers. - `no_answer`: a `bool` feature informing whether the question has no answer or not. ### Data Splits The data is split into training, dev, and test sets in such a way that the resulting sets contain 70%, 15%, and 15% of the total QA pairs and no QA pairs for any movie seen in train are included in the test set. The final split sizes are as follows: | Name | Train | Dev | Test | |---|---:|---:|---:| | SelfRC | 60721 | 12961 | 12599 | | ParaphraseRC | 69524 | 15591 | 15857 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data Wikipedia and IMDb movie plots #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process For SelfRC, the annotators were allowed to mark an answer span in the plot or synthesize their own answers after reading Wikipedia movie plots. For ParaphraseRC, questions from the Wikipedia movie plots from SelfRC were used and the annotators were asked to answer based on IMDb movie plots. #### Who are the annotators? Amazon Mechanical Turk Workers ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The dataset was initially created by Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, and Karthik Sankaranarayanan in a collaborative effort between IIT Madras and IBM Research. 
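For reference, the two sub-datasets described above are exposed as separate configurations. A minimal loading sketch, assuming the Hugging Face `datasets` library and the `ibm/duorc` identifier listed in this repository's metadata:

```python
from datasets import load_dataset

# Each sub-dataset is a separate configuration and must be named explicitly.
self_rc = load_dataset("ibm/duorc", "SelfRC")
paraphrase_rc = load_dataset("ibm/duorc", "ParaphraseRC")

sample = paraphrase_rc["train"][0]
print(sample["title"], "-", sample["question"])
print(sample["answers"])    # list of answer strings written by the annotators
print(sample["no_answer"])  # True when the question was marked as unanswerable
```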
### Licensing Information [MIT License](https://github.com/duorc/duorc/blob/master/LICENSE) ### Citation Information ``` @inproceedings{DuoRC, author = { Amrita Saha and Rahul Aralikatte and Mitesh M. Khapra and Karthik Sankaranarayanan}, title = {{DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension}}, booktitle = {Meeting of the Association for Computational Linguistics (ACL)}, year = {2018} } ``` ### Contributions Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset.
ibm/duorc
[ "task_categories:question-answering", "task_categories:text2text-generation", "task_ids:abstractive-qa", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "arxiv:1804.07927", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text2text-generation"], "task_ids": ["abstractive-qa", "extractive-qa"], "paperswithcode_id": "duorc", "pretty_name": "DuoRC", "config_names": ["ParaphraseRC", "SelfRC"], "dataset_info": [{"config_name": "ParaphraseRC", "features": [{"name": "plot_id", "dtype": "string"}, {"name": "plot", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "no_answer", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 496682909, "num_examples": 69524}, {"name": "validation", "num_bytes": 106510489, "num_examples": 15591}, {"name": "test", "num_bytes": 115215760, "num_examples": 15857}], "download_size": 37709127, "dataset_size": 718409158}, {"config_name": "SelfRC", "features": [{"name": "plot_id", "dtype": "string"}, {"name": "plot", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "no_answer", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 239852729, "num_examples": 60721}, {"name": "validation", "num_bytes": 51662519, "num_examples": 12961}, {"name": "test", "num_bytes": 49142710, "num_examples": 12559}], "download_size": 21001846, "dataset_size": 340657958}], "configs": [{"config_name": "ParaphraseRC", "data_files": [{"split": "train", "path": "ParaphraseRC/train-*"}, {"split": "validation", "path": "ParaphraseRC/validation-*"}, {"split": "test", "path": "ParaphraseRC/test-*"}]}, {"config_name": "SelfRC", "data_files": [{"split": "train", "path": "SelfRC/train-*"}, {"split": "validation", "path": "SelfRC/validation-*"}, {"split": "test", "path": "SelfRC/test-*"}]}]}
2024-01-04T10:17:55+00:00
[ "1804.07927" ]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #arxiv-1804.07927 #region-us
# Dataset Card for duorc ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: DuoRC - Repository: GitHub - Paper: arXiv - Leaderboard: DuoRC Leaderboard - Point of Contact: ### Dataset Summary The DuoRC dataset is an English language dataset of questions and answers gathered from crowdsourced AMT workers on Wikipedia and IMDb movie plots. The workers were given freedom to pick answer from the plots or synthesize their own answers. It contains two sub-datasets - SelfRC and ParaphraseRC. SelfRC dataset is built on Wikipedia movie plots solely. ParaphraseRC has questions written from Wikipedia movie plots and the answers are given based on corresponding IMDb movie plots. ### Supported Tasks and Leaderboards - 'abstractive-qa' : The dataset can be used to train a model for Abstractive Question Answering. An abstractive question answering model is presented with a passage and a question and is expected to generate a multi-word answer. The model performance is measured by exact-match and F1 score, similar to SQuAD V1.1 or SQuAD V2. A BART-based model with a dense retriever may be used for this task. - 'extractive-qa': The dataset can be used to train a model for Extractive Question Answering. An extractive question answering model is presented with a passage and a question and is expected to predict the start and end of the answer span in the passage. The model performance is measured by exact-match and F1 score, similar to SQuAD V1.1 or SQuAD V2. BertForQuestionAnswering or any other similar model may be used for this task. ### Languages The text in the dataset is in English, as spoken by Wikipedia writers for movie plots. The associated BCP-47 code is 'en'. ## Dataset Structure ### Data Instances ### Data Fields - 'plot_id': a 'string' feature containing the movie plot ID. - 'plot': a 'string' feature containing the movie plot text. - 'title': a 'string' feature containing the movie title. - 'question_id': a 'string' feature containing the question ID. - 'question': a 'string' feature containing the question text. - 'answers': a 'list' of 'string' features containing list of answers. - 'no_answer': a 'bool' feature informing whether the question has no answer or not. ### Data Splits The data is split into a training, dev and test set in such a way that the resulting sets contain 70%, 15%, and 15% of the total QA pairs and no QA pairs for any movie seen in train are included in the test set. The final split sizes are as follows: Name Train Dec Test SelfRC 60721 12961 12599 ParaphraseRC 69524 15591 15857 ## Dataset Creation ### Curation Rationale ### Source Data Wikipedia and IMDb movie plots #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process For SelfRC, the annotators were allowed to mark an answer span in the plot or synthesize their own answers after reading Wikipedia movie plots. For ParaphraseRC, questions from the Wikipedia movie plots from SelfRC were used and the annotators were asked to answer based on IMDb movie plots. 
#### Who are the annotators? Amazon Mechanical Turk Workers ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The dataset was intially created by Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, and Karthik Sankaranarayanan in a collaborated work between IIT Madras and IBM Research. ### Licensing Information MIT License ### Contributions Thanks to @gchhablani for adding this dataset.
[ "# Dataset Card for duorc", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: DuoRC\n- Repository: GitHub\n- Paper: arXiv\n- Leaderboard: DuoRC Leaderboard\n- Point of Contact:", "### Dataset Summary\n\nThe DuoRC dataset is an English language dataset of questions and answers gathered from crowdsourced AMT workers on Wikipedia and IMDb movie plots. The workers were given freedom to pick answer from the plots or synthesize their own answers. It contains two sub-datasets - SelfRC and ParaphraseRC. SelfRC dataset is built on Wikipedia movie plots solely. ParaphraseRC has questions written from Wikipedia movie plots and the answers are given based on corresponding IMDb movie plots.", "### Supported Tasks and Leaderboards\n\n- 'abstractive-qa' : The dataset can be used to train a model for Abstractive Question Answering. An abstractive question answering model is presented with a passage and a question and is expected to generate a multi-word answer. The model performance is measured by exact-match and F1 score, similar to SQuAD V1.1 or SQuAD V2. A BART-based model with a dense retriever may be used for this task.\n\n- 'extractive-qa': The dataset can be used to train a model for Extractive Question Answering. An extractive question answering model is presented with a passage and a question and is expected to predict the start and end of the answer span in the passage. The model performance is measured by exact-match and F1 score, similar to SQuAD V1.1 or SQuAD V2. BertForQuestionAnswering or any other similar model may be used for this task.", "### Languages\n\nThe text in the dataset is in English, as spoken by Wikipedia writers for movie plots. The associated BCP-47 code is 'en'.", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'plot_id': a 'string' feature containing the movie plot ID.\n- 'plot': a 'string' feature containing the movie plot text.\n- 'title': a 'string' feature containing the movie title.\n- 'question_id': a 'string' feature containing the question ID.\n- 'question': a 'string' feature containing the question text.\n- 'answers': a 'list' of 'string' features containing list of answers.\n- 'no_answer': a 'bool' feature informing whether the question has no answer or not.", "### Data Splits\n\nThe data is split into a training, dev and test set in such a way that the resulting sets contain 70%, 15%, and 15% of the total QA pairs and no QA pairs for any movie seen in train are included in the test set. The final split sizes are as follows:\n\nName Train Dec Test\nSelfRC 60721 12961 12599\nParaphraseRC 69524 15591 15857", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\nWikipedia and IMDb movie plots", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\nFor SelfRC, the annotators were allowed to mark an answer span in the plot or synthesize their own answers after reading Wikipedia movie plots. 
\nFor ParaphraseRC, questions from the Wikipedia movie plots from SelfRC were used and the annotators were asked to answer based on IMDb movie plots.", "#### Who are the annotators?\n\nAmazon Mechanical Turk Workers", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset was intially created by Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, and Karthik Sankaranarayanan in a collaborated work between IIT Madras and IBM Research.", "### Licensing Information\n\nMIT License", "### Contributions\n\nThanks to @gchhablani for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #arxiv-1804.07927 #region-us \n", "# Dataset Card for duorc", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: DuoRC\n- Repository: GitHub\n- Paper: arXiv\n- Leaderboard: DuoRC Leaderboard\n- Point of Contact:", "### Dataset Summary\n\nThe DuoRC dataset is an English language dataset of questions and answers gathered from crowdsourced AMT workers on Wikipedia and IMDb movie plots. The workers were given freedom to pick answer from the plots or synthesize their own answers. It contains two sub-datasets - SelfRC and ParaphraseRC. SelfRC dataset is built on Wikipedia movie plots solely. ParaphraseRC has questions written from Wikipedia movie plots and the answers are given based on corresponding IMDb movie plots.", "### Supported Tasks and Leaderboards\n\n- 'abstractive-qa' : The dataset can be used to train a model for Abstractive Question Answering. An abstractive question answering model is presented with a passage and a question and is expected to generate a multi-word answer. The model performance is measured by exact-match and F1 score, similar to SQuAD V1.1 or SQuAD V2. A BART-based model with a dense retriever may be used for this task.\n\n- 'extractive-qa': The dataset can be used to train a model for Extractive Question Answering. An extractive question answering model is presented with a passage and a question and is expected to predict the start and end of the answer span in the passage. The model performance is measured by exact-match and F1 score, similar to SQuAD V1.1 or SQuAD V2. BertForQuestionAnswering or any other similar model may be used for this task.", "### Languages\n\nThe text in the dataset is in English, as spoken by Wikipedia writers for movie plots. The associated BCP-47 code is 'en'.", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'plot_id': a 'string' feature containing the movie plot ID.\n- 'plot': a 'string' feature containing the movie plot text.\n- 'title': a 'string' feature containing the movie title.\n- 'question_id': a 'string' feature containing the question ID.\n- 'question': a 'string' feature containing the question text.\n- 'answers': a 'list' of 'string' features containing list of answers.\n- 'no_answer': a 'bool' feature informing whether the question has no answer or not.", "### Data Splits\n\nThe data is split into a training, dev and test set in such a way that the resulting sets contain 70%, 15%, and 15% of the total QA pairs and no QA pairs for any movie seen in train are included in the test set. 
The final split sizes are as follows:\n\nName Train Dec Test\nSelfRC 60721 12961 12599\nParaphraseRC 69524 15591 15857", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\nWikipedia and IMDb movie plots", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\nFor SelfRC, the annotators were allowed to mark an answer span in the plot or synthesize their own answers after reading Wikipedia movie plots. \nFor ParaphraseRC, questions from the Wikipedia movie plots from SelfRC were used and the annotators were asked to answer based on IMDb movie plots.", "#### Who are the annotators?\n\nAmazon Mechanical Turk Workers", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset was intially created by Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, and Karthik Sankaranarayanan in a collaborated work between IIT Madras and IBM Research.", "### Licensing Information\n\nMIT License", "### Contributions\n\nThanks to @gchhablani for adding this dataset." ]
[ 135, 7, 120, 36, 124, 218, 38, 6, 6, 139, 94, 5, 7, 12, 10, 10, 5, 74, 15, 8, 8, 7, 8, 7, 5, 51, 8, 18 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #arxiv-1804.07927 #region-us \n# Dataset Card for duorc## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: DuoRC\n- Repository: GitHub\n- Paper: arXiv\n- Leaderboard: DuoRC Leaderboard\n- Point of Contact:### Dataset Summary\n\nThe DuoRC dataset is an English language dataset of questions and answers gathered from crowdsourced AMT workers on Wikipedia and IMDb movie plots. The workers were given freedom to pick answer from the plots or synthesize their own answers. It contains two sub-datasets - SelfRC and ParaphraseRC. SelfRC dataset is built on Wikipedia movie plots solely. ParaphraseRC has questions written from Wikipedia movie plots and the answers are given based on corresponding IMDb movie plots.", "passage: ### Supported Tasks and Leaderboards\n\n- 'abstractive-qa' : The dataset can be used to train a model for Abstractive Question Answering. An abstractive question answering model is presented with a passage and a question and is expected to generate a multi-word answer. The model performance is measured by exact-match and F1 score, similar to SQuAD V1.1 or SQuAD V2. A BART-based model with a dense retriever may be used for this task.\n\n- 'extractive-qa': The dataset can be used to train a model for Extractive Question Answering. An extractive question answering model is presented with a passage and a question and is expected to predict the start and end of the answer span in the passage. The model performance is measured by exact-match and F1 score, similar to SQuAD V1.1 or SQuAD V2. BertForQuestionAnswering or any other similar model may be used for this task.### Languages\n\nThe text in the dataset is in English, as spoken by Wikipedia writers for movie plots. The associated BCP-47 code is 'en'.## Dataset Structure### Data Instances### Data Fields\n\n- 'plot_id': a 'string' feature containing the movie plot ID.\n- 'plot': a 'string' feature containing the movie plot text.\n- 'title': a 'string' feature containing the movie title.\n- 'question_id': a 'string' feature containing the question ID.\n- 'question': a 'string' feature containing the question text.\n- 'answers': a 'list' of 'string' features containing list of answers.\n- 'no_answer': a 'bool' feature informing whether the question has no answer or not.### Data Splits\n\nThe data is split into a training, dev and test set in such a way that the resulting sets contain 70%, 15%, and 15% of the total QA pairs and no QA pairs for any movie seen in train are included in the test set. 
The final split sizes are as follows:\n\nName Train Dec Test\nSelfRC 60721 12961 12599\nParaphraseRC 69524 15591 15857## Dataset Creation### Curation Rationale### Source Data\n\nWikipedia and IMDb movie plots#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process\n\nFor SelfRC, the annotators were allowed to mark an answer span in the plot or synthesize their own answers after reading Wikipedia movie plots. \nFor ParaphraseRC, questions from the Wikipedia movie plots from SelfRC were used and the annotators were asked to answer based on IMDb movie plots.#### Who are the annotators?\n\nAmazon Mechanical Turk Workers### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information" ]
8b7bc6230ebd78f04aa3661acb912f4567f21c76
# Dataset Card for Dutch Social Media Collection ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Dutch Social Media Collection](http://datasets.coronawhy.org/dataset.xhtml?persistentId=doi:10.5072/FK2/MTPTL7) - **Repository:** - **Paper:** *(in-progress)* https://doi.org/10.5072/FK2/MTPTL7 - **Leaderboard:** - **Point of Contact:** [Aakash Gupta](mailto:aakashg80@gmail.com) ### Dataset Summary The dataset contains 10 files with around 271,342 tweets. The tweets are filtered via the official Twitter API to contain tweets in Dutch language or by users who have specified their location information within Netherlands geographical boundaries. Using natural language processing we have classified the tweets for their HISCO codes. If the user has provided their location within Dutch boundaries, we have also classified them to their respective provinces The objective of this dataset is to make research data available publicly in a FAIR (Findable, Accessible, Interoperable, Reusable) way. Twitter's Terms of Service Licensed under Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) (2020-10-27) ### Supported Tasks and Leaderboards `sentiment analysis`, `multi-label classification`, `entity-extraction` ### Languages The text is primarily in Dutch with some tweets in English and other languages. The BCP 47 code is `nl` and `en` ## Dataset Structure ### Data Instances An example of the data field will be: ``` { "full_text": "@pflegearzt @Friedelkorn @LAguja44 Pardon, wollte eigentlich das zitieren: \nhttps://t.co/ejO7bIMyj8\nMeine mentions sind inzw komplett undurchschaubar weil da Leute ihren supporterclub zwecks Likes zusammengerufen haben.", "text_translation": "@pflegearzt @Friedelkorn @ LAguja44 Pardon wollte zitieren eigentlich das:\nhttps://t.co/ejO7bIMyj8\nMeine mentions inzw sind komplett undurchschaubar weil da Leute ihren supporter club Zwecks Likes zusammengerufen haben.", "created_at": 1583756789000, "screen_name": "TheoRettich", "description": "I ❤️science, therefore a Commie. ☭ FALGSC: Part of a conspiracy which wants to achieve world domination. Tankie-Cornucopian. Ecology is a myth", "desc_translation": "I ❤️science, Therefore a Commie. ☭ FALGSC: Part of a conspiracy How many followers wants to Achieve World Domination. Tankie-Cornucopian. 
Ecology is a myth", "weekofyear": 11, "weekday": 0, "day": 9, "month": 3, "year": 2020, "location": "Netherlands", "point_info": "Nederland", "point": "(52.5001698, 5.7480821, 0.0)", "latitude": 52.5001698, "longitude": 5.7480821, "altitude": 0, "province": "Flevoland", "hisco_standard": null, "hisco_code": null, "industry": false, "sentiment_pattern": 0, "subjective_pattern": 0 } ``` ### Data Fields | Column Name | Description | | --- | --- | | full_text | Original text in the tweet | | text_translation | English translation of the full text | | created_at | Date of tweet creation | | screen_name | username of the tweet author | | description | description as provided in the users bio | | desc_translation | English translation of user's bio/ description | | location | Location information as provided in the user's bio | | weekofyear | week of the year | | weekday | Day of the week information; Monday=0....Sunday = 6| | month | Month of tweet creation | | year | year of tweet creation | | day | day of tweet creation | | point_info | point information from location columnd | | point | tuple giving lat, lon & altitude information | | latitude | geo-referencing information derived from location data | | longitude | geo-referencing information derived from location data | | altitude | geo-referencing information derived from location data| | province | Province given location data of user | | hisco_standard | HISCO standard key word; if available in tweet | | hisco_code| HISCO standard code as derived from `hisco_standard`| | industry | Whether the tweet talks about industry `(True/False)` | | sentiment_score | Sentiment score -1.0 to 1.0 | | subjectivity_score | Subjectivity scores 0 to 1 | Missing values are replaced with empty strings or -1 (-100 for missing sentiment_score). ### Data Splits Data has been split into Train: 60%, Validation: 20% and Test: 20% ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The tweets were hydrated using Twitter's API and then filtered for those which were in Dutch language and/or for users who had mentioned that they were from within Netherlands geographical borders. #### Who are the source language producers? The language producers are twitter users who have identified their location within the geographical boundaries of Netherland. Or those who have tweeted in the dutch language! ### Annotations Using Natural language processing, we have classified the tweets on industry and for HSN HISCO codes. Depending on the user's location, their provincial information is also added. Please check the file/column for detailed information. The tweets are also classified on the sentiment & subjectivity scores. Sentiment scores are between -1 to +1 Subjectivity scores are between 0 to 1 #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information As of writing this data card no anonymization has been carried out on the tweets or user data. As such, if the twitter user has shared any personal & sensitive information, then it may be available in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. 
## Additional Information ### Dataset Curators [Aakash Gupta](mailto:aakashg80@gmail.com) *Th!nkEvolve Consulting* and Researcher at CoronaWhy ### Licensing Information CC BY-NC 4.0 ### Citation Information @data{FK2/MTPTL7_2020, author = {Gupta, Aakash}, publisher = {COVID-19 Data Hub}, title = {{Dutch social media collection}}, year = {2020}, version = {DRAFT VERSION}, doi = {10.5072/FK2/MTPTL7}, url = {https://doi.org/10.5072/FK2/MTPTL7} } ### Contributions Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset.
dutch_social
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:multi-label-classification", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "language:nl", "license:cc-by-nc-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": ["en", "nl"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "multi-label-classification"], "pretty_name": "Dutch Social Media Collection", "dataset_info": {"features": [{"name": "full_text", "dtype": "string"}, {"name": "text_translation", "dtype": "string"}, {"name": "screen_name", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "desc_translation", "dtype": "string"}, {"name": "location", "dtype": "string"}, {"name": "weekofyear", "dtype": "int64"}, {"name": "weekday", "dtype": "int64"}, {"name": "month", "dtype": "int64"}, {"name": "year", "dtype": "int64"}, {"name": "day", "dtype": "int64"}, {"name": "point_info", "dtype": "string"}, {"name": "point", "dtype": "string"}, {"name": "latitude", "dtype": "float64"}, {"name": "longitude", "dtype": "float64"}, {"name": "altitude", "dtype": "float64"}, {"name": "province", "dtype": "string"}, {"name": "hisco_standard", "dtype": "string"}, {"name": "hisco_code", "dtype": "string"}, {"name": "industry", "dtype": "bool_"}, {"name": "sentiment_pattern", "dtype": "float64"}, {"name": "subjective_pattern", "dtype": "float64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "neu", "2": "pos"}}}}], "config_name": "dutch_social", "splits": [{"name": "train", "num_bytes": 105569586, "num_examples": 162805}, {"name": "test", "num_bytes": 35185351, "num_examples": 54268}, {"name": "validation", "num_bytes": 34334756, "num_examples": 54269}], "download_size": 68740666, "dataset_size": 175089693}}
2024-01-18T11:02:48+00:00
[]
[ "en", "nl" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #task_ids-multi-label-classification #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-English #language-Dutch #license-cc-by-nc-4.0 #region-us
Dataset Card for Dutch Social Media Collection ============================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Dutch Social Media Collection * Repository: * Paper: *(in-progress)* URL * Leaderboard: * Point of Contact: Aakash Gupta ### Dataset Summary The dataset contains 10 files with around 271,342 tweets. The tweets are filtered via the official Twitter API to contain tweets in Dutch language or by users who have specified their location information within Netherlands geographical boundaries. Using natural language processing we have classified the tweets for their HISCO codes. If the user has provided their location within Dutch boundaries, we have also classified them to their respective provinces The objective of this dataset is to make research data available publicly in a FAIR (Findable, Accessible, Interoperable, Reusable) way. Twitter's Terms of Service Licensed under Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) (2020-10-27) ### Supported Tasks and Leaderboards 'sentiment analysis', 'multi-label classification', 'entity-extraction' ### Languages The text is primarily in Dutch with some tweets in English and other languages. The BCP 47 code is 'nl' and 'en' Dataset Structure ----------------- ### Data Instances An example of the data field will be: ### Data Fields Missing values are replaced with empty strings or -1 (-100 for missing sentiment\_score). ### Data Splits Data has been split into Train: 60%, Validation: 20% and Test: 20% Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The tweets were hydrated using Twitter's API and then filtered for those which were in Dutch language and/or for users who had mentioned that they were from within Netherlands geographical borders. #### Who are the source language producers? The language producers are twitter users who have identified their location within the geographical boundaries of Netherland. Or those who have tweeted in the dutch language! ### Annotations Using Natural language processing, we have classified the tweets on industry and for HSN HISCO codes. Depending on the user's location, their provincial information is also added. Please check the file/column for detailed information. The tweets are also classified on the sentiment & subjectivity scores. Sentiment scores are between -1 to +1 Subjectivity scores are between 0 to 1 #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information As of writing this data card no anonymization has been carried out on the tweets or user data. As such, if the twitter user has shared any personal & sensitive information, then it may be available in this dataset. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. 
Additional Information ---------------------- ### Dataset Curators Aakash Gupta *Th!nkEvolve Consulting* and Researcher at CoronaWhy ### Licensing Information CC BY-NC 4.0 @data{FK2/MTPTL7\_2020, author = {Gupta, Aakash}, publisher = {COVID-19 Data Hub}, title = {{Dutch social media collection}}, year = {2020}, version = {DRAFT VERSION}, doi = {10.5072/FK2/MTPTL7}, url = {URL } ### Contributions Thanks to @skyprince999 for adding this dataset.
[ "### Dataset Summary\n\n\nThe dataset contains 10 files with around 271,342 tweets. The tweets are filtered via the official Twitter API to contain tweets in Dutch language or by users who have specified their location information within Netherlands geographical boundaries. Using natural language processing we have classified the tweets for their HISCO codes. If the user has provided their location within Dutch boundaries, we have also classified them to their respective provinces The objective of this dataset is to make research data available publicly in a FAIR (Findable, Accessible, Interoperable, Reusable) way. Twitter's Terms of Service Licensed under Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) (2020-10-27)", "### Supported Tasks and Leaderboards\n\n\n'sentiment analysis', 'multi-label classification', 'entity-extraction'", "### Languages\n\n\nThe text is primarily in Dutch with some tweets in English and other languages. The BCP 47 code is 'nl' and 'en'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of the data field will be:", "### Data Fields\n\n\n\nMissing values are replaced with empty strings or -1 (-100 for missing sentiment\\_score).", "### Data Splits\n\n\nData has been split into Train: 60%, Validation: 20% and Test: 20%\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe tweets were hydrated using Twitter's API and then filtered for those which were in Dutch language and/or for users who had mentioned that they were from within Netherlands geographical borders.", "#### Who are the source language producers?\n\n\nThe language producers are twitter users who have identified their location within the geographical boundaries of Netherland. Or those who have tweeted in the dutch language!", "### Annotations\n\n\nUsing Natural language processing, we have classified the tweets on industry and for HSN HISCO codes.\nDepending on the user's location, their provincial information is also added. Please check the file/column for detailed information.\n\n\nThe tweets are also classified on the sentiment & subjectivity scores.\nSentiment scores are between -1 to +1\nSubjectivity scores are between 0 to 1", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nAs of writing this data card no anonymization has been carried out on the tweets or user data. As such, if the twitter user has shared any personal & sensitive information, then it may be available in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nDataset provided for research purposes only. Please check dataset license for additional information.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nAakash Gupta\n*Th!nkEvolve Consulting* and Researcher at CoronaWhy", "### Licensing Information\n\n\nCC BY-NC 4.0\n\n\n@data{FK2/MTPTL7\\_2020,\nauthor = {Gupta, Aakash},\npublisher = {COVID-19 Data Hub},\ntitle = {{Dutch social media collection}},\nyear = {2020},\nversion = {DRAFT VERSION},\ndoi = {10.5072/FK2/MTPTL7},\nurl = {URL\n}", "### Contributions\n\n\nThanks to @skyprince999 for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-multi-label-classification #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-English #language-Dutch #license-cc-by-nc-4.0 #region-us \n", "### Dataset Summary\n\n\nThe dataset contains 10 files with around 271,342 tweets. The tweets are filtered via the official Twitter API to contain tweets in Dutch language or by users who have specified their location information within Netherlands geographical boundaries. Using natural language processing we have classified the tweets for their HISCO codes. If the user has provided their location within Dutch boundaries, we have also classified them to their respective provinces The objective of this dataset is to make research data available publicly in a FAIR (Findable, Accessible, Interoperable, Reusable) way. Twitter's Terms of Service Licensed under Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) (2020-10-27)", "### Supported Tasks and Leaderboards\n\n\n'sentiment analysis', 'multi-label classification', 'entity-extraction'", "### Languages\n\n\nThe text is primarily in Dutch with some tweets in English and other languages. The BCP 47 code is 'nl' and 'en'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of the data field will be:", "### Data Fields\n\n\n\nMissing values are replaced with empty strings or -1 (-100 for missing sentiment\\_score).", "### Data Splits\n\n\nData has been split into Train: 60%, Validation: 20% and Test: 20%\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe tweets were hydrated using Twitter's API and then filtered for those which were in Dutch language and/or for users who had mentioned that they were from within Netherlands geographical borders.", "#### Who are the source language producers?\n\n\nThe language producers are twitter users who have identified their location within the geographical boundaries of Netherland. Or those who have tweeted in the dutch language!", "### Annotations\n\n\nUsing Natural language processing, we have classified the tweets on industry and for HSN HISCO codes.\nDepending on the user's location, their provincial information is also added. Please check the file/column for detailed information.\n\n\nThe tweets are also classified on the sentiment & subjectivity scores.\nSentiment scores are between -1 to +1\nSubjectivity scores are between 0 to 1", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nAs of writing this data card no anonymization has been carried out on the tweets or user data. As such, if the twitter user has shared any personal & sensitive information, then it may be available in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nDataset provided for research purposes only. 
Please check dataset license for additional information.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nAakash Gupta\n*Th!nkEvolve Consulting* and Researcher at CoronaWhy", "### Licensing Information\n\n\nCC BY-NC 4.0\n\n\n@data{FK2/MTPTL7\\_2020,\nauthor = {Gupta, Aakash},\npublisher = {COVID-19 Data Hub},\ntitle = {{Dutch social media collection}},\nyear = {2020},\nversion = {DRAFT VERSION},\ndoi = {10.5072/FK2/MTPTL7},\nurl = {URL\n}", "### Contributions\n\n\nThanks to @skyprince999 for adding this dataset." ]
[ 113, 164, 31, 42, 15, 27, 29, 7, 4, 52, 47, 94, 5, 9, 64, 7, 8, 32, 25, 94, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-multi-label-classification #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-English #language-Dutch #license-cc-by-nc-4.0 #region-us \n### Dataset Summary\n\n\nThe dataset contains 10 files with around 271,342 tweets. The tweets are filtered via the official Twitter API to contain tweets in Dutch language or by users who have specified their location information within Netherlands geographical boundaries. Using natural language processing we have classified the tweets for their HISCO codes. If the user has provided their location within Dutch boundaries, we have also classified them to their respective provinces The objective of this dataset is to make research data available publicly in a FAIR (Findable, Accessible, Interoperable, Reusable) way. Twitter's Terms of Service Licensed under Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) (2020-10-27)### Supported Tasks and Leaderboards\n\n\n'sentiment analysis', 'multi-label classification', 'entity-extraction'### Languages\n\n\nThe text is primarily in Dutch with some tweets in English and other languages. The BCP 47 code is 'nl' and 'en'\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of the data field will be:### Data Fields\n\n\n\nMissing values are replaced with empty strings or -1 (-100 for missing sentiment\\_score).### Data Splits\n\n\nData has been split into Train: 60%, Validation: 20% and Test: 20%\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization\n\n\nThe tweets were hydrated using Twitter's API and then filtered for those which were in Dutch language and/or for users who had mentioned that they were from within Netherlands geographical borders." ]
58819e0e580b70f780708640437373018a09d936
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://nlp.pwr.wroc.pl/en/tools-and-resources/resources/czy-wiesz-question-answering-dataset - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Did You Know (pol. Czy wiesz?) dataset consists of human-annotated question-answer pairs. The task is to predict if the answer is correct. We chose the negatives which have the largest token overlap with a question. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - q_id: question id - question: question sentence - answer: answer sentence - target: 1 if the answer is correct, 0 otherwise. Note that the test split doesn't have target values so -1 is used instead ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY-SA 3.0 ### Citation Information [More Information Needed] ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
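For readers who want to poke at the fields listed above, a minimal loading sketch might look like the following. This assumes the `datasets` library is installed and that the card's dataset id, `dyk`, resolves on the Hugging Face Hub; it is illustrative only, not part of the original card.

```python
# Minimal sketch: load the Did You Know question-answer pairs and inspect the
# fields described in the card (q_id, question, answer, target).
# Assumes `pip install datasets` and that the id "dyk" resolves on the Hub;
# newer library versions may also require trust_remote_code=True for
# script-based datasets.
from datasets import load_dataset

dataset = load_dataset("dyk")

example = dataset["train"][0]
print(example["q_id"], example["question"])
print(example["answer"])
print(example["target"])  # 1 = correct answer, 0 = incorrect (-1 in the test split)
```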
dyk
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:bsd-3-clause", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["pl"], "license": ["bsd-3-clause"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "pretty_name": "dyk", "dataset_info": {"features": [{"name": "q_id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "target", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 1388690, "num_examples": 4154}, {"name": "test", "num_bytes": 353643, "num_examples": 1029}], "download_size": 685462, "dataset_size": 1742333}}
2024-01-18T11:02:49+00:00
[]
[ "pl" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Polish #license-bsd-3-clause #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary The Did You Know (pol. Czy wiesz?) dataset consists of human-annotated question-answer pairs. The task is to predict if the answer is correct. We chose the negatives which have the largest token overlap with a question. ### Supported Tasks and Leaderboards ### Languages Polish ## Dataset Structure ### Data Instances ### Data Fields - q_id: question id - question: question sentence - answer: answer sentence - target: 1 if the answer is correct, 0 otherwise. Note that the test split doesn't have target values so -1 is used instead ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information CC BY-SA 3.0 ### Contributions Thanks to @abecadel for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe Did You Know (pol. Czy wiesz?) dataset consists of human-annotated question-answer pairs. The task is to predict if the answer is correct. We chose the negatives which have the largest token overlap with a question.", "### Supported Tasks and Leaderboards", "### Languages\n\nPolish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- q_id: question id\n- question: question sentence\n- answer: answer sentence\n- target: 1 if the answer is correct, 0 otherwise. Note that the test split doesn't have target values so -1 is used instead", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC BY-SA 3.0", "### Contributions\n\nThanks to @abecadel for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Polish #license-bsd-3-clause #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe Did You Know (pol. Czy wiesz?) dataset consists of human-annotated question-answer pairs. The task is to predict if the answer is correct. We chose the negatives which have the largest token overlap with a question.", "### Supported Tasks and Leaderboards", "### Languages\n\nPolish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- q_id: question id\n- question: question sentence\n- answer: answer sentence\n- target: 1 if the answer is correct, 0 otherwise. Note that the test split doesn't have target values so -1 is used instead", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC BY-SA 3.0", "### Contributions\n\nThanks to @abecadel for adding this dataset." ]
[ 94, 10, 120, 25, 62, 10, 6, 6, 6, 51, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 11, 17 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Polish #license-bsd-3-clause #region-us \n# Dataset Card for [Dataset Name]## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThe Did You Know (pol. Czy wiesz?) dataset consists of human-annotated question-answer pairs. The task is to predict if the answer is correct. We chose the negatives which have the largest token overlap with a question.### Supported Tasks and Leaderboards### Languages\n\nPolish## Dataset Structure### Data Instances### Data Fields\n\n- q_id: question id\n- question: question sentence\n- answer: answer sentence\n- target: 1 if the answer is correct, 0 otherwise. Note that the test split doesn't have target values so -1 is used instead### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators" ]
1b8962d197d195ee7e9700c2b9272fcfd964fb08
# Dataset Card for End-to-End NLG Challenge ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [homepage](http://www.macs.hw.ac.uk/InteractionLab/E2E/) - **Repository:** [repository](https://github.com/tuetschek/e2e-dataset/) - **Paper:** [paper](https://arxiv.org/abs/1706.09254) - **Leaderboard:** [leaderboard](http://www.macs.hw.ac.uk/InteractionLab/E2E/) ### Dataset Summary The E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. E2E is released in the following paper where you can find more details and baseline results: https://arxiv.org/abs/1706.09254 ### Supported Tasks and Leaderboards - `text2text-generation-other-meaning-representation-to-text`: The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations, which consists in taking as input some data about a restaurant and generate a sentence in natural language that presents the different aspects of the data about the restaurant.. Success on this task is typically measured by achieving a *high* [BLEU](https://huggingface.co/metrics/bleu), [NIST](https://huggingface.co/metrics/nist), [METEOR](https://huggingface.co/metrics/meteor), [Rouge-L](https://huggingface.co/metrics/rouge), [CIDEr](https://huggingface.co/metrics/cider). The TGen model (Dusek and Jurcıcek, 2016a) was used a baseline, had the following scores: | | BLEU | NIST | METEOR | ROUGE_L | CIDEr | | -------- | ------ | ------ | ------ | ------- | ------ | | BASELINE | 0.6593 | 8.6094 | 0.4483 | 0.6850 | 2.2338 | This task has an inactive leaderboard which can be found [here](http://www.macs.hw.ac.uk/InteractionLab/E2E/) and ranks models based on the metrics above. ### Languages The dataset is in english (en). ## Dataset Structure ### Data Instances Example of one instance: ``` {'human_reference': 'The Vaults pub near Café Adriatic has a 5 star rating. 
Prices start at £30.', 'meaning_representation': 'name[The Vaults], eatType[pub], priceRange[more than £30], customer rating[5 out of 5], near[Café Adriatic]'} ``` ### Data Fields - `human_reference`: string, the text is natural language that describes the different characteristics in the meaning representation - `meaning_representation`: list of slots and values to generate a description from Each MR consists of 3–8 attributes (slots), such as name, food or area, and their values. ### Data Splits The dataset is split into training, validation and testing sets (in a 76.5-8.5-15 ratio), keeping a similar distribution of MR and reference text lengths and ensuring that MRs in different sets are distinct. | | train | validation | test | | ----- |-------:|------------:|------:| | N. Instances | 42061 | 4672 | 4693 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization The data was collected using the CrowdFlower platform and quality-controlled following Novikova et al. (2016). #### Who are the source language producers? [More Information Needed] ### Annotations Following Novikova et al. (2016), the E2E data was collected using pictures as stimuli, which was shown to elicit significantly more natural, more informative, and better phrased human references than textual MRs. #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @article{dusek.etal2020:csl, title = {Evaluating the {{State}}-of-the-{{Art}} of {{End}}-to-{{End Natural Language Generation}}: {{The E2E NLG Challenge}}}, author = {Du{\v{s}}ek, Ond\v{r}ej and Novikova, Jekaterina and Rieser, Verena}, year = {2020}, month = jan, volume = {59}, pages = {123--156}, doi = {10.1016/j.csl.2019.06.009}, archivePrefix = {arXiv}, eprint = {1901.11528}, eprinttype = {arxiv}, journal = {Computer Speech \& Language} ``` ### Contributions Thanks to [@lhoestq](https://github.com/lhoestq) for adding this dataset.
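Since each meaning representation is a flat string of `slot[value]` pairs, it can help to see how the two fields line up in code. The sketch below assumes the `datasets` library is installed and that the id `e2e_nlg` resolves on the Hub; the `parse_mr` helper is purely illustrative and is not part of the dataset or its loading script.

```python
# Illustrative sketch: load the E2E data and split a meaning representation
# such as "name[The Vaults], eatType[pub], priceRange[more than £30]" into
# slot/value pairs. parse_mr is an assumed helper for illustration only.
import re
from datasets import load_dataset

dataset = load_dataset("e2e_nlg")

def parse_mr(mr: str) -> dict:
    """Turn 'slot1[value1], slot2[value2], ...' into a {slot: value} dict."""
    return dict(re.findall(r"\s*([^,\[]+)\[([^\]]*)\]", mr))

example = dataset["train"][0]
print(parse_mr(example["meaning_representation"]))  # 3-8 slots per MR
print(example["human_reference"])                   # the target natural-language text
```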
e2e_nlg
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "meaning-representation-to-text", "arxiv:1706.09254", "arxiv:1901.11528", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "e2e", "pretty_name": "End-to-End NLG Challenge", "tags": ["meaning-representation-to-text"], "dataset_info": {"features": [{"name": "meaning_representation", "dtype": "string"}, {"name": "human_reference", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9435824, "num_examples": 42061}, {"name": "validation", "num_bytes": 1171723, "num_examples": 4672}, {"name": "test", "num_bytes": 1320205, "num_examples": 4693}], "download_size": 11812316, "dataset_size": 11927752}}
2024-01-18T11:02:51+00:00
[ "1706.09254", "1901.11528" ]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #meaning-representation-to-text #arxiv-1706.09254 #arxiv-1901.11528 #region-us
Dataset Card for End-to-End NLG Challenge ========================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: homepage * Repository: repository * Paper: paper * Leaderboard: leaderboard ### Dataset Summary The E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. E2E is released in the following paper where you can find more details and baseline results: URL ### Supported Tasks and Leaderboards * 'text2text-generation-other-meaning-representation-to-text': The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations, which consists in taking as input some data about a restaurant and generate a sentence in natural language that presents the different aspects of the data about the restaurant.. Success on this task is typically measured by achieving a *high* BLEU, NIST, METEOR, Rouge-L, CIDEr. The TGen model (Dusek and Jurcıcek, 2016a) was used a baseline, had the following scores: This task has an inactive leaderboard which can be found here and ranks models based on the metrics above. ### Languages The dataset is in english (en). Dataset Structure ----------------- ### Data Instances Example of one instance: ### Data Fields * 'human\_reference': string, the text is natural language that describes the different characteristics in the meaning representation * 'meaning\_representation': list of slots and values to generate a description from Each MR consists of 3–8 attributes (slots), such as name, food or area, and their values. ### Data Splits The dataset is split into training, validation and testing sets (in a 76.5-8.5-15 ratio), keeping a similar distribution of MR and reference text lengths and ensuring that MRs in different sets are distinct. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The data was collected using the CrowdFlower platform and quality-controlled following Novikova et al. (2016). #### Who are the source language producers? ### Annotations Following Novikova et al. (2016), the E2E data was collected using pictures as stimuli, which was shown to elicit significantly more natural, more informative, and better phrased human references than textual MRs. #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nThe E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area.\nThe E2E dataset poses new challenges:\n(1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena;\n(2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances.\n\n\nE2E is released in the following paper where you can find more details and baseline results:\nURL", "### Supported Tasks and Leaderboards\n\n\n* 'text2text-generation-other-meaning-representation-to-text': The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations, which consists in taking as input some data about a restaurant and generate a sentence in natural language that presents the different aspects of the data about the restaurant.. Success on this task is typically measured by achieving a *high* BLEU, NIST, METEOR, Rouge-L, CIDEr. The TGen model (Dusek and Jurcıcek, 2016a) was used a baseline, had the following scores:\n\n\n\nThis task has an inactive leaderboard which can be found here and ranks models based on the metrics above.", "### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nExample of one instance:", "### Data Fields\n\n\n* 'human\\_reference': string, the text is natural language that describes the different characteristics in the meaning representation\n* 'meaning\\_representation': list of slots and values to generate a description from\n\n\nEach MR consists of 3–8 attributes (slots), such as name, food or area, and their values.", "### Data Splits\n\n\nThe dataset is split into training, validation and testing sets (in a 76.5-8.5-15 ratio), keeping a similar distribution of MR and reference text lengths and ensuring that MRs in different sets are distinct.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data was collected using the CrowdFlower platform and quality-controlled following Novikova et al. (2016).", "#### Who are the source language producers?", "### Annotations\n\n\nFollowing Novikova et al. (2016), the E2E data was collected using pictures as stimuli, which was shown to elicit significantly more natural, more informative, and better phrased human references than textual MRs.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #meaning-representation-to-text #arxiv-1706.09254 #arxiv-1901.11528 #region-us \n", "### Dataset Summary\n\n\nThe E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area.\nThe E2E dataset poses new challenges:\n(1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena;\n(2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances.\n\n\nE2E is released in the following paper where you can find more details and baseline results:\nURL", "### Supported Tasks and Leaderboards\n\n\n* 'text2text-generation-other-meaning-representation-to-text': The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations, which consists in taking as input some data about a restaurant and generate a sentence in natural language that presents the different aspects of the data about the restaurant.. Success on this task is typically measured by achieving a *high* BLEU, NIST, METEOR, Rouge-L, CIDEr. The TGen model (Dusek and Jurcıcek, 2016a) was used a baseline, had the following scores:\n\n\n\nThis task has an inactive leaderboard which can be found here and ranks models based on the metrics above.", "### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nExample of one instance:", "### Data Fields\n\n\n* 'human\\_reference': string, the text is natural language that describes the different characteristics in the meaning representation\n* 'meaning\\_representation': list of slots and values to generate a description from\n\n\nEach MR consists of 3–8 attributes (slots), such as name, food or area, and their values.", "### Data Splits\n\n\nThe dataset is split into training, validation and testing sets (in a 76.5-8.5-15 ratio), keeping a similar distribution of MR and reference text lengths and ensuring that MRs in different sets are distinct.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data was collected using the CrowdFlower platform and quality-controlled following Novikova et al. (2016).", "#### Who are the source language producers?", "### Annotations\n\n\nFollowing Novikova et al. (2016), the E2E data was collected using pictures as stimuli, which was shown to elicit significantly more natural, more informative, and better phrased human references than textual MRs.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @lhoestq for adding this dataset." ]
[ 114, 143, 173, 20, 12, 80, 60, 7, 4, 36, 10, 54, 5, 9, 18, 7, 8, 14, 6, 6, 17 ]
[ "passage: TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #meaning-representation-to-text #arxiv-1706.09254 #arxiv-1901.11528 #region-us \n### Dataset Summary\n\n\nThe E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area.\nThe E2E dataset poses new challenges:\n(1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena;\n(2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances.\n\n\nE2E is released in the following paper where you can find more details and baseline results:\nURL### Supported Tasks and Leaderboards\n\n\n* 'text2text-generation-other-meaning-representation-to-text': The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations, which consists in taking as input some data about a restaurant and generate a sentence in natural language that presents the different aspects of the data about the restaurant.. Success on this task is typically measured by achieving a *high* BLEU, NIST, METEOR, Rouge-L, CIDEr. The TGen model (Dusek and Jurcıcek, 2016a) was used a baseline, had the following scores:\n\n\n\nThis task has an inactive leaderboard which can be found here and ranks models based on the metrics above.### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nExample of one instance:" ]
bccd5397cf8257ecdae03078fd7195604cf1add9
# Dataset Card for the Cleaned Version of the E2E Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [homepage](http://www.macs.hw.ac.uk/InteractionLab/E2E/) - **Repository:** [repository](https://github.com/tuetschek/e2e-dataset/) - **Paper:** [paper](https://arxiv.org/abs/1706.09254) - **Leaderboard:** [leaderboard](http://www.macs.hw.ac.uk/InteractionLab/E2E/) ### Dataset Summary An update release of E2E NLG Challenge data with cleaned MRs and scripts, accompanying the following paper: The E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. E2E is released in the following paper where you can find more details and baseline results: https://arxiv.org/abs/1706.09254 ### Supported Tasks and Leaderboards - `text2text-generation-other-meaning-representtion-to-text`: The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations, which consists in taking as input some data about a restaurant and generate a sentence in natural language that presents the different aspects of the data about the restaurant.. Success on this task is typically measured by achieving a *high* [BLEU](https://huggingface.co/metrics/bleu), [NIST](https://huggingface.co/metrics/nist), [METEOR](https://huggingface.co/metrics/meteor), [Rouge-L](https://huggingface.co/metrics/rouge), [CIDEr](https://huggingface.co/metrics/cider). This task has an inactive leaderboard which can be found [here](http://www.macs.hw.ac.uk/InteractionLab/E2E/) and ranks models based on the metrics above. ### Languages The dataset is in english (en). ## Dataset Structure ### Data Instances Example of one instance: ``` {'human_reference': 'The Vaults pub near Café Adriatic has a 5 star rating. 
Prices start at £30.', 'meaning_representation': 'name[The Vaults], eatType[pub], priceRange[more than £30], customer rating[5 out of 5], near[Café Adriatic]'} ``` ### Data Fields - `human_reference`: string, the text is natural language that describes the different characteristics in the meaning representation - `meaning_representation`: list of slots and values to generate a description from Each MR consists of 3–8 attributes (slots), such as name, food or area, and their values. ### Data Splits The dataset is split into training, validation and testing sets (in a 76.5-8.5-15 ratio), keeping a similar distribution of MR and reference text lengths and ensuring that MRs in different sets are distinct. | | train | validation | test | |--------------|------:|-----------:|-----:| | N. Instances | 33525 | 4299 | 4693 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization The data was collected using the CrowdFlower platform and quality-controlled following Novikova et al. (2016). #### Who are the source language producers? [More Information Needed] ### Annotations Following Novikova et al. (2016), the E2E data was collected using pictures as stimuli, which was shown to elicit significantly more natural, more informative, and better phrased human references than textual MRs. #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @article{dusek.etal2020:csl, title = {Evaluating the {{State}}-of-the-{{Art}} of {{End}}-to-{{End Natural Language Generation}}: {{The E2E NLG Challenge}}}, author = {Du{\v{s}}ek, Ond\v{r}ej and Novikova, Jekaterina and Rieser, Verena}, year = {2020}, month = jan, volume = {59}, pages = {123--156}, doi = {10.1016/j.csl.2019.06.009}, archivePrefix = {arXiv}, eprint = {1901.11528}, eprinttype = {arxiv}, journal = {Computer Speech \& Language} ``` ### Contributions Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
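The leaderboard ranks systems with the word-overlap metrics named above. As a rough approximation of the BLEU component only, generated outputs can be scored with `sacrebleu`; note that this is a sketch under assumptions, since the official challenge used its own evaluation scripts and scores each MR against multiple human references rather than the single reference shown here.

```python
# Rough sketch: score system outputs against human references with sacreBLEU.
# Assumes `pip install sacrebleu`; the hypotheses list stands in for whatever a
# trained MR-to-text model actually generates.
import sacrebleu

references = [
    "The Vaults pub near Café Adriatic has a 5 star rating. Prices start at £30.",
]
hypotheses = [
    "The Vaults is a pub near Café Adriatic with a 5 star rating and prices above £30.",
]

score = sacrebleu.corpus_bleu(hypotheses, [references])
print(score.score)  # corpus-level BLEU on a 0-100 scale
```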
e2e_nlg_cleaned
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "meaning-representation-to-text", "arxiv:1706.09254", "arxiv:1901.11528", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "the Cleaned Version of the E2E Dataset", "tags": ["meaning-representation-to-text"], "dataset_info": {"features": [{"name": "meaning_representation", "dtype": "string"}, {"name": "human_reference", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7474936, "num_examples": 33525}, {"name": "validation", "num_bytes": 1056527, "num_examples": 4299}, {"name": "test", "num_bytes": 1262597, "num_examples": 4693}], "download_size": 14597407, "dataset_size": 9794060}}
2024-01-18T11:02:52+00:00
[ "1706.09254", "1901.11528" ]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #meaning-representation-to-text #arxiv-1706.09254 #arxiv-1901.11528 #region-us
Dataset Card for the Cleaned Version of the E2E Dataset ======================================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: homepage * Repository: repository * Paper: paper * Leaderboard: leaderboard ### Dataset Summary An update release of E2E NLG Challenge data with cleaned MRs and scripts, accompanying the following paper: The E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. E2E is released in the following paper where you can find more details and baseline results: URL ### Supported Tasks and Leaderboards * 'text2text-generation-other-meaning-representtion-to-text': The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations, which consists in taking as input some data about a restaurant and generate a sentence in natural language that presents the different aspects of the data about the restaurant.. Success on this task is typically measured by achieving a *high* BLEU, NIST, METEOR, Rouge-L, CIDEr. This task has an inactive leaderboard which can be found here and ranks models based on the metrics above. ### Languages The dataset is in english (en). Dataset Structure ----------------- ### Data Instances Example of one instance: ### Data Fields * 'human\_reference': string, the text is natural language that describes the different characteristics in the meaning representation * 'meaning\_representation': list of slots and values to generate a description from Each MR consists of 3–8 attributes (slots), such as name, food or area, and their values. ### Data Splits The dataset is split into training, validation and testing sets (in a 76.5-8.5-15 ratio), keeping a similar distribution of MR and reference text lengths and ensuring that MRs in different sets are distinct. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The data was collected using the CrowdFlower platform and quality-controlled following Novikova et al. (2016). #### Who are the source language producers? ### Annotations Following Novikova et al. (2016), the E2E data was collected using pictures as stimuli, which was shown to elicit significantly more natural, more informative, and better phrased human references than textual MRs. #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @yjernite for adding this dataset.
[ "### Dataset Summary\n\n\nAn update release of E2E NLG Challenge data with cleaned MRs and scripts, accompanying the following paper:\n\n\nThe E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area.\nThe E2E dataset poses new challenges:\n(1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena;\n(2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances.\n\n\nE2E is released in the following paper where you can find more details and baseline results:\nURL", "### Supported Tasks and Leaderboards\n\n\n* 'text2text-generation-other-meaning-representtion-to-text': The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations, which consists in taking as input some data about a restaurant and generate a sentence in natural language that presents the different aspects of the data about the restaurant.. Success on this task is typically measured by achieving a *high* BLEU, NIST, METEOR, Rouge-L, CIDEr.\n\n\nThis task has an inactive leaderboard which can be found here and ranks models based on the metrics above.", "### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nExample of one instance:", "### Data Fields\n\n\n* 'human\\_reference': string, the text is natural language that describes the different characteristics in the meaning representation\n* 'meaning\\_representation': list of slots and values to generate a description from\n\n\nEach MR consists of 3–8 attributes (slots), such as name, food or area, and their values.", "### Data Splits\n\n\nThe dataset is split into training, validation and testing sets (in a 76.5-8.5-15 ratio), keeping a similar distribution of MR and reference text lengths and ensuring that MRs in different sets are distinct.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data was collected using the CrowdFlower platform and quality-controlled following Novikova et al. (2016).", "#### Who are the source language producers?", "### Annotations\n\n\nFollowing Novikova et al. (2016), the E2E data was collected using pictures as stimuli, which was shown to elicit significantly more natural, more informative, and better phrased human references than textual MRs.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @yjernite for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #meaning-representation-to-text #arxiv-1706.09254 #arxiv-1901.11528 #region-us \n", "### Dataset Summary\n\n\nAn update release of E2E NLG Challenge data with cleaned MRs and scripts, accompanying the following paper:\n\n\nThe E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area.\nThe E2E dataset poses new challenges:\n(1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena;\n(2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances.\n\n\nE2E is released in the following paper where you can find more details and baseline results:\nURL", "### Supported Tasks and Leaderboards\n\n\n* 'text2text-generation-other-meaning-representtion-to-text': The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations, which consists in taking as input some data about a restaurant and generate a sentence in natural language that presents the different aspects of the data about the restaurant.. Success on this task is typically measured by achieving a *high* BLEU, NIST, METEOR, Rouge-L, CIDEr.\n\n\nThis task has an inactive leaderboard which can be found here and ranks models based on the metrics above.", "### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nExample of one instance:", "### Data Fields\n\n\n* 'human\\_reference': string, the text is natural language that describes the different characteristics in the meaning representation\n* 'meaning\\_representation': list of slots and values to generate a description from\n\n\nEach MR consists of 3–8 attributes (slots), such as name, food or area, and their values.", "### Data Splits\n\n\nThe dataset is split into training, validation and testing sets (in a 76.5-8.5-15 ratio), keeping a similar distribution of MR and reference text lengths and ensuring that MRs in different sets are distinct.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data was collected using the CrowdFlower platform and quality-controlled following Novikova et al. (2016).", "#### Who are the source language producers?", "### Annotations\n\n\nFollowing Novikova et al. (2016), the E2E data was collected using pictures as stimuli, which was shown to elicit significantly more natural, more informative, and better phrased human references than textual MRs.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @yjernite for adding this dataset." ]
[ 114, 170, 147, 20, 12, 80, 60, 7, 4, 36, 10, 54, 5, 9, 18, 7, 8, 14, 6, 6, 17 ]
[ "passage: TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #meaning-representation-to-text #arxiv-1706.09254 #arxiv-1901.11528 #region-us \n### Dataset Summary\n\n\nAn update release of E2E NLG Challenge data with cleaned MRs and scripts, accompanying the following paper:\n\n\nThe E2E dataset is used for training end-to-end, data-driven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area.\nThe E2E dataset poses new challenges:\n(1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena;\n(2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances.\n\n\nE2E is released in the following paper where you can find more details and baseline results:\nURL### Supported Tasks and Leaderboards\n\n\n* 'text2text-generation-other-meaning-representtion-to-text': The dataset can be used to train a model to generate descriptions in the restaurant domain from meaning representations, which consists in taking as input some data about a restaurant and generate a sentence in natural language that presents the different aspects of the data about the restaurant.. Success on this task is typically measured by achieving a *high* BLEU, NIST, METEOR, Rouge-L, CIDEr.\n\n\nThis task has an inactive leaderboard which can be found here and ranks models based on the metrics above.### Languages\n\n\nThe dataset is in english (en).\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nExample of one instance:" ]
c60b38a838ae5880f4a79a84827402ec2fd91d46
# Dataset Card for extension to the EventCorefBank ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/ECB.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary To load a language pair which isn't part of the config, all you need to do is specify the language code as pairs. You can find the valid pairs in Homepage section of Dataset Description: http://opus.nlpl.eu/ECB.php E.g. `dataset = load_dataset("ecb", lang1="en", lang2="fi")` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Here are some examples of questions and facts: ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
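Building on the loading example in the summary, a short sketch of reading the aligned sentences is shown below. It assumes the `datasets` library is installed and that the `lang1`/`lang2` keyword arguments behave exactly as the card states.

```python
# Sketch: load an on-the-fly ECB language pair and read one aligned sentence
# pair from the `translation` field, which is keyed by language code.
# Assumes `pip install datasets` and the lang1/lang2 kwargs shown in the card.
from datasets import load_dataset

dataset = load_dataset("ecb", lang1="en", lang2="fi")

pair = dataset["train"][0]["translation"]
print(pair["en"])  # English side
print(pair["fi"])  # Finnish side
```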
ecb
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:sk", "language:sl", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "sk", "sl"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "ecb", "pretty_name": "extension to the EventCorefBank", "dataset_info": [{"config_name": "de-fr", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["de", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 39514115, "num_examples": 105116}], "download_size": 10326178, "dataset_size": 39514115}, {"config_name": "cs-en", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["cs", "en"]}}}], "splits": [{"name": "train", "num_bytes": 19524831, "num_examples": 63716}], "download_size": 5360485, "dataset_size": 19524831}, {"config_name": "el-it", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["el", "it"]}}}], "splits": [{"name": "train", "num_bytes": 47300471, "num_examples": 94712}], "download_size": 10394277, "dataset_size": 47300471}, {"config_name": "en-nl", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "nl"]}}}], "splits": [{"name": "train", "num_bytes": 43118164, "num_examples": 126482}], "download_size": 11360895, "dataset_size": 43118164}, {"config_name": "fi-pl", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["fi", "pl"]}}}], "splits": [{"name": "train", "num_bytes": 12973283, "num_examples": 41686}], "download_size": 3521950, "dataset_size": 12973283}]}
2024-01-18T11:02:53+00:00
[]
[ "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "sk", "sl" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Polish #language-Portuguese #language-Slovak #language-Slovenian #license-unknown #region-us
# Dataset Card for extension to the EventCorefBank ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: None - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary To load a language pair which isn't part of the config, all you need to do is specify the language code as pairs. You can find the valid pairs in Homepage section of Dataset Description: URL E.g. 'dataset = load_dataset("ecb", lang1="en", lang2="fi")' ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances Here are some examples of questions and facts: ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @abhishekkrthakur for adding this dataset.
[ "# Dataset Card for extension to the EventCorefBank", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n'dataset = load_dataset(\"ecb\", lang1=\"en\", lang2=\"fi\")'", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere are some examples of questions and facts:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Polish #language-Portuguese #language-Slovak #language-Slovenian #license-unknown #region-us \n", "# Dataset Card for extension to the EventCorefBank", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n'dataset = load_dataset(\"ecb\", lang1=\"en\", lang2=\"fi\")'", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere are some examples of questions and facts:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ 176, 12, 120, 28, 80, 10, 4, 6, 17, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 20 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Polish #language-Portuguese #language-Slovak #language-Slovenian #license-unknown #region-us \n# Dataset Card for extension to the EventCorefBank## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n'dataset = load_dataset(\"ecb\", lang1=\"en\", lang2=\"fi\")'### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances\n\nHere are some examples of questions and facts:### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations" ]
1351ba4211ed9bc44692e6a1e2f237c4e3775b41
# Dataset Card for the ECtHR cases dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://archive.org/details/ECtHR-NAACL2021/ - **Repository:** http://archive.org/details/ECtHR-NAACL2021/ - **Paper:** https://arxiv.org/abs/2103.13084 - **Leaderboard:** TBA - **Point of Contact:** [Ilias Chalkidis](mailto:ihalk@aueb.gr) ### Dataset Summary The European Court of Human Rights (ECtHR) hears allegations regarding breaches in human rights provisions of the European Convention of Human Rights (ECHR) by European states. The Convention is available at https://www.echr.coe.int/Documents/Convention_ENG.pdf. The court rules on a subset of all ECHR articles, which are predefined (alleged) by the applicants (*plaintiffs*). Our dataset comprises 11k ECtHR cases and can be viewed as an enriched version of the ECtHR dataset of Chalkidis et al. (2019), which did not provide ground truth for alleged article violations (articles discussed) and rationales. The new dataset includes the following: **Facts:** Each judgment includes a list of paragraphs that represent the facts of the case, i.e., they describe the main events that are relevant to the case, in numbered paragraphs. We hereafter call these paragraphs *facts* for simplicity. Note that the facts are presented in chronological order. Not all facts have the same impact or hold crucial information with respect to alleged article violations and the court's assessment; i.e., facts may refer to information that is trivial or otherwise irrelevant to the legally crucial allegations against *defendant* states. **Allegedly violated articles:** Judges rule on specific accusations (allegations) made by the applicants (Harris, 2018). In ECtHR cases, the judges discuss and rule on the violation, or not, of specific articles of the Convention. The articles to be discussed (and ruled on) are put forward (as alleged article violations) by the applicants and are included in the dataset as ground truth; we identify 40 violable articles in total. The rest of the articles are procedural, i.e., the number of judges, criteria for office, election of judges, etc. In our experiments, however, the models are not aware of the allegations. They predict the Convention articles that will be discussed (the allegations) based on the case's facts, and they also produce rationales for their predictions. 
Models of this kind could be used by potential applicants to help them formulate future allegations (articles they could claim to have been violated), as already noted, but here we mainly use the task as a test-bed for rationale extraction. **Violated articles:** The court decides which allegedly violated articles have indeed been violated. These decisions are also included in our dataset and could be used for full legal judgment prediction experiments (Chalkidis et al., 2019). However, they are not used in the experiments of this work. **Silver allegation rationales:** Each decision of the ECtHR includes references to facts of the case (e.g., *"See paragraphs 2 and 4."*) and case law (e.g., *"See Draci vs. Russia (2010)"*.). We identified references to each case's facts and retrieved the corresponding paragraphs using regular expressions. These are included in the dataset as silver allegation rationales, on the grounds that the judges refer to these paragraphs when ruling on the allegations. **Gold allegation rationales:** A legal expert with experience in ECtHR cases annotated a subset of 50 test cases to identify the relevant facts (paragraphs) of the case that support the allegations (alleged article violations). In other words, each identified fact justifies (hints at) one or more alleged violations. ### Supported Tasks and Leaderboards The dataset supports: **Alleged violation prediction** (`alleged-violation-prediction`): A multi-label text classification task where, given the facts of an ECtHR case, a model predicts which of the 40 violable ECHR articles were allegedly violated according to the applicant(s). Consult Chalkidis et al. (2021) for details. **Violation prediction** (`violation-prediction`): A multi-label text classification task where, given the facts of an ECtHR case, a model predicts which of the allegedly violated ECHR articles were violated, as decided (ruled) by the ECtHR court. Consult Chalkidis et al. (2019) for details. **Rationale extraction:** A model can also predict the facts of the case that most prominently support its decision with respect to a classification task. Silver rationales can be used for both classification tasks, while gold rationales are only focused on the *alleged violation prediction* task. ### Languages All documents are written in English. ## Dataset Structure ### Data Instances This example was too long and was cropped: ```json { "facts": [ "8. In 1991 Mr Dusan Slobodnik, a research worker in the field of literature, ...", "9. On 20 July 1992 the newspaper Telegraf published a poem by the applicant.", "10. The poem was later published in another newspaper.", "...", "39. The City Court further dismissed the claim in respect of non-pecuniary damage ... ", "40. The City Court ordered the plaintiff to pay SKK 56,780 to the applicant ...", "41. On 25 November 1998 the Supreme Court upheld the decision of the Bratislava City Court ..." ], "labels": ["14", "10", "9", "36"], "silver_rationales": [27], "gold_rationales": [] } ``` ### Data Fields `facts`: (**List[str]**) The paragraphs (facts) of the case.\ `labels`: (**List[str]**) The ECHR articles under discussion (*Allegedly violated articles*); or the allegedly violated ECHR articles that were found to be violated by the court (judges).\ `silver_rationales`: (**List[int]**) Indices of the paragraphs (facts) that are present in the court's assessment.\ `gold_rationales`: (**List[int]**) Indices of the paragraphs (facts) that support alleged violations, according to a legal expert.
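To make the two classification tasks concrete, here is a minimal, illustrative sketch (not the authors' code) that loads the `alleged-violation-prediction` configuration, joins each case's fact paragraphs into one document, and binarizes the list-valued labels for multi-label training; scikit-learn is assumed only for the label-binarization step.

```python
# Illustrative sketch only: prepare ECtHR cases for multi-label classification.
from datasets import load_dataset
from sklearn.preprocessing import MultiLabelBinarizer  # assumed helper, not required by the dataset

dataset = load_dataset("ecthr_cases", "alleged-violation-prediction")

# A simple baseline concatenates the fact paragraphs of each case into one document;
# hierarchical models would keep the paragraph structure instead.
train_texts = ["\n".join(case_facts) for case_facts in dataset["train"]["facts"]]

# Turn the list-valued `labels` field (allegedly violated ECHR articles)
# into a binary indicator matrix over the violable articles.
mlb = MultiLabelBinarizer()
y_train = mlb.fit_transform(dataset["train"]["labels"])
print(len(train_texts), "training cases,", len(mlb.classes_), "distinct articles")

# Silver rationales are indices into the `facts` list of the same case.
case = dataset["train"][0]
rationale_paragraphs = [case["facts"][i] for i in case["silver_rationales"]]
```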
### Data Splits | Split | No of ECtHR cases | Silver rationales ratio | Avg. allegations / case | | ------------------- | ------------------------------------ | --- | --- | | Train | 9,000 | 24% | 1.8 | |Development | 1,000 | 30% | 1.7 | |Test | 1,000 | 31% | 1.7 | ## Dataset Creation ### Curation Rationale The dataset was curated by Chalkidis et al. (2021).\ The annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School). ### Source Data #### Initial Data Collection and Normalization The original data are available at HUDOC database (https://hudoc.echr.coe.int/eng) in an unprocessed format. The data were downloaded and all information was extracted from the HTML files and several JSON metadata files. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process * The original documents are available in HTML format at HUDOC database (https://hudoc.echr.coe.int/eng), except the gold rationales. The metadata are provided by additional JSON files, produced by REST services. * The annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School). #### Who are the annotators? Dimitris Tsarapatsanis (Lecturer, York Law School). ### Personal and Sensitive Information Privacy statement / Protection of personal data from HUDOC (https://www.echr.coe.int/Pages/home.aspx?p=privacy) ``` The Court complies with the Council of Europe's policy on protection of personal data, in so far as this is consistent with exercising its functions under the European Convention on Human Rights. The Council of Europe is committed to respect for private life. Its policy on protection of personal data is founded on the Secretary General’s Regulation of 17 April 1989 outlining a data protection system for personal data files in the Council of Europe. Most pages of the Council of Europe site require no personal information except in certain cases to allow requests for on-line services to be met. In such cases, the information is processed in accordance with the Confidentiality policy described below. ``` ## Considerations for Using the Data ### Social Impact of Dataset The publication of this dataset complies with the ECtHR data policy (https://www.echr.coe.int/Pages/home.aspx?p=privacy). By no means do we aim to build a 'robot' lawyer or judge, and we acknowledge the possible harmful impact (Angwin et al., 2016, Dressel et al., 2018) of irresponsible deployment. Instead, we aim to support fair and explainable AI-assisted judicial decision making and empirical legal studies. For example, automated services can help applicants (plaintiffs) identify alleged violations that are supported by the facts of a case. They can help judges identify more quickly facts that support the alleged violations, contributing towards more informed judicial decision making (Zhong et al., 2020). They can also help legal experts identify previous cases related to particular allegations, helping analyze case law (Katz et al., 2012). Also, consider ongoing critical research on responsible AI (Elish et al., 2021) that aims to provide explainable and fair systems to support human experts. ### Discussion of Biases Consider the work of Chalkidis et al. (2019) for the identification of demographic bias by models. 
### Other Known Limitations N/A ## Additional Information ### Dataset Curators Ilias Chalkidis and Dimitris Tsarapatsanis ### Licensing Information **CC BY-NC-SA (Creative Commons / Attribution-NonCommercial-ShareAlike)** Read more: https://creativecommons.org/licenses/by-nc-sa/4.0/. ### Citation Information *Ilias Chalkidis, Manos Fergadiotis, Dimitrios Tsarapatsanis, Nikolaos Aletras, Ion Androutsopoulos and Prodromos Malakasiotis. Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases.* *Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021). Mexico City, Mexico. 2021.* ``` @InProceedings{chalkidis-et-al-2021-ecthr, title = "Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases", author = "Chalkidis, Ilias and Fergadiotis, Manos and Tsarapatsanis, Dimitrios and Aletras, Nikolaos and Androutsopoulos, Ion and Malakasiotis, Prodromos", booktitle = "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics", year = "2021", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics" } ``` *Ilias Chalkidis, Ion Androutsopoulos and Nikolaos Aletras. Neural Legal Judgment Prediction in English.* *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019.* ``` @InProceedings{chalkidis-etal-2019-neural, title = "Neural Legal Judgment Prediction in {E}nglish", author = "Chalkidis, Ilias and Androutsopoulos, Ion and Aletras, Nikolaos", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1424", doi = "10.18653/v1/P19-1424", pages = "4317--4323" } ``` ### Contributions Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
ecthr_cases
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "rationale-extraction", "legal-judgment-prediction", "arxiv:2103.13084", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated", "found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "paperswithcode_id": "ecthr", "pretty_name": "European Court of Human Rights Cases", "tags": ["rationale-extraction", "legal-judgment-prediction"], "dataset_info": [{"config_name": "alleged-violation-prediction", "features": [{"name": "facts", "sequence": "string"}, {"name": "labels", "sequence": "string"}, {"name": "silver_rationales", "sequence": "int32"}, {"name": "gold_rationales", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 89835266, "num_examples": 9000}, {"name": "test", "num_bytes": 11917598, "num_examples": 1000}, {"name": "validation", "num_bytes": 11015998, "num_examples": 1000}], "download_size": 32815448, "dataset_size": 112768862}, {"config_name": "violation-prediction", "features": [{"name": "facts", "sequence": "string"}, {"name": "labels", "sequence": "string"}, {"name": "silver_rationales", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 89776410, "num_examples": 9000}, {"name": "test", "num_bytes": 11909314, "num_examples": 1000}, {"name": "validation", "num_bytes": 11009350, "num_examples": 1000}], "download_size": 32815448, "dataset_size": 112695074}]}
2024-01-18T11:02:54+00:00
[ "2103.13084" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #rationale-extraction #legal-judgment-prediction #arxiv-2103.13084 #region-us
Dataset Card for the ECtHR cases dataset ======================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: TBA * Point of Contact: Ilias Chalkidis ### Dataset Summary The European Court of Human Rights (ECtHR) hears allegations regarding breaches in human rights provisions of the European Convention of Human Rights (ECHR) by European states. The Convention is available at URL The court rules on a subset of all ECHR articles, which are predefined (alleged) by the applicants (*plaintiffs*). Our dataset comprises 11k ECtHR cases and can be viewed as an enriched version of the ECtHR dataset of Chalkidis et al. (2019), which did not provide ground truth for alleged article violations (articles discussed) and rationales. The new dataset includes the following: Facts: Each judgment includes a list of paragraphs that represent the facts of the case, i.e., they describe the main events that are relevant to the case, in numbered paragraphs. We hereafter call these paragraphs *facts* for simplicity. Note that the facts are presented in chronological order. Not all facts have the same impact or hold crucial information with respect to alleged article violations and the court's assessment; i.e., facts may refer to information that is trivial or otherwise irrelevant to the legally crucial allegations against *defendant* states. Allegedly violated articles: Judges rule on specific accusations (allegations) made by the applicants (Harris, 2018). In ECtHR cases, the judges discuss and rule on the violation, or not, of specific articles of the Convention. The articles to be discussed (and ruled on) are put forward (as alleged article violations) by the applicants and are included in the dataset as ground truth; we identify 40 violable articles in total. The rest of the articles are procedural, i.e., the number of judges, criteria for office, election of judges, etc. In our experiments, however, the models are not aware of the allegations. They predict the Convention articles that will be discussed (the allegations) based on the case's facts, and they also produce rationales for their predictions. Models of this kind could be used by potential applicants to help them formulate future allegations (articles they could claim to have been violated), as already noted, but here we mainly use the task as a test-bed for rationale extraction. Violated articles: The court decides which allegedly violated articles have indeed been violated. These decisions are also included in our dataset and could be used for full legal judgment prediction experiments (Chalkidis et al., 2019). However, they are not used in the experiments of this work. Silver allegation rationales: Each decision of the ECtHR includes references to facts of the case (e.g., *"See paragraphs 2 and 4."*) and case law (e.g., *"See Draci vs. Russia (2010)"*.). We identified references to each case's facts and retrieved the corresponding paragraphs using regular expressions. 
These are included in the dataset as silver allegation rationales, on the grounds that the judges refer to these paragraphs when ruling on the allegations. Gold allegation rationales: A legal expert with experience in ECtHR cases annotated a subset of 50 test cases to identify the relevant facts (paragraphs) of the case that support the allegations (alleged article violations). In other words, each identified fact justifies (hints) one or more alleged violations. ### Supported Tasks and Leaderboards The dataset supports: Alleged violation prediction ('alleged-violation-prediction'): A multi-label text classification task where, given the facts of a ECtHR case, a model predicts which of the 40 violable ECHR articles were allegedly violated according to the applicant(s). Consult Chalkidis et al. (2021), for details. Violation prediction ('violation-prediction'): A multi-label text classification task where, given the facts of a ECtHR case, a model predicts which of the allegedly violated ECHR articles were violated, as decided (ruled) by the ECtHR court. Consult Chalkidis et al. (2019), for details. Rationale extraction: A model can also predict the facts of the case that most prominently support its decision with respect to a classification task. Silver rationales can be used for both classification tasks, while gold rationales are only focused on the *alleged violation prediction* task. ### Languages All documents are written in English. Dataset Structure ----------------- ### Data Instances This example was too long and was cropped: ### Data Fields 'facts': (List[str]) The paragraphs (facts) of the case. 'labels': (List[str]) The ECHR articles under discussion (*Allegedly violated articles*); or the allegedly violated ECHR articles that found to be violated by the court (judges). 'silver\_rationales': (List[int]) Indices of the paragraphs (facts) that are present in the court's assessment. 'gold\_rationales': (List[int]) Indices of the paragraphs (facts) that support alleged violations, according to a legal expert. ### Data Splits Dataset Creation ---------------- ### Curation Rationale The dataset was curated by Chalkidis et al. (2021). The annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School). ### Source Data #### Initial Data Collection and Normalization The original data are available at HUDOC database (URL in an unprocessed format. The data were downloaded and all information was extracted from the HTML files and several JSON metadata files. #### Who are the source language producers? ### Annotations #### Annotation process * The original documents are available in HTML format at HUDOC database (URL except the gold rationales. The metadata are provided by additional JSON files, produced by REST services. * The annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School). #### Who are the annotators? Dimitris Tsarapatsanis (Lecturer, York Law School). ### Personal and Sensitive Information Privacy statement / Protection of personal data from HUDOC (URL Considerations for Using the Data --------------------------------- ### Social Impact of Dataset The publication of this dataset complies with the ECtHR data policy (URL By no means do we aim to build a 'robot' lawyer or judge, and we acknowledge the possible harmful impact (Angwin et al., 2016, Dressel et al., 2018) of irresponsible deployment. 
Instead, we aim to support fair and explainable AI-assisted judicial decision making and empirical legal studies. For example, automated services can help applicants (plaintiffs) identify alleged violations that are supported by the facts of a case. They can help judges identify more quickly facts that support the alleged violations, contributing towards more informed judicial decision making (Zhong et al., 2020). They can also help legal experts identify previous cases related to particular allegations, helping analyze case law (Katz et al., 2012). Also, consider ongoing critical research on responsible AI (Elish et al., 2021) that aims to provide explainable and fair systems to support human experts. ### Discussion of Biases Consider the work of Chalkidis et al. (2019) for the identification of demographic bias by models. ### Other Known Limitations N/A Additional Information ---------------------- ### Dataset Curators Ilias Chalkidis and Dimitris Tsarapatsanis ### Licensing Information CC BY-NC-SA (Creative Commons / Attribution-NonCommercial-ShareAlike) Read more: URL *Ilias Chalkidis, Manos Fergadiotis, Dimitrios Tsarapatsanis, Nikolaos Aletras, Ion Androutsopoulos and Prodromos Malakasiotis. Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases.* *Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021). Mexico City, Mexico. 2021.* *Ilias Chalkidis, Ion Androutsopoulos and Nikolaos Aletras. Neural Legal Judgment Prediction in English.* *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019.* ### Contributions Thanks to @iliaschalkidis for adding this dataset.
[ "### Dataset Summary\n\n\nThe European Court of Human Rights (ECtHR) hears allegations regarding breaches in human rights provisions of the European Convention of Human Rights (ECHR) by European states. The Convention is available at URL\nThe court rules on a subset of all ECHR articles, which are predefined (alleged) by the applicants (*plaintiffs*).\n\n\nOur dataset comprises 11k ECtHR cases and can be viewed as an enriched version of the ECtHR dataset of Chalkidis et al. (2019), which did not provide ground truth for alleged article violations (articles discussed) and rationales. The new dataset includes the following:\n\n\nFacts: Each judgment includes a list of paragraphs that represent the facts of the case, i.e., they describe the main events that are relevant to the case, in numbered paragraphs. We hereafter call these paragraphs *facts* for simplicity. Note that the facts are presented in chronological order. Not all facts have the same impact or hold crucial information with respect to alleged article violations and the court's assessment; i.e., facts may refer to information that is trivial or otherwise irrelevant to the legally crucial allegations against *defendant* states.\n\n\nAllegedly violated articles: Judges rule on specific accusations (allegations) made by the applicants (Harris, 2018). In ECtHR cases, the judges discuss and rule on the violation, or not, of specific articles of the Convention. The articles to be discussed (and ruled on) are put forward (as alleged article violations) by the applicants and are included in the dataset as ground truth; we identify 40 violable articles in total. The rest of the articles are procedural, i.e., the number of judges, criteria for office, election of judges, etc. In our experiments, however, the models are not aware of the allegations. They predict the Convention articles that will be discussed (the allegations) based on the case's facts, and they also produce rationales for their predictions. Models of this kind could be used by potential applicants to help them formulate future allegations (articles they could claim to have been violated), as already noted, but here we mainly use the task as a test-bed for rationale extraction.\n\n\nViolated articles: The court decides which allegedly violated articles have indeed been violated. These decisions are also included in our dataset and could be used for full legal judgment prediction experiments (Chalkidis et al., 2019). However, they are not used in the experiments of this work.\n\n\nSilver allegation rationales: Each decision of the ECtHR includes references to facts of the case (e.g., *\"See paragraphs 2 and 4.\"*) and case law (e.g., *\"See Draci vs. Russia (2010)\"*.). We identified references to each case's facts and retrieved the corresponding paragraphs using regular expressions. These are included in the dataset as silver allegation rationales, on the grounds that the judges refer to these paragraphs when ruling on the allegations.\n\n\nGold allegation rationales: A legal expert with experience in ECtHR cases annotated a subset of 50 test cases to identify the relevant facts (paragraphs) of the case that support the allegations (alleged article violations). 
In other words, each identified fact justifies (hints) one or more alleged violations.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports:\n\n\nAlleged violation prediction ('alleged-violation-prediction'): A multi-label text classification task where, given the facts of a ECtHR case, a model predicts which of the 40 violable ECHR articles were allegedly violated according to the applicant(s). Consult Chalkidis et al. (2021), for details.\n\n\nViolation prediction ('violation-prediction'): A multi-label text classification task where, given the facts of a ECtHR case, a model predicts which of the allegedly violated ECHR articles were violated, as decided (ruled) by the ECtHR court. Consult Chalkidis et al. (2019), for details.\n\n\nRationale extraction: A model can also predict the facts of the case that most prominently support its decision with respect to a classification task. Silver rationales can be used for both classification tasks, while gold rationales are only focused on the *alleged violation prediction* task.", "### Languages\n\n\nAll documents are written in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThis example was too long and was cropped:", "### Data Fields\n\n\n'facts': (List[str]) The paragraphs (facts) of the case. \n\n'labels': (List[str]) The ECHR articles under discussion (*Allegedly violated articles*); or the allegedly violated ECHR articles that found to be violated by the court (judges). \n\n'silver\\_rationales': (List[int]) Indices of the paragraphs (facts) that are present in the court's assessment. \n\n'gold\\_rationales': (List[int]) Indices of the paragraphs (facts) that support alleged violations, according to a legal expert.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by Chalkidis et al. (2021). \n\nThe annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School).", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe original data are available at HUDOC database (URL in an unprocessed format. The data were downloaded and all information was extracted from the HTML files and several JSON metadata files.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\n* The original documents are available in HTML format at HUDOC database (URL except the gold rationales. The metadata are provided by additional JSON files, produced by REST services.\n* The annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School).", "#### Who are the annotators?\n\n\nDimitris Tsarapatsanis (Lecturer, York Law School).", "### Personal and Sensitive Information\n\n\nPrivacy statement / Protection of personal data from HUDOC (URL\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe publication of this dataset complies with the ECtHR data policy (URL\n\n\nBy no means do we aim to build a 'robot' lawyer or judge, and we acknowledge the possible harmful impact (Angwin et al., 2016, Dressel et al., 2018) of irresponsible deployment.\nInstead, we aim to support fair and explainable AI-assisted judicial decision making and empirical legal studies.\n\n\nFor example, automated services can help applicants (plaintiffs) identify alleged violations that are supported by the facts of a case. 
They can help judges identify more quickly facts that support the alleged violations, contributing towards more informed judicial decision making (Zhong et al., 2020). They can also help legal experts identify previous cases related to particular allegations, helping analyze case law (Katz et al., 2012).\n\n\nAlso, consider ongoing critical research on responsible AI (Elish et al., 2021) that aims to provide explainable and fair systems to support human experts.", "### Discussion of Biases\n\n\nConsider the work of Chalkidis et al. (2019) for the identification of demographic bias by models.", "### Other Known Limitations\n\n\nN/A\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nIlias Chalkidis and Dimitris Tsarapatsanis", "### Licensing Information\n\n\nCC BY-NC-SA (Creative Commons / Attribution-NonCommercial-ShareAlike)\n\n\nRead more: URL\n\n\n*Ilias Chalkidis, Manos Fergadiotis, Dimitrios Tsarapatsanis, Nikolaos Aletras, Ion Androutsopoulos and Prodromos Malakasiotis. Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases.*\n*Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021). Mexico City, Mexico. 2021.*\n\n\n*Ilias Chalkidis, Ion Androutsopoulos and Nikolaos Aletras. Neural Legal Judgment Prediction in English.*\n*Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019.*", "### Contributions\n\n\nThanks to @iliaschalkidis for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #rationale-extraction #legal-judgment-prediction #arxiv-2103.13084 #region-us \n", "### Dataset Summary\n\n\nThe European Court of Human Rights (ECtHR) hears allegations regarding breaches in human rights provisions of the European Convention of Human Rights (ECHR) by European states. The Convention is available at URL\nThe court rules on a subset of all ECHR articles, which are predefined (alleged) by the applicants (*plaintiffs*).\n\n\nOur dataset comprises 11k ECtHR cases and can be viewed as an enriched version of the ECtHR dataset of Chalkidis et al. (2019), which did not provide ground truth for alleged article violations (articles discussed) and rationales. The new dataset includes the following:\n\n\nFacts: Each judgment includes a list of paragraphs that represent the facts of the case, i.e., they describe the main events that are relevant to the case, in numbered paragraphs. We hereafter call these paragraphs *facts* for simplicity. Note that the facts are presented in chronological order. Not all facts have the same impact or hold crucial information with respect to alleged article violations and the court's assessment; i.e., facts may refer to information that is trivial or otherwise irrelevant to the legally crucial allegations against *defendant* states.\n\n\nAllegedly violated articles: Judges rule on specific accusations (allegations) made by the applicants (Harris, 2018). In ECtHR cases, the judges discuss and rule on the violation, or not, of specific articles of the Convention. The articles to be discussed (and ruled on) are put forward (as alleged article violations) by the applicants and are included in the dataset as ground truth; we identify 40 violable articles in total. The rest of the articles are procedural, i.e., the number of judges, criteria for office, election of judges, etc. In our experiments, however, the models are not aware of the allegations. They predict the Convention articles that will be discussed (the allegations) based on the case's facts, and they also produce rationales for their predictions. Models of this kind could be used by potential applicants to help them formulate future allegations (articles they could claim to have been violated), as already noted, but here we mainly use the task as a test-bed for rationale extraction.\n\n\nViolated articles: The court decides which allegedly violated articles have indeed been violated. These decisions are also included in our dataset and could be used for full legal judgment prediction experiments (Chalkidis et al., 2019). However, they are not used in the experiments of this work.\n\n\nSilver allegation rationales: Each decision of the ECtHR includes references to facts of the case (e.g., *\"See paragraphs 2 and 4.\"*) and case law (e.g., *\"See Draci vs. Russia (2010)\"*.). We identified references to each case's facts and retrieved the corresponding paragraphs using regular expressions. 
These are included in the dataset as silver allegation rationales, on the grounds that the judges refer to these paragraphs when ruling on the allegations.\n\n\nGold allegation rationales: A legal expert with experience in ECtHR cases annotated a subset of 50 test cases to identify the relevant facts (paragraphs) of the case that support the allegations (alleged article violations). In other words, each identified fact justifies (hints) one or more alleged violations.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports:\n\n\nAlleged violation prediction ('alleged-violation-prediction'): A multi-label text classification task where, given the facts of a ECtHR case, a model predicts which of the 40 violable ECHR articles were allegedly violated according to the applicant(s). Consult Chalkidis et al. (2021), for details.\n\n\nViolation prediction ('violation-prediction'): A multi-label text classification task where, given the facts of a ECtHR case, a model predicts which of the allegedly violated ECHR articles were violated, as decided (ruled) by the ECtHR court. Consult Chalkidis et al. (2019), for details.\n\n\nRationale extraction: A model can also predict the facts of the case that most prominently support its decision with respect to a classification task. Silver rationales can be used for both classification tasks, while gold rationales are only focused on the *alleged violation prediction* task.", "### Languages\n\n\nAll documents are written in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThis example was too long and was cropped:", "### Data Fields\n\n\n'facts': (List[str]) The paragraphs (facts) of the case. \n\n'labels': (List[str]) The ECHR articles under discussion (*Allegedly violated articles*); or the allegedly violated ECHR articles that found to be violated by the court (judges). \n\n'silver\\_rationales': (List[int]) Indices of the paragraphs (facts) that are present in the court's assessment. \n\n'gold\\_rationales': (List[int]) Indices of the paragraphs (facts) that support alleged violations, according to a legal expert.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by Chalkidis et al. (2021). \n\nThe annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School).", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe original data are available at HUDOC database (URL in an unprocessed format. The data were downloaded and all information was extracted from the HTML files and several JSON metadata files.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\n* The original documents are available in HTML format at HUDOC database (URL except the gold rationales. 
The metadata are provided by additional JSON files, produced by REST services.\n* The annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School).", "#### Who are the annotators?\n\n\nDimitris Tsarapatsanis (Lecturer, York Law School).", "### Personal and Sensitive Information\n\n\nPrivacy statement / Protection of personal data from HUDOC (URL\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe publication of this dataset complies with the ECtHR data policy (URL\n\n\nBy no means do we aim to build a 'robot' lawyer or judge, and we acknowledge the possible harmful impact (Angwin et al., 2016, Dressel et al., 2018) of irresponsible deployment.\nInstead, we aim to support fair and explainable AI-assisted judicial decision making and empirical legal studies.\n\n\nFor example, automated services can help applicants (plaintiffs) identify alleged violations that are supported by the facts of a case. They can help judges identify more quickly facts that support the alleged violations, contributing towards more informed judicial decision making (Zhong et al., 2020). They can also help legal experts identify previous cases related to particular allegations, helping analyze case law (Katz et al., 2012).\n\n\nAlso, consider ongoing critical research on responsible AI (Elish et al., 2021) that aims to provide explainable and fair systems to support human experts.", "### Discussion of Biases\n\n\nConsider the work of Chalkidis et al. (2019) for the identification of demographic bias by models.", "### Other Known Limitations\n\n\nN/A\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nIlias Chalkidis and Dimitris Tsarapatsanis", "### Licensing Information\n\n\nCC BY-NC-SA (Creative Commons / Attribution-NonCommercial-ShareAlike)\n\n\nRead more: URL\n\n\n*Ilias Chalkidis, Manos Fergadiotis, Dimitrios Tsarapatsanis, Nikolaos Aletras, Ion Androutsopoulos and Prodromos Malakasiotis. Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases.*\n*Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021). Mexico City, Mexico. 2021.*\n\n\n*Ilias Chalkidis, Ion Androutsopoulos and Nikolaos Aletras. Neural Legal Judgment Prediction in English.*\n*Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019.*", "### Contributions\n\n\nThanks to @iliaschalkidis for adding this dataset." ]
[ 130, 798, 240, 18, 16, 149, 11, 51, 4, 52, 10, 5, 73, 24, 31, 230, 33, 17, 18, 203, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #rationale-extraction #legal-judgment-prediction #arxiv-2103.13084 #region-us \n", "passage: ### Dataset Summary\n\n\nThe European Court of Human Rights (ECtHR) hears allegations regarding breaches in human rights provisions of the European Convention of Human Rights (ECHR) by European states. The Convention is available at URL\nThe court rules on a subset of all ECHR articles, which are predefined (alleged) by the applicants (*plaintiffs*).\n\n\nOur dataset comprises 11k ECtHR cases and can be viewed as an enriched version of the ECtHR dataset of Chalkidis et al. (2019), which did not provide ground truth for alleged article violations (articles discussed) and rationales. The new dataset includes the following:\n\n\nFacts: Each judgment includes a list of paragraphs that represent the facts of the case, i.e., they describe the main events that are relevant to the case, in numbered paragraphs. We hereafter call these paragraphs *facts* for simplicity. Note that the facts are presented in chronological order. Not all facts have the same impact or hold crucial information with respect to alleged article violations and the court's assessment; i.e., facts may refer to information that is trivial or otherwise irrelevant to the legally crucial allegations against *defendant* states.\n\n\nAllegedly violated articles: Judges rule on specific accusations (allegations) made by the applicants (Harris, 2018). In ECtHR cases, the judges discuss and rule on the violation, or not, of specific articles of the Convention. The articles to be discussed (and ruled on) are put forward (as alleged article violations) by the applicants and are included in the dataset as ground truth; we identify 40 violable articles in total. The rest of the articles are procedural, i.e., the number of judges, criteria for office, election of judges, etc. In our experiments, however, the models are not aware of the allegations. They predict the Convention articles that will be discussed (the allegations) based on the case's facts, and they also produce rationales for their predictions. Models of this kind could be used by potential applicants to help them formulate future allegations (articles they could claim to have been violated), as already noted, but here we mainly use the task as a test-bed for rationale extraction.\n\n\nViolated articles: The court decides which allegedly violated articles have indeed been violated. These decisions are also included in our dataset and could be used for full legal judgment prediction experiments (Chalkidis et al., 2019). However, they are not used in the experiments of this work.\n\n\nSilver allegation rationales: Each decision of the ECtHR includes references to facts of the case (e.g., *\"See paragraphs 2 and 4.\"*) and case law (e.g., *\"See Draci vs. Russia (2010)\"*.). We identified references to each case's facts and retrieved the corresponding paragraphs using regular expressions. 
These are included in the dataset as silver allegation rationales, on the grounds that the judges refer to these paragraphs when ruling on the allegations.\n\n\nGold allegation rationales: A legal expert with experience in ECtHR cases annotated a subset of 50 test cases to identify the relevant facts (paragraphs) of the case that support the allegations (alleged article violations). In other words, each identified fact justifies (hints) one or more alleged violations.### Supported Tasks and Leaderboards\n\n\nThe dataset supports:\n\n\nAlleged violation prediction ('alleged-violation-prediction'): A multi-label text classification task where, given the facts of a ECtHR case, a model predicts which of the 40 violable ECHR articles were allegedly violated according to the applicant(s). Consult Chalkidis et al. (2021), for details.\n\n\nViolation prediction ('violation-prediction'): A multi-label text classification task where, given the facts of a ECtHR case, a model predicts which of the allegedly violated ECHR articles were violated, as decided (ruled) by the ECtHR court. Consult Chalkidis et al. (2019), for details.\n\n\nRationale extraction: A model can also predict the facts of the case that most prominently support its decision with respect to a classification task. Silver rationales can be used for both classification tasks, while gold rationales are only focused on the *alleged violation prediction* task.### Languages\n\n\nAll documents are written in English.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThis example was too long and was cropped:### Data Fields\n\n\n'facts': (List[str]) The paragraphs (facts) of the case. \n\n'labels': (List[str]) The ECHR articles under discussion (*Allegedly violated articles*); or the allegedly violated ECHR articles that found to be violated by the court (judges). \n\n'silver\\_rationales': (List[int]) Indices of the paragraphs (facts) that are present in the court's assessment. \n\n'gold\\_rationales': (List[int]) Indices of the paragraphs (facts) that support alleged violations, according to a legal expert.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThe dataset was curated by Chalkidis et al. (2021). \n\nThe annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School).### Source Data", "passage: #### Initial Data Collection and Normalization\n\n\nThe original data are available at HUDOC database (URL in an unprocessed format. The data were downloaded and all information was extracted from the HTML files and several JSON metadata files.#### Who are the source language producers?### Annotations#### Annotation process\n\n\n* The original documents are available in HTML format at HUDOC database (URL except the gold rationales. 
The metadata are provided by additional JSON files, produced by REST services.\n* The annotations for the gold rationales are available thanks to Dimitris Tsarapatsanis (Lecturer, York Law School).#### Who are the annotators?\n\n\nDimitris Tsarapatsanis (Lecturer, York Law School).### Personal and Sensitive Information\n\n\nPrivacy statement / Protection of personal data from HUDOC (URL\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nThe publication of this dataset complies with the ECtHR data policy (URL\n\n\nBy no means do we aim to build a 'robot' lawyer or judge, and we acknowledge the possible harmful impact (Angwin et al., 2016, Dressel et al., 2018) of irresponsible deployment.\nInstead, we aim to support fair and explainable AI-assisted judicial decision making and empirical legal studies.\n\n\nFor example, automated services can help applicants (plaintiffs) identify alleged violations that are supported by the facts of a case. They can help judges identify more quickly facts that support the alleged violations, contributing towards more informed judicial decision making (Zhong et al., 2020). They can also help legal experts identify previous cases related to particular allegations, helping analyze case law (Katz et al., 2012).\n\n\nAlso, consider ongoing critical research on responsible AI (Elish et al., 2021) that aims to provide explainable and fair systems to support human experts.### Discussion of Biases\n\n\nConsider the work of Chalkidis et al. (2019) for the identification of demographic bias by models.### Other Known Limitations\n\n\nN/A\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nIlias Chalkidis and Dimitris Tsarapatsanis" ]
4f8f90b0adb4988caa6f4cce2800f1bf0d484515
# Dataset Card for Eduge ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://eduge.mn/ - **Repository:** https://github.com/tugstugi/mongolian-nlp - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary Eduge news classification dataset provided by Bolorsoft LLC. Used to train the Eduge.mn production news classifier. The corpus contains 75K news articles in 9 categories: урлаг соёл, эдийн засаг, эрүүл мэнд, хууль, улс төр, спорт, технологи, боловсрол and байгал орчин. ### Supported Tasks and Leaderboards - `text-classification`: The dataset can be used as a 9-class topic classification task. ### Languages The text in the dataset is in Mongolian. ## Dataset Structure ### Data Instances For the `default` configuration: ``` { 'label': 0, # 'урлаг соёл' 'news': 'Шударга өрсөлдөөн, хэрэглэгчийн төлөө газар 2013 оны дөрөвдүгээр сараас эхлэн Монгол киноны ашиг орлогын мэдээллийг олон нийтэд хүргэж байгаа. Ингэснээр Монголын кино үйлдвэрлэгчид улсад ашиг орлогоо шударгаар төлөх, мөн чанартай уран бүтээлийн тоо өсөх боломж бүрдэж байгаа юм.', } ``` ### Data Fields - `news`: a complete news article on a specific topic as a string - `label`: the single class of the topic, among these values: "урлаг соёл" (0), "эдийн засаг" (1), "эрүүл мэнд" (2), "хууль" (3), "улс төр" (4), "спорт" (5), "технологи" (6), "боловсрол" (7), "байгал орчин" (8). ### Data Splits The set of complete articles is split into a training and test set. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? Eduge.mn, which is a combination of shuud.mn, ikon.mn, olloo.mn, news.gogo.mn, montsame.mn, zaluu.com, sonin.mn, medee.mn and bloombergtv.mn. ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information No citation available for this dataset. ### Contributions Thanks to [@enod](https://github.com/enod) for adding this dataset.
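The `news`/`label` schema documented above can be inspected directly with the Hugging Face `datasets` library. The snippet below is a minimal sketch, assuming the corpus is published on the Hub under the id `eduge` recorded in this entry and that `datasets` is installed; recent releases may additionally require `trust_remote_code=True` for script-based datasets.

```python
from datasets import load_dataset

# Minimal sketch: load the Eduge corpus by the Hub id given in this entry.
# Newer `datasets` releases may need load_dataset("eduge", trust_remote_code=True).
dataset = load_dataset("eduge")

print(dataset)                      # shows the train/test splits documented above
example = dataset["train"][0]
print(example["news"][:200])        # raw Mongolian article text

# Map the integer class id back to its category name via the ClassLabel feature.
label_feature = dataset["train"].features["label"]
print(label_feature.int2str(example["label"]))  # e.g. "урлаг соёл"
```

Resolving label names through the `ClassLabel` feature avoids hard-coding the nine category strings listed in the card.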
eduge
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:mn", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["mn"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "Eduge", "dataset_info": {"features": [{"name": "news", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "\u0443\u0440\u043b\u0430\u0433 \u0441\u043e\u0451\u043b", "1": "\u044d\u0434\u0438\u0439\u043d \u0437\u0430\u0441\u0430\u0433", "2": "\u044d\u0440\u04af\u04af\u043b \u043c\u044d\u043d\u0434", "3": "\u0445\u0443\u0443\u043b\u044c", "4": "\u0443\u043b\u0441 \u0442\u04e9\u0440", "5": "\u0441\u043f\u043e\u0440\u0442", "6": "\u0442\u0435\u0445\u043d\u043e\u043b\u043e\u0433\u0438", "7": "\u0431\u043e\u043b\u043e\u0432\u0441\u0440\u043e\u043b", "8": "\u0431\u0430\u0439\u0433\u0430\u043b \u043e\u0440\u0447\u0438\u043d"}}}}], "splits": [{"name": "train", "num_bytes": 255275842, "num_examples": 60528}, {"name": "test", "num_bytes": 64451731, "num_examples": 15133}], "download_size": 320395067, "dataset_size": 319727573}}
2024-01-18T11:02:56+00:00
[]
[ "mn" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Mongolian #license-unknown #region-us
# Dataset Card for Eduge ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Eduge news classification dataset provided by Bolorsoft LLC. Used to train the URL production news classifier 75K news articles in 9 categories: урлаг соёл, эдийн засаг, эрүүл мэнд, хууль, улс төр, спорт, технологи, боловсрол and байгал орчин ### Supported Tasks and Leaderboards - 'text-classification': We can transform the above into a 9-class classification task. ### Languages The text in the dataset is in Mongolian ## Dataset Structure ### Data Instances For the 'default' configuration: ### Data Fields - 'news': a complete news article on a specific topic as a string - 'label': the single class of the topic, among these values: "урлаг соёл" (0), "эдийн засаг" (1), "эрүүл мэнд" (2), "хууль" (3), "улс төр" (4), "спорт" (5), "технологи" (6), "боловсрол" (7), "байгал орчин" (8). ### Data Splits The set of complete articles is split into a training and test set. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? URL which is a combination from URL, URL, URL, URL, URL, URL, URL, URL, URL. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information No citation available for this dataset. ### Contributions Thanks to @enod for adding this dataset.
[ "# Dataset Card for Eduge", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nEduge news classification dataset provided by Bolorsoft LLC. Used to train the URL production news classifier\n75K news articles in 9 categories: урлаг соёл, эдийн засаг, эрүүл мэнд, хууль, улс төр, спорт, технологи, боловсрол and байгал орчин", "### Supported Tasks and Leaderboards\n\n- 'text-classification': We can transform the above into a 9-class classification task.", "### Languages\n\nThe text in the dataset is in Mongolian", "## Dataset Structure", "### Data Instances\n\nFor the 'default' configuration:", "### Data Fields\n\n- 'news': a complete news article on a specific topic as a string\n- 'label': the single class of the topic, among these values: \"урлаг соёл\" (0), \"эдийн засаг\" (1), \"эрүүл мэнд\" (2), \"хууль\" (3), \"улс төр\" (4), \"спорт\" (5), \"технологи\" (6), \"боловсрол\" (7), \"байгал орчин\" (8).", "### Data Splits\n\nThe set of complete articles is split into a training and test set.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\nURL which is a combination from URL, URL, URL, URL, URL, URL, URL, URL, URL.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\nNo citation available for this dataset.", "### Contributions\n\nThanks to @enod for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Mongolian #license-unknown #region-us \n", "# Dataset Card for Eduge", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nEduge news classification dataset provided by Bolorsoft LLC. Used to train the URL production news classifier\n75K news articles in 9 categories: урлаг соёл, эдийн засаг, эрүүл мэнд, хууль, улс төр, спорт, технологи, боловсрол and байгал орчин", "### Supported Tasks and Leaderboards\n\n- 'text-classification': We can transform the above into a 9-class classification task.", "### Languages\n\nThe text in the dataset is in Mongolian", "## Dataset Structure", "### Data Instances\n\nFor the 'default' configuration:", "### Data Fields\n\n- 'news': a complete news article on a specific topic as a string\n- 'label': the single class of the topic, among these values: \"урлаг соёл\" (0), \"эдийн засаг\" (1), \"эрүүл мэнд\" (2), \"хууль\" (3), \"улс төр\" (4), \"спорт\" (5), \"технологи\" (6), \"боловсрол\" (7), \"байгал орчин\" (8).", "### Data Splits\n\nThe set of complete articles is split into a training and test set.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\nURL which is a combination from URL, URL, URL, URL, URL, URL, URL, URL, URL.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\nNo citation available for this dataset.", "### Contributions\n\nThanks to @enod for adding this dataset." ]
[ 94, 7, 120, 26, 62, 31, 14, 6, 13, 97, 19, 5, 7, 4, 10, 34, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 15, 16 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Mongolian #license-unknown #region-us \n# Dataset Card for Eduge## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nEduge news classification dataset provided by Bolorsoft LLC. Used to train the URL production news classifier\n75K news articles in 9 categories: урлаг соёл, эдийн засаг, эрүүл мэнд, хууль, улс төр, спорт, технологи, боловсрол and байгал орчин### Supported Tasks and Leaderboards\n\n- 'text-classification': We can transform the above into a 9-class classification task.### Languages\n\nThe text in the dataset is in Mongolian## Dataset Structure### Data Instances\n\nFor the 'default' configuration:### Data Fields\n\n- 'news': a complete news article on a specific topic as a string\n- 'label': the single class of the topic, among these values: \"урлаг соёл\" (0), \"эдийн засаг\" (1), \"эрүүл мэнд\" (2), \"хууль\" (3), \"улс төр\" (4), \"спорт\" (5), \"технологи\" (6), \"боловсрол\" (7), \"байгал орчин\" (8).### Data Splits\n\nThe set of complete articles is split into a training and test set.## Dataset Creation### Curation Rationale### Source Data" ]
018cc323e16563fbccb1e3210d2c80cf7a3a313a
# Dataset Card for eHealth-KD ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [eHealth-KD homepage](https://knowledge-learning.github.io/ehealthkd-2020/) - **Repository:** [eHealth-KD repository](https://github.com/knowledge-learning/ehealthkd-2020) - **Paper:** [eHealth-KD overview paper](http://ceur-ws.org/Vol-2664/eHealth-KD_overview.pdf) - **Leaderboard:** [eHealth-KD Challenge 2020 official results](https://knowledge-learning.github.io/ehealthkd-2020/results) - **Point of Contact:** [Yoan Gutiérrez Vázquez](mailto:ygutierrez@dlsi.ua.es) (Organization Committee), [María Grandury](mailto:yacine@huggingface.co) (Dataset Submitter) ### Dataset Summary Dataset of the eHealth-KD Challenge at IberLEF 2020. It is designed for the identification of semantic entities and relations in Spanish health documents. ### Supported Tasks and Leaderboards The eHealth-KD challenge proposes two computational subtasks: - `named-entity-recognition`: Given a sentence of an eHealth document written in Spanish, the goal of this subtask is to identify all the entities and their types. - `relation-prediction`: The purpose of this subtask is to recognise all relevant semantic relationships between the entities recognised. For an analysis of the most successful approaches of this challenge, read the [eHealth-KD overview paper](http://ceur-ws.org/Vol-2664/eHealth-KD_overview.pdf). ### Languages The text in the dataset is in Spanish (BCP-47 code: `es`). ## Dataset Structure ### Data Instances The first example of the eHealth-KD Corpus train set looks as follows: ``` { 'sentence': 'En la leucemia linfocítica crónica, hay demasiados linfocitos, un tipo de glóbulos blancos.', 'entities': [ { 'ent_id': 'T1', 'ent_text': 'leucemia linfocítica crónica', 'ent_label': 0, 'start_character': 6, 'end_character': 34 }, { 'ent_id': 'T2', 'ent_text': 'linfocitos', 'ent_label': 0, 'start_character': 51, 'end_character': 61 }, { 'ent_id': 'T3', 'ent_text': 'glóbulos blancos', 'ent_label': 0, 'start_character': 74, 'end_character': 90 } ], 'relations': [ { 'rel_id': 'R0', 'rel_label': 0, 'arg1': 'T2', 'arg2': 'T3' }, { 'rel_id': 'R1', 'rel_label': 5, 'arg1': 'T1', 'arg2': 'T2' } ] } ``` ### Data Fields - `sentence`: sentence of an eHealth document written in Spanish - `entities`: list of entities identified in the sentence - `ent_id`: entity identifier (`T` + a number) - `ent_text`: entity, can consist of one or more complete words (i.e., not a prefix or a suffix of a word), and will never include any surrounding punctuation symbols, parentheses, etc.
- `ent_label`: type of entity (`Concept`, `Action`, `Predicate` or `Reference`) - `start_character`: position of the first character of the entity - `end_character`: position of the last character of the entity - `relations`: list of semantic relationships between the entities recognised - `rel_id`: relation identifier (`R` + a number) - `rel_label`: type of relation, can be a general relation (`is-a`, `same-as`, `has-property`, `part-of`, `causes`, `entails`), a contextual relation (`in-time`, `in-place`, `in-context`), an action role (`subject`, `target`) or a predicate role (`domain`, `arg`). - `arg1`: ID of the first entity of the relation - `arg2`: ID of the second entity of the relation For more information about the types of entities and relations, click [here](https://knowledge-learning.github.io/ehealthkd-2020/tasks). ### Data Splits The data is split into a training, validation and test set. The split sizes are as follows: | | Train | Val | Test | | ----- | ------ | ----- | ---- | | eHealth-KD 2020 | 800 | 199 | 100 | In the challenge there are 4 different scenarios for testing. The test data of this dataset corresponds to the third scenario. More information about the testing data is available [here](https://github.com/knowledge-learning/ehealthkd-2020/tree/master/data/testing). ## Dataset Creation ### Curation Rationale The vast amount of clinical text available online has motivated the development of automatic knowledge discovery systems that can analyse this data and discover relevant facts. The eHealth Knowledge Discovery (eHealth-KD) challenge, in its third edition, leverages a semantic model of human language that encodes the most common expressions of factual knowledge, via a set of four general-purpose entity types and thirteen semantic relations among them. The challenge proposes the design of systems that can automatically annotate entities and relations in clinical text in the Spanish language. ### Source Data #### Initial Data Collection and Normalization As in the previous edition, the corpus for eHealth-KD 2020 has been extracted from MedlinePlus sources. This platform freely provides large health textual data from which we have made a selection for constituting the eHealth-KD corpus. The selection has been made by sampling specific XML files from the collection available in the [Medline website](https://medlineplus.gov/xml.html). ``` “MedlinePlus is the National Institutes of Health’s Website for patients and their families and friends. Produced by the National Library of Medicine, the world’s largest medical library, it brings you information about diseases, conditions, and wellness issues in language you can understand. MedlinePlus offers reliable, up-to-date health information, anytime, anywhere, for free.” ``` These files contain several entries related to health and medicine topics and have been processed to remove all XML markup and extract the textual content. Only Spanish language items were considered. Once cleaned, each individual item was converted to a plain text document, and some further post-processing was applied to remove unwanted sentences, such as headers, footers and similar elements, and to flatten HTML lists into plain sentences. #### Who are the source language producers? As in the previous edition, the corpus for eHealth-KD 2020 was extracted from [MedlinePlus](https://medlineplus.gov/xml.html) sources.
### Annotations #### Annotation process Once the MedlinePlus files were cleaned, they were manually tagged using [BRAT](http://brat.nlplab.org/) by a group of annotators. After tagging, post-processing was applied to BRAT’s output files (ANN format) to obtain the output files in the formats needed for the challenge. #### Who are the annotators? The data was manually tagged. ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset "The eHealth-KD 2020 proposes –as the previous editions– modeling the human language in a scenario in which Spanish electronic health documents could be machine-readable from a semantic point of view. With this task, we expect to encourage the development of software technologies to automatically extract a large variety of knowledge from eHealth documents written in the Spanish Language." ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check the dataset license for additional information. ## Additional Information ### Dataset Curators #### Organization Committee | Name | Email | Institution | |:---------------------------------------:|:---------------------:|:-----------------------------:| | Yoan Gutiérrez Vázquez (contact person) | ygutierrez@dlsi.ua.es | University of Alicante, Spain | | Suilan Estévez Velarde | sestevez@matcom.uh.cu | University of Havana, Cuba | | Alejandro Piad Morffis | apiad@matcom.uh.cu | University of Havana, Cuba | | Yudivián Almeida Cruz | yudy@matcom.uh.cu | University of Havana, Cuba | | Andrés Montoyo Guijarro | montoyo@dlsi.ua.es | University of Alicante, Spain | | Rafael Muñoz Guillena | rafael@dlsi.ua.es | University of Alicante, Spain | #### Funding This research has been supported by a Carolina Foundation grant in agreement with the University of Alicante and the University of Havana. Moreover, it has also been partially funded by both aforementioned universities, IUII, Generalitat Valenciana, Spanish Government, Ministerio de Educación, Cultura y Deporte through the projects SIIA (PROMETEU/2018/089) and LIVINGLANG (RTI2018-094653-B-C22). ### Licensing Information This dataset is under the Attribution-NonCommercial-ShareAlike 4.0 International license [(CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). To accept the distribution terms, please fill in the following [form](https://forms.gle/pUJutSDq2FYLwNWQA). ### Citation Information At the following link you can find the [preliminary BibTeX entries of the systems’ working notes](https://knowledge-learning.github.io/ehealthkd-2020/shared/eHealth-KD_2020_bibtexts.zip). In addition, to cite the eHealth-KD challenge you can use the following preliminary BibTeX entry: ``` @inproceedings{overview_ehealthkd2020, author = {Piad{-}Morffis, Alejandro and Guti{\'{e}}rrez, Yoan and Ca{\~{n}}izares-Diaz, Hian and Estevez{-}Velarde, Suilan and Almeida{-}Cruz, Yudivi{\'{a}}n and Mu{\~{n}}oz, Rafael and Montoyo, Andr{\'{e}}s}, title = {Overview of the eHealth Knowledge Discovery Challenge at IberLEF 2020}, booktitle = , year = {2020}, } ``` ### Contributions Thanks to [@mariagrandury](https://github.com/mariagrandury) for adding this dataset.
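Because `entities` and `relations` are nested list fields, a short access example may be clearer than prose alone. The following is a sketch only, assuming the dataset is reachable on the Hugging Face Hub under the id `ehealth_kd` used by this entry (the distribution terms above still apply) and that the `datasets` library is installed.

```python
from datasets import load_dataset

# Sketch: load by the Hub id used in this entry; script-based loaders may
# additionally require trust_remote_code=True on newer `datasets` versions.
dataset = load_dataset("ehealth_kd")

example = dataset["train"][0]
print(example["sentence"])

# Each entity is a dict with the fields documented under "Data Fields".
for entity in example["entities"]:
    # ent_label is a class id; per this entry's metadata, 0-3 map to
    # Concept, Action, Predicate and Reference.
    print(entity["ent_id"], repr(entity["ent_text"]), entity["ent_label"])

# Relations point at entity ids via arg1/arg2.
for relation in example["relations"]:
    print(relation["rel_id"], relation["arg1"], "->", relation["arg2"], relation["rel_label"])
```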
ehealth_kd
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:es", "license:cc-by-nc-sa-4.0", "relation-prediction", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["es"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "eHealth-KD", "tags": ["relation-prediction"], "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "entities", "list": [{"name": "ent_id", "dtype": "string"}, {"name": "ent_text", "dtype": "string"}, {"name": "ent_label", "dtype": {"class_label": {"names": {"0": "Concept", "1": "Action", "2": "Predicate", "3": "Reference"}}}}, {"name": "start_character", "dtype": "int32"}, {"name": "end_character", "dtype": "int32"}]}, {"name": "relations", "list": [{"name": "rel_id", "dtype": "string"}, {"name": "rel_label", "dtype": {"class_label": {"names": {"0": "is-a", "1": "same-as", "2": "has-property", "3": "part-of", "4": "causes", "5": "entails", "6": "in-time", "7": "in-place", "8": "in-context", "9": "subject", "10": "target", "11": "domain", "12": "arg"}}}}, {"name": "arg1", "dtype": "string"}, {"name": "arg2", "dtype": "string"}]}], "config_name": "ehealth_kd", "splits": [{"name": "train", "num_bytes": 425713, "num_examples": 800}, {"name": "validation", "num_bytes": 108154, "num_examples": 199}, {"name": "test", "num_bytes": 47314, "num_examples": 100}], "download_size": 565900, "dataset_size": 581181}}
2024-01-18T11:02:59+00:00
[]
[ "es" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Spanish #license-cc-by-nc-sa-4.0 #relation-prediction #region-us
Dataset Card for eHealth-KD =========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: eHealth-KD homepage * Repository: eHealth-KD repository * Paper: eHealth-KD overview paper * Leaderboard: eHealth-KD Challenge 2020 official results * Point of Contact: Yoan Gutiérrez Vázquez (Organization Committee), María Grandury (Dataset Submitter) ### Dataset Summary Dataset of the eHealth-KD Challenge at IberLEF 2020. It is designed for the identification of semantic entities and relations in Spanish health documents. ### Supported Tasks and Leaderboards The eHealth-KD challenge proposes two computational subtasks: * 'named-entity-recognition': Given a sentence of an eHealth document written in Spanish, the goal of this subtask is to identify all the entities and their types. * 'relation-prediction': The purpose of this subtask is to recognise all relevant semantic relationships between the entities recognised. For an analysis of the most successful approaches of this challenge, read the eHealth-KD overview paper. ### Languages The text in the dataset is in Spanish (BCP-47 code: 'es'). Dataset Structure ----------------- ### Data Instances The first example of the eHeatlh-KD Corpus train set looks as follows: ### Data Fields * 'sentence': sentence of an eHealth document written in Spanish * 'entities': list of entities identified in the sentence + 'ent\_id': entity identifier ('T'+ a number) + 'ent\_text': entity, can consist of one or more complete words (i.e., not a prefix or a suffix of a word), and will never include any surrounding punctuation symbols, parenthesis, etc. + 'ent\_label': type of entity ('Concept', 'Action', 'Predicate' or 'Reference') + 'start\_character': position of the first character of the entity + 'end\_character': position of the last character of the entity * 'relations': list of semantic relationships between the entities recognised + 'rel\_id': relation identifier ('R' + a number) + 'rel\_label': type of relation, can be a general relation ('is-a', 'same-as', 'has-property', 'part-of', 'causes', 'entails'), a contextual relation ('in-time', 'in-place', 'in-context') an action role ('subject', 'target') or a predicate role ('domain', 'arg'). + 'arg1': ID of the first entity of the relation + 'arg2': ID of the second entity of the relation For more information about the types of entities and relations, click here. ### Data Splits The data is split into a training, validation and test set. The split sizes are as follow: In the challenge there are 4 different scenarios for testing. The test data of this dataset corresponds to the third scenario. More information about the testing data here. Dataset Creation ---------------- ### Curation Rationale The vast amount of clinical text available online has motivated the development of automatic knowledge discovery systems that can analyse this data and discover relevant facts. 
The eHealth Knowledge Discovery (eHealth-KD) challenge, in its third edition, leverages a semantic model of human language that encodes the most common expressions of factual knowledge, via a set of four general-purpose entity types and thirteen semantic relations among them. The challenge proposes the design of systems that can automatically annotate entities and relations in clinical text in the Spanish language. ### Source Data #### Initial Data Collection and Normalization As in the previous edition, the corpus for eHealth-KD 2020 has been extracted from MedlinePlus sources. This platform freely provides large health textual data from which we have made a selection for constituting the eHealth-KD corpus. The selection has been made by sampling specific XML files from the collection available in the Medline website. These files contain several entries related to health and medicine topics and have been processed to remove all XML markup to extract the textual content. Only Spanish language items were considered. Once cleaned, each individual item was converted to a plain text document, and some further post-processing is applied to remove unwanted sentences, such as headers, footers and similar elements, and to flatten HTML lists into plain sentences. #### Who are the source language producers? As in the previous edition, the corpus for eHealth-KD 2020 was extracted from MedlinePlus sources. ### Annotations #### Annotation process Once the MedlinePlus files were cleaned, they were manually tagged using BRAT by a group of annotators. After tagging, a post-processing was applied to BRAT’s output files (ANN format) to obtain the output files in the formats needed for the challenge. #### Who are the annotators? The data was manually tagged. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset "The eHealth-KD 2020 proposes –as the previous editions– modeling the human language in a scenario in which Spanish electronic health documents could be machine-readable from a semantic point of view. With this task, we expect to encourage the development of software technologies to automatically extract a large variety of knowledge from eHealth documents written in the Spanish Language." ### Discussion of Biases ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. Additional Information ---------------------- ### Dataset Curators #### Organization Committee #### Funding This research has been supported by a Carolina Foundation grant in agreement with University of Alicante and University of Havana. Moreover, it has also been partially funded by both aforementioned universities, IUII, Generalitat Valenciana, Spanish Government, Ministerio de Educación, Cultura y Deporte through the projects SIIA (PROMETEU/2018/089) and LIVINGLANG (RTI2018-094653-B-C22). ### Licensing Information This dataset is under the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0). To accept the distribution terms, please fill in the following form. In the following link you can find the preliminar bibtexts of the systems’ working-notes. In addition, to cite the eHealth-KD challenge you can use the following preliminar bibtext: ### Contributions Thanks to @mariagrandury for adding this dataset.
[ "### Dataset Summary\n\n\nDataset of the eHealth-KD Challenge at IberLEF 2020. It is designed for the identification of semantic\nentities and relations in Spanish health documents.", "### Supported Tasks and Leaderboards\n\n\nThe eHealth-KD challenge proposes two computational subtasks:\n\n\n* 'named-entity-recognition': Given a sentence of an eHealth document written in Spanish, the goal of this subtask is to\nidentify all the entities and their types.\n* 'relation-prediction': The purpose of this subtask is to recognise all relevant semantic relationships between the entities recognised.\n\n\nFor an analysis of the most successful approaches of this challenge, read the eHealth-KD overview paper.", "### Languages\n\n\nThe text in the dataset is in Spanish (BCP-47 code: 'es').\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe first example of the eHeatlh-KD Corpus train set looks as follows:", "### Data Fields\n\n\n* 'sentence': sentence of an eHealth document written in Spanish\n* 'entities': list of entities identified in the sentence\n\t+ 'ent\\_id': entity identifier ('T'+ a number)\n\t+ 'ent\\_text': entity, can consist of one or more complete words (i.e., not a prefix or a suffix of a word), and will\n\tnever include any surrounding punctuation symbols, parenthesis, etc.\n\t+ 'ent\\_label': type of entity ('Concept', 'Action', 'Predicate' or 'Reference')\n\t+ 'start\\_character': position of the first character of the entity\n\t+ 'end\\_character': position of the last character of the entity\n* 'relations': list of semantic relationships between the entities recognised\n\t+ 'rel\\_id': relation identifier ('R' + a number)\n\t+ 'rel\\_label': type of relation, can be a general relation ('is-a', 'same-as', 'has-property', 'part-of', 'causes', 'entails'),\n\ta contextual relation ('in-time', 'in-place', 'in-context') an action role ('subject', 'target') or a predicate role ('domain', 'arg').\n\t+ 'arg1': ID of the first entity of the relation\n\t+ 'arg2': ID of the second entity of the relation\n\n\nFor more information about the types of entities and relations, click here.", "### Data Splits\n\n\nThe data is split into a training, validation and test set. The split sizes are as follow:\n\n\n\nIn the challenge there are 4 different scenarios for testing. The test data of this dataset corresponds to the third scenario.\nMore information about the testing data here.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe vast amount of clinical text available online has motivated the development of automatic\nknowledge discovery systems that can analyse this data and discover relevant facts.\n\n\nThe eHealth Knowledge Discovery (eHealth-KD) challenge, in its third edition, leverages\na semantic model of human language that encodes the most common expressions of factual\nknowledge, via a set of four general-purpose entity types and thirteen semantic relations among\nthem. The challenge proposes the design of systems that can automatically annotate entities and\nrelations in clinical text in the Spanish language.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nAs in the previous edition, the corpus for eHealth-KD 2020 has been extracted from MedlinePlus sources. 
This platform\nfreely provides large health textual data from which we have made a selection for constituting the eHealth-KD corpus.\nThe selection has been made by sampling specific XML files from the collection available in the Medline website.\n\n\nThese files contain several entries related to health and medicine topics and have been processed to remove all\nXML markup to extract the textual content. Only Spanish language items were considered. Once cleaned, each individual\nitem was converted to a plain text document, and some further post-processing is applied to remove unwanted sentences,\nsuch as headers, footers and similar elements, and to flatten HTML lists into plain sentences.", "#### Who are the source language producers?\n\n\nAs in the previous edition, the corpus for eHealth-KD 2020 was extracted from MedlinePlus sources.", "### Annotations", "#### Annotation process\n\n\nOnce the MedlinePlus files were cleaned, they were manually tagged using BRAT by a group of\nannotators. After tagging, a post-processing was applied to BRAT’s output files (ANN format) to obtain the output files\nin the formats needed for the challenge.", "#### Who are the annotators?\n\n\nThe data was manually tagged.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\n\"The eHealth-KD 2020 proposes –as the previous editions– modeling the human language in a scenario in which Spanish\nelectronic health documents could be machine-readable from a semantic point of view.\n\n\nWith this task, we expect to encourage the development of software technologies to automatically extract a large variety\nof knowledge from eHealth documents written in the Spanish Language.\"", "### Discussion of Biases", "### Other Known Limitations\n\n\nDataset provided for research purposes only. Please check dataset license for additional information.\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "#### Organization Committee", "#### Funding\n\n\nThis research has been supported by a Carolina Foundation grant in agreement with University of Alicante and University\nof Havana. Moreover, it has also been partially funded by both aforementioned universities, IUII, Generalitat Valenciana,\nSpanish Government, Ministerio de Educación, Cultura y Deporte through the projects SIIA (PROMETEU/2018/089) and\nLIVINGLANG (RTI2018-094653-B-C22).", "### Licensing Information\n\n\nThis dataset is under the Attribution-NonCommercial-ShareAlike 4.0 International\n(CC BY-NC-SA 4.0).\n\n\nTo accept the distribution terms, please fill in the following form.\n\n\nIn the following link you can find the\npreliminar bibtexts of the systems’ working-notes.\nIn addition, to cite the eHealth-KD challenge you can use the following preliminar bibtext:", "### Contributions\n\n\nThanks to @mariagrandury for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Spanish #license-cc-by-nc-sa-4.0 #relation-prediction #region-us \n", "### Dataset Summary\n\n\nDataset of the eHealth-KD Challenge at IberLEF 2020. It is designed for the identification of semantic\nentities and relations in Spanish health documents.", "### Supported Tasks and Leaderboards\n\n\nThe eHealth-KD challenge proposes two computational subtasks:\n\n\n* 'named-entity-recognition': Given a sentence of an eHealth document written in Spanish, the goal of this subtask is to\nidentify all the entities and their types.\n* 'relation-prediction': The purpose of this subtask is to recognise all relevant semantic relationships between the entities recognised.\n\n\nFor an analysis of the most successful approaches of this challenge, read the eHealth-KD overview paper.", "### Languages\n\n\nThe text in the dataset is in Spanish (BCP-47 code: 'es').\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe first example of the eHeatlh-KD Corpus train set looks as follows:", "### Data Fields\n\n\n* 'sentence': sentence of an eHealth document written in Spanish\n* 'entities': list of entities identified in the sentence\n\t+ 'ent\\_id': entity identifier ('T'+ a number)\n\t+ 'ent\\_text': entity, can consist of one or more complete words (i.e., not a prefix or a suffix of a word), and will\n\tnever include any surrounding punctuation symbols, parenthesis, etc.\n\t+ 'ent\\_label': type of entity ('Concept', 'Action', 'Predicate' or 'Reference')\n\t+ 'start\\_character': position of the first character of the entity\n\t+ 'end\\_character': position of the last character of the entity\n* 'relations': list of semantic relationships between the entities recognised\n\t+ 'rel\\_id': relation identifier ('R' + a number)\n\t+ 'rel\\_label': type of relation, can be a general relation ('is-a', 'same-as', 'has-property', 'part-of', 'causes', 'entails'),\n\ta contextual relation ('in-time', 'in-place', 'in-context') an action role ('subject', 'target') or a predicate role ('domain', 'arg').\n\t+ 'arg1': ID of the first entity of the relation\n\t+ 'arg2': ID of the second entity of the relation\n\n\nFor more information about the types of entities and relations, click here.", "### Data Splits\n\n\nThe data is split into a training, validation and test set. The split sizes are as follow:\n\n\n\nIn the challenge there are 4 different scenarios for testing. The test data of this dataset corresponds to the third scenario.\nMore information about the testing data here.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe vast amount of clinical text available online has motivated the development of automatic\nknowledge discovery systems that can analyse this data and discover relevant facts.\n\n\nThe eHealth Knowledge Discovery (eHealth-KD) challenge, in its third edition, leverages\na semantic model of human language that encodes the most common expressions of factual\nknowledge, via a set of four general-purpose entity types and thirteen semantic relations among\nthem. 
The challenge proposes the design of systems that can automatically annotate entities and\nrelations in clinical text in the Spanish language.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nAs in the previous edition, the corpus for eHealth-KD 2020 has been extracted from MedlinePlus sources. This platform\nfreely provides large health textual data from which we have made a selection for constituting the eHealth-KD corpus.\nThe selection has been made by sampling specific XML files from the collection available in the Medline website.\n\n\nThese files contain several entries related to health and medicine topics and have been processed to remove all\nXML markup to extract the textual content. Only Spanish language items were considered. Once cleaned, each individual\nitem was converted to a plain text document, and some further post-processing is applied to remove unwanted sentences,\nsuch as headers, footers and similar elements, and to flatten HTML lists into plain sentences.", "#### Who are the source language producers?\n\n\nAs in the previous edition, the corpus for eHealth-KD 2020 was extracted from MedlinePlus sources.", "### Annotations", "#### Annotation process\n\n\nOnce the MedlinePlus files were cleaned, they were manually tagged using BRAT by a group of\nannotators. After tagging, a post-processing was applied to BRAT’s output files (ANN format) to obtain the output files\nin the formats needed for the challenge.", "#### Who are the annotators?\n\n\nThe data was manually tagged.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\n\"The eHealth-KD 2020 proposes –as the previous editions– modeling the human language in a scenario in which Spanish\nelectronic health documents could be machine-readable from a semantic point of view.\n\n\nWith this task, we expect to encourage the development of software technologies to automatically extract a large variety\nof knowledge from eHealth documents written in the Spanish Language.\"", "### Discussion of Biases", "### Other Known Limitations\n\n\nDataset provided for research purposes only. Please check dataset license for additional information.\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "#### Organization Committee", "#### Funding\n\n\nThis research has been supported by a Carolina Foundation grant in agreement with University of Alicante and University\nof Havana. Moreover, it has also been partially funded by both aforementioned universities, IUII, Generalitat Valenciana,\nSpanish Government, Ministerio de Educación, Cultura y Deporte through the projects SIIA (PROMETEU/2018/089) and\nLIVINGLANG (RTI2018-094653-B-C22).", "### Licensing Information\n\n\nThis dataset is under the Attribution-NonCommercial-ShareAlike 4.0 International\n(CC BY-NC-SA 4.0).\n\n\nTo accept the distribution terms, please fill in the following form.\n\n\nIn the following link you can find the\npreliminar bibtexts of the systems’ working-notes.\nIn addition, to cite the eHealth-KD challenge you can use the following preliminar bibtext:", "### Contributions\n\n\nThanks to @mariagrandury for adding this dataset." ]
[ 110, 41, 129, 30, 26, 373, 67, 128, 4, 179, 34, 5, 68, 16, 18, 83, 8, 32, 6, 4, 95, 88, 18 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Spanish #license-cc-by-nc-sa-4.0 #relation-prediction #region-us \n### Dataset Summary\n\n\nDataset of the eHealth-KD Challenge at IberLEF 2020. It is designed for the identification of semantic\nentities and relations in Spanish health documents.### Supported Tasks and Leaderboards\n\n\nThe eHealth-KD challenge proposes two computational subtasks:\n\n\n* 'named-entity-recognition': Given a sentence of an eHealth document written in Spanish, the goal of this subtask is to\nidentify all the entities and their types.\n* 'relation-prediction': The purpose of this subtask is to recognise all relevant semantic relationships between the entities recognised.\n\n\nFor an analysis of the most successful approaches of this challenge, read the eHealth-KD overview paper.### Languages\n\n\nThe text in the dataset is in Spanish (BCP-47 code: 'es').\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThe first example of the eHeatlh-KD Corpus train set looks as follows:", "passage: ### Data Fields\n\n\n* 'sentence': sentence of an eHealth document written in Spanish\n* 'entities': list of entities identified in the sentence\n\t+ 'ent\\_id': entity identifier ('T'+ a number)\n\t+ 'ent\\_text': entity, can consist of one or more complete words (i.e., not a prefix or a suffix of a word), and will\n\tnever include any surrounding punctuation symbols, parenthesis, etc.\n\t+ 'ent\\_label': type of entity ('Concept', 'Action', 'Predicate' or 'Reference')\n\t+ 'start\\_character': position of the first character of the entity\n\t+ 'end\\_character': position of the last character of the entity\n* 'relations': list of semantic relationships between the entities recognised\n\t+ 'rel\\_id': relation identifier ('R' + a number)\n\t+ 'rel\\_label': type of relation, can be a general relation ('is-a', 'same-as', 'has-property', 'part-of', 'causes', 'entails'),\n\ta contextual relation ('in-time', 'in-place', 'in-context') an action role ('subject', 'target') or a predicate role ('domain', 'arg').\n\t+ 'arg1': ID of the first entity of the relation\n\t+ 'arg2': ID of the second entity of the relation\n\n\nFor more information about the types of entities and relations, click here.### Data Splits\n\n\nThe data is split into a training, validation and test set. The split sizes are as follow:\n\n\n\nIn the challenge there are 4 different scenarios for testing. The test data of this dataset corresponds to the third scenario.\nMore information about the testing data here.\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThe vast amount of clinical text available online has motivated the development of automatic\nknowledge discovery systems that can analyse this data and discover relevant facts.\n\n\nThe eHealth Knowledge Discovery (eHealth-KD) challenge, in its third edition, leverages\na semantic model of human language that encodes the most common expressions of factual\nknowledge, via a set of four general-purpose entity types and thirteen semantic relations among\nthem. 
The challenge proposes the design of systems that can automatically annotate entities and\nrelations in clinical text in the Spanish language.### Source Data#### Initial Data Collection and Normalization\n\n\nAs in the previous edition, the corpus for eHealth-KD 2020 has been extracted from MedlinePlus sources. This platform\nfreely provides large health textual data from which we have made a selection for constituting the eHealth-KD corpus.\nThe selection has been made by sampling specific XML files from the collection available in the Medline website.\n\n\nThese files contain several entries related to health and medicine topics and have been processed to remove all\nXML markup to extract the textual content. Only Spanish language items were considered. Once cleaned, each individual\nitem was converted to a plain text document, and some further post-processing is applied to remove unwanted sentences,\nsuch as headers, footers and similar elements, and to flatten HTML lists into plain sentences.#### Who are the source language producers?\n\n\nAs in the previous edition, the corpus for eHealth-KD 2020 was extracted from MedlinePlus sources.### Annotations#### Annotation process\n\n\nOnce the MedlinePlus files were cleaned, they were manually tagged using BRAT by a group of\nannotators. After tagging, a post-processing was applied to BRAT’s output files (ANN format) to obtain the output files\nin the formats needed for the challenge.#### Who are the annotators?\n\n\nThe data was manually tagged." ]
54b5f264228abbb4d1ec6d692d1cf5c10e94b89c
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://opus.nlpl.eu/EiTB-ParCC/corpus/version/EiTB-ParCC - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary EiTB-ParCC: Parallel Corpus of Comparable News. A Basque-Spanish parallel corpus provided by Vicomtech (https://www.vicomtech.org), extracted from comparable news produced by the Basque public broadcasting group Euskal Irrati Telebista. ### Supported Tasks and Leaderboards The underlying task is machine translation. ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @InProceedings{TIEDEMANN12.463, author = {J{\"o}rg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} } ``` ### Contributions Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
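Most structure sections of this card are still placeholders, so a loading sketch is the quickest orientation. It assumes the Hub id `eitb_parcc` and the `es-eu` configuration recorded in this entry's metadata, plus an installed `datasets` library; it is not an official usage example.

```python
from datasets import load_dataset

# Sketch: the config name "es-eu" comes from this entry's metadata; newer
# `datasets` versions may also need trust_remote_code=True for script loaders.
dataset = load_dataset("eitb_parcc", "es-eu")

pair = dataset["train"][0]["translation"]  # only a train split is listed
print("es:", pair["es"])  # Spanish side of the sentence pair
print("eu:", pair["eu"])  # Basque side of the sentence pair
```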
eitb_parcc
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:es", "language:eu", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["es", "eu"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "eitb-parcc", "pretty_name": "EiTB-ParCC", "dataset_info": {"config_name": "es-eu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["es", "eu"]}}}], "splits": [{"name": "train", "num_bytes": 139038886, "num_examples": 637183}], "download_size": 57244346, "dataset_size": 139038886}}
2024-02-08T15:06:26+00:00
[]
[ "es", "eu" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-Spanish #language-Basque #license-unknown #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary EiTB-ParCC: Parallel Corpus of Comparable News. A Basque-Spanish parallel corpus provided by \ Vicomtech (URL), extracted from comparable news produced by the \ Basque public broadcasting group Euskal Irrati Telebista. ### Supported Tasks and Leaderboards The underlying task is machine translation. ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @patil-suraj for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nEiTB-ParCC: Parallel Corpus of Comparable News. A Basque-Spanish parallel corpus provided by \\\nVicomtech (URL), extracted from comparable news produced by the \\\nBasque public broadcasting group Euskal Irrati Telebista.", "### Supported Tasks and Leaderboards\n\nThe underlying task is machine translation.", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-Spanish #language-Basque #license-unknown #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nEiTB-ParCC: Parallel Corpus of Comparable News. A Basque-Spanish parallel corpus provided by \\\nVicomtech (URL), extracted from comparable news produced by the \\\nBasque public broadcasting group Euskal Irrati Telebista.", "### Supported Tasks and Leaderboards\n\nThe underlying task is machine translation.", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ 78, 10, 120, 25, 60, 19, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-Spanish #language-Basque #license-unknown #region-us \n# Dataset Card for [Dataset Name]## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nEiTB-ParCC: Parallel Corpus of Comparable News. A Basque-Spanish parallel corpus provided by \\\nVicomtech (URL), extracted from comparable news produced by the \\\nBasque public broadcasting group Euskal Irrati Telebista.### Supported Tasks and Leaderboards\n\nThe underlying task is machine translation.### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
be5912e0562f6aae77d9a06e93f43e61dd12f079
# Dataset Card for Electricity Load Diagrams ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Electricity Load Diagrams 2011-2014](https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014) - **Paper:** [Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks ](https://dl.acm.org/doi/10.1145/3209978.3210006) - **Point of Contact:** [Artur Trindade](mailto:artur.trindade@elergone.pt) ### Dataset Summary This dataset contains hourly kW electricity consumption time series of 370 Portuguese clients from 2011 to 2014. ### Dataset Usage The dataset has the following configuration parameters: - `freq` is the time series frequency at which we resample (default: `"1H"`) - `prediction_length` is the forecast horizon for this task which is used to make the validation and test splits (default: `24`) - `rolling_evaluations` is the number of rolling window time series in the test split for evaluation purposes (default: `7`) For example, you can specify your own configuration different from those used in the papers as follows: ```python load_dataset("electricity_load_diagrams", "uci", rolling_evaluations=10) ``` > Notes: > - Data set has no missing values. > - Values are in kW of each 15 min rescaled to hourly. To convert values in kWh values must be divided by 4. > - All time labels report to Portuguese hour, however all days present 96 measures (24*4). > - Every year in March time change day (which has only 23 hours) the values between 1:00 am and 2:00 am are zero for all points. > - Every year in October time change day (which has 25 hours) the values between 1:00 am and 2:00 am aggregate the consumption of two hours. ### Supported Tasks and Leaderboards - `univariate-time-series-forecasting`: The time series forecasting task involves learning the future `target` values of time series in a dataset for the `prediction_length` time steps. The results of the forecasts can then be validated via the ground truth in the `validation` split and tested via the `test` split. ### Languages ## Dataset Structure Data set has no missing values. The raw values are in kW of each 15 min interval and are resampled to hourly frequency. Each time series represents one client. Some clients were created after 2011. In these cases, consumption was considered zero. All time labels report to Portuguese hour, however all days contain 96 measurements (24*4). 
Every year in March time change day (which has only 23 hours) the values between 1:00 am and 2:00 am are zero for all points. Every year in October time change day (which has 25 hours) the values between 1:00 am and 2:00 am aggregate the consumption of two hours. ### Data Instances A sample from the training set is provided below: ``` { 'start': datetime.datetime(2012, 1, 1, 0, 0), 'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, 20.0, 20.0, 13.0, 11.0], # <= this target array is a concatenated sample 'feat_static_cat': [0], 'item_id': '0' } ``` We have two configurations `uci` and `lstnet`, which are specified as follows. The time series are resampled to hourly frequency. We test on 7 rolling windows of prediction length of 24. The `uci` validation therefore ends 24*7 time steps before the end of each time series. The training split ends 24 time steps before the end of the validation split. For the `lstnet` configuration we split the training window so that it is 0.6-th of the full time series and the validation is 0.8-th of the full time series and the last 0.2-th length time window is used as the test set of 7 rolling windows of the 24 time steps each. Finally, as in the LSTNet paper, we only consider time series that are active in the year 2012--2014, which leaves us with 320 time series. ### Data Fields For this univariate regular time series we have: - `start`: a `datetime` of the first entry of each time series in the dataset - `target`: an `array[float32]` of the actual target values - `feat_static_cat`: an `array[uint64]` which contains a categorical identifier of each time series in the dataset - `item_id`: a string identifier of each time series in a dataset for reference Given the `freq` and the `start` datetime, we can assign a datetime to each entry in the target array. ### Data Splits | name |train|test|validation | |----------|----:|-----------:|----:| |uci|370| 2590|370| |lstnet|320| 2240|320| ## Dataset Creation The Electricity Load Diagrams 2011–2014 Dataset was developed by Artur Trindade and shared in the UCI Machine Learning Repository. This dataset covers the electricity load of 370 substations in Portugal from the start of 2011 to the end of 2014 with a sampling period of 15 min. We will resample this to hourly time series. ### Curation Rationale Research and development of load forecasting methods. In particular, short-term electricity forecasting. ### Source Data This dataset covers the electricity load of 370 sub-stations in Portugal from the start of 2011 to the end of 2014 with a sampling period of 15 min. #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @inproceedings{10.1145/3209978.3210006, author = {Lai, Guokun and Chang, Wei-Cheng and Yang, Yiming and Liu, Hanxiao}, title = {Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks}, year = {2018}, isbn = {9781450356572}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3209978.3210006}, doi = {10.1145/3209978.3210006}, booktitle = {The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval}, pages = {95--104}, numpages = {10}, location = {Ann Arbor, MI, USA}, series = {SIGIR '18} } ``` ### Contributions Thanks to [@kashif](https://github.com/kashif) for adding this dataset.
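To make the `start`/`freq` remark in the Data Fields section concrete, here is a small sketch of attaching timestamps to one `target` array with pandas. The field names follow the sample shown under Data Instances and the default hourly frequency; this is an illustration rather than part of the official loader.

```python
import pandas as pd
from datasets import load_dataset

# Sketch: build a timestamped series for one client of the "uci" configuration.
# Field names ("start", "target", "item_id") mirror the Data Instances sample.
uci_train = load_dataset("electricity_load_diagrams", "uci", split="train")

example = uci_train[0]
index = pd.date_range(start=example["start"], periods=len(example["target"]), freq="1H")
series = pd.Series(example["target"], index=index, name=example["item_id"])

print(series.head())  # hourly consumption values with their assigned timestamps
```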
electricity_load_diagrams
[ "task_categories:time-series-forecasting", "task_ids:univariate-time-series-forecasting", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": [], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["time-series-forecasting"], "task_ids": ["univariate-time-series-forecasting"], "pretty_name": "Electricity Load Diagrams", "dataset_info": [{"config_name": "uci", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42968147, "num_examples": 370}, {"name": "test", "num_bytes": 302059069, "num_examples": 2590}, {"name": "validation", "num_bytes": 43004777, "num_examples": 370}], "download_size": 261335609, "dataset_size": 388031993}, {"config_name": "lstnet", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20843200, "num_examples": 320}, {"name": "test", "num_bytes": 195401080, "num_examples": 2240}, {"name": "validation", "num_bytes": 27787720, "num_examples": 320}], "download_size": 261335609, "dataset_size": 244032000}]}
2024-01-18T11:03:07+00:00
[]
[]
TAGS #task_categories-time-series-forecasting #task_ids-univariate-time-series-forecasting #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #license-unknown #region-us
Dataset Card for Electricity Load Diagrams ========================================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Electricity Load Diagrams 2011-2014 * Paper: Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks * Point of Contact: Artur Trindade ### Dataset Summary This dataset contains hourly kW electricity consumption time series of 370 Portuguese clients from 2011 to 2014. ### Dataset Usage The dataset has the following configuration parameters: * 'freq' is the time series frequency at which we resample (default: '"1H"') * 'prediction\_length' is the forecast horizon for this task which is used to make the validation and test splits (default: '24') * 'rolling\_evaluations' is the number of rolling window time series in the test split for evaluation purposes (default: '7') For example, you can specify your own configuration different from those used in the papers as follows: > > Notes: > > > * Data set has no missing values. > * Values are in kW of each 15 min rescaled to hourly. To convert values in kWh values must be divided by 4. > * All time labels report to Portuguese hour, however all days present 96 measures (24\*4). > * Every year in March time change day (which has only 23 hours) the values between 1:00 am and 2:00 am are zero for all points. > * Every year in October time change day (which has 25 hours) the values between 1:00 am and 2:00 am aggregate the consumption of two hours. > > > ### Supported Tasks and Leaderboards * 'univariate-time-series-forecasting': The time series forecasting tasks involves learning the future 'target' values of time series in a dataset for the 'prediction\_length' time steps. The results of the forecasts can then be validated via the ground truth in the 'validation' split and tested via the 'test' split. ### Languages Dataset Structure ----------------- Data set has no missing values. The raw values are in kW of each 15 min interval and are resampled to hourly frequency. Each time series represent one client. Some clients were created after 2011. In these cases consumption were considered zero. All time labels report to Portuguese hour, however all days contain 96 measurements (24\*4). Every year in March time change day (which has only 23 hours) the values between 1:00 am and 2:00 am are zero for all points. Every year in October time change day (which has 25 hours) the values between 1:00 am and 2:00 am aggregate the consumption of two hours. ### Data Instances A sample from the training set is provided below: We have two configurations 'uci' and 'lstnet', which are specified as follows. The time series are resampled to hourly frequency. We test on 7 rolling windows of prediction length of 24. The 'uci' validation therefore ends 24\*7 time steps before the end of each time series. The training split ends 24 time steps before the end of the validation split. 
For the 'lsnet' configuration we split the training window so that it is 0.6-th of the full time series and the validation is 0.8-th of the full time series and the last 0.2-th length time windows is used as the test set of 7 rolling windows of the 24 time steps each. Finally, as in the LSTNet paper, we only consider time series that are active in the year 2012--2014, which leaves us with 320 time series. ### Data Fields For this univariate regular time series we have: * 'start': a 'datetime' of the first entry of each time series in the dataset * 'target': an 'array[float32]' of the actual target values * 'feat\_static\_cat': an 'array[uint64]' which contains a categorical identifier of each time series in the dataset * 'item\_id': a string identifier of each time series in a dataset for reference Given the 'freq' and the 'start' datetime, we can assign a datetime to each entry in the target array. ### Data Splits Dataset Creation ---------------- The Electricity Load Diagrams 2011–2014 Dataset was developed by Artur Trindade and shared in UCI Machine Learning Repository. This dataset covers the electricity load of 370 substations in Portugal from the start of 2011 to the end of 2014 with a sampling period of 15 min. We will resample this to hourly time series. ### Curation Rationale Research and development of load forecasting methods. In particular short-term electricity forecasting. ### Source Data This dataset covers the electricity load of 370 sub-stations in Portugal from the start of 2011 to the end of 2014 with a sampling period of 15 min. #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @kashif for adding this dataset.
[ "### Dataset Summary\n\n\nThis dataset contains hourly kW electricity consumption time series of 370 Portuguese clients from 2011 to 2014.", "### Dataset Usage\n\n\nThe dataset has the following configuration parameters:\n\n\n* 'freq' is the time series frequency at which we resample (default: '\"1H\"')\n* 'prediction\\_length' is the forecast horizon for this task which is used to make the validation and test splits (default: '24')\n* 'rolling\\_evaluations' is the number of rolling window time series in the test split for evaluation purposes (default: '7')\n\n\nFor example, you can specify your own configuration different from those used in the papers as follows:\n\n\n\n> \n> Notes:\n> \n> \n> * Data set has no missing values.\n> * Values are in kW of each 15 min rescaled to hourly. To convert values in kWh values must be divided by 4.\n> * All time labels report to Portuguese hour, however all days present 96 measures (24\\*4).\n> * Every year in March time change day (which has only 23 hours) the values between 1:00 am and 2:00 am are zero for all points.\n> * Every year in October time change day (which has 25 hours) the values between 1:00 am and 2:00 am aggregate the consumption of two hours.\n> \n> \n>", "### Supported Tasks and Leaderboards\n\n\n* 'univariate-time-series-forecasting': The time series forecasting tasks involves learning the future 'target' values of time series in a dataset for the 'prediction\\_length' time steps. The results of the forecasts can then be validated via the ground truth in the 'validation' split and tested via the 'test' split.", "### Languages\n\n\nDataset Structure\n-----------------\n\n\nData set has no missing values. The raw values are in kW of each 15 min interval and are resampled to hourly frequency.\nEach time series represent one client. Some clients were created after 2011. In these cases consumption were considered zero. All time labels report to Portuguese hour, however all days contain 96 measurements (24\\*4). Every year in March time change day (which has only 23 hours) the values between 1:00 am and 2:00 am are zero for all points. Every year in October time change day (which has 25 hours) the values between 1:00 am and 2:00 am aggregate the consumption of two hours.", "### Data Instances\n\n\nA sample from the training set is provided below:\n\n\nWe have two configurations 'uci' and 'lstnet', which are specified as follows.\n\n\nThe time series are resampled to hourly frequency. We test on 7 rolling windows of prediction length of 24.\n\n\nThe 'uci' validation therefore ends 24\\*7 time steps before the end of each time series. The training split ends 24 time steps before the end of the validation split.\n\n\nFor the 'lsnet' configuration we split the training window so that it is 0.6-th of the full time series and the validation is 0.8-th of the full time series and the last 0.2-th length time windows is used as the test set of 7 rolling windows of the 24 time steps each. 
Finally, as in the LSTNet paper, we only consider time series that are active in the year 2012--2014, which leaves us with 320 time series.", "### Data Fields\n\n\nFor this univariate regular time series we have:\n\n\n* 'start': a 'datetime' of the first entry of each time series in the dataset\n* 'target': an 'array[float32]' of the actual target values\n* 'feat\\_static\\_cat': an 'array[uint64]' which contains a categorical identifier of each time series in the dataset\n* 'item\\_id': a string identifier of each time series in a dataset for reference\n\n\nGiven the 'freq' and the 'start' datetime, we can assign a datetime to each entry in the target array.", "### Data Splits\n\n\n\nDataset Creation\n----------------\n\n\nThe Electricity Load Diagrams 2011–2014 Dataset was developed by Artur Trindade and shared in UCI Machine Learning Repository. This dataset covers the electricity load of 370 substations in Portugal from the start of 2011 to the end of 2014 with a sampling period of 15 min. We will resample this to hourly time series.", "### Curation Rationale\n\n\nResearch and development of load forecasting methods. In particular short-term electricity forecasting.", "### Source Data\n\n\nThis dataset covers the electricity load of 370 sub-stations in Portugal from the start of 2011 to the end of 2014 with a sampling period of 15 min.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @kashif for adding this dataset." ]
[ "TAGS\n#task_categories-time-series-forecasting #task_ids-univariate-time-series-forecasting #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #license-unknown #region-us \n", "### Dataset Summary\n\n\nThis dataset contains hourly kW electricity consumption time series of 370 Portuguese clients from 2011 to 2014.", "### Dataset Usage\n\n\nThe dataset has the following configuration parameters:\n\n\n* 'freq' is the time series frequency at which we resample (default: '\"1H\"')\n* 'prediction\\_length' is the forecast horizon for this task which is used to make the validation and test splits (default: '24')\n* 'rolling\\_evaluations' is the number of rolling window time series in the test split for evaluation purposes (default: '7')\n\n\nFor example, you can specify your own configuration different from those used in the papers as follows:\n\n\n\n> \n> Notes:\n> \n> \n> * Data set has no missing values.\n> * Values are in kW of each 15 min rescaled to hourly. To convert values in kWh values must be divided by 4.\n> * All time labels report to Portuguese hour, however all days present 96 measures (24\\*4).\n> * Every year in March time change day (which has only 23 hours) the values between 1:00 am and 2:00 am are zero for all points.\n> * Every year in October time change day (which has 25 hours) the values between 1:00 am and 2:00 am aggregate the consumption of two hours.\n> \n> \n>", "### Supported Tasks and Leaderboards\n\n\n* 'univariate-time-series-forecasting': The time series forecasting tasks involves learning the future 'target' values of time series in a dataset for the 'prediction\\_length' time steps. The results of the forecasts can then be validated via the ground truth in the 'validation' split and tested via the 'test' split.", "### Languages\n\n\nDataset Structure\n-----------------\n\n\nData set has no missing values. The raw values are in kW of each 15 min interval and are resampled to hourly frequency.\nEach time series represent one client. Some clients were created after 2011. In these cases consumption were considered zero. All time labels report to Portuguese hour, however all days contain 96 measurements (24\\*4). Every year in March time change day (which has only 23 hours) the values between 1:00 am and 2:00 am are zero for all points. Every year in October time change day (which has 25 hours) the values between 1:00 am and 2:00 am aggregate the consumption of two hours.", "### Data Instances\n\n\nA sample from the training set is provided below:\n\n\nWe have two configurations 'uci' and 'lstnet', which are specified as follows.\n\n\nThe time series are resampled to hourly frequency. We test on 7 rolling windows of prediction length of 24.\n\n\nThe 'uci' validation therefore ends 24\\*7 time steps before the end of each time series. The training split ends 24 time steps before the end of the validation split.\n\n\nFor the 'lsnet' configuration we split the training window so that it is 0.6-th of the full time series and the validation is 0.8-th of the full time series and the last 0.2-th length time windows is used as the test set of 7 rolling windows of the 24 time steps each. 
Finally, as in the LSTNet paper, we only consider time series that are active in the year 2012--2014, which leaves us with 320 time series.", "### Data Fields\n\n\nFor this univariate regular time series we have:\n\n\n* 'start': a 'datetime' of the first entry of each time series in the dataset\n* 'target': an 'array[float32]' of the actual target values\n* 'feat\\_static\\_cat': an 'array[uint64]' which contains a categorical identifier of each time series in the dataset\n* 'item\\_id': a string identifier of each time series in a dataset for reference\n\n\nGiven the 'freq' and the 'start' datetime, we can assign a datetime to each entry in the target array.", "### Data Splits\n\n\n\nDataset Creation\n----------------\n\n\nThe Electricity Load Diagrams 2011–2014 Dataset was developed by Artur Trindade and shared in UCI Machine Learning Repository. This dataset covers the electricity load of 370 substations in Portugal from the start of 2011 to the end of 2014 with a sampling period of 15 min. We will resample this to hourly time series.", "### Curation Rationale\n\n\nResearch and development of load forecasting methods. In particular short-term electricity forecasting.", "### Source Data\n\n\nThis dataset covers the electricity load of 370 sub-stations in Portugal from the start of 2011 to the end of 2014 with a sampling period of 15 min.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @kashif for adding this dataset." ]
[ 95, 31, 277, 97, 151, 205, 151, 88, 26, 41, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 16 ]
[ "passage: TAGS\n#task_categories-time-series-forecasting #task_ids-univariate-time-series-forecasting #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #license-unknown #region-us \n### Dataset Summary\n\n\nThis dataset contains hourly kW electricity consumption time series of 370 Portuguese clients from 2011 to 2014.### Dataset Usage\n\n\nThe dataset has the following configuration parameters:\n\n\n* 'freq' is the time series frequency at which we resample (default: '\"1H\"')\n* 'prediction\\_length' is the forecast horizon for this task which is used to make the validation and test splits (default: '24')\n* 'rolling\\_evaluations' is the number of rolling window time series in the test split for evaluation purposes (default: '7')\n\n\nFor example, you can specify your own configuration different from those used in the papers as follows:\n\n\n\n> \n> Notes:\n> \n> \n> * Data set has no missing values.\n> * Values are in kW of each 15 min rescaled to hourly. To convert values in kWh values must be divided by 4.\n> * All time labels report to Portuguese hour, however all days present 96 measures (24\\*4).\n> * Every year in March time change day (which has only 23 hours) the values between 1:00 am and 2:00 am are zero for all points.\n> * Every year in October time change day (which has 25 hours) the values between 1:00 am and 2:00 am aggregate the consumption of two hours.\n> \n> \n>### Supported Tasks and Leaderboards\n\n\n* 'univariate-time-series-forecasting': The time series forecasting tasks involves learning the future 'target' values of time series in a dataset for the 'prediction\\_length' time steps. The results of the forecasts can then be validated via the ground truth in the 'validation' split and tested via the 'test' split.", "passage: ### Languages\n\n\nDataset Structure\n-----------------\n\n\nData set has no missing values. The raw values are in kW of each 15 min interval and are resampled to hourly frequency.\nEach time series represent one client. Some clients were created after 2011. In these cases consumption were considered zero. All time labels report to Portuguese hour, however all days contain 96 measurements (24\\*4). Every year in March time change day (which has only 23 hours) the values between 1:00 am and 2:00 am are zero for all points. Every year in October time change day (which has 25 hours) the values between 1:00 am and 2:00 am aggregate the consumption of two hours.### Data Instances\n\n\nA sample from the training set is provided below:\n\n\nWe have two configurations 'uci' and 'lstnet', which are specified as follows.\n\n\nThe time series are resampled to hourly frequency. We test on 7 rolling windows of prediction length of 24.\n\n\nThe 'uci' validation therefore ends 24\\*7 time steps before the end of each time series. The training split ends 24 time steps before the end of the validation split.\n\n\nFor the 'lsnet' configuration we split the training window so that it is 0.6-th of the full time series and the validation is 0.8-th of the full time series and the last 0.2-th length time windows is used as the test set of 7 rolling windows of the 24 time steps each. 
Finally, as in the LSTNet paper, we only consider time series that are active in the year 2012--2014, which leaves us with 320 time series.### Data Fields\n\n\nFor this univariate regular time series we have:\n\n\n* 'start': a 'datetime' of the first entry of each time series in the dataset\n* 'target': an 'array[float32]' of the actual target values\n* 'feat\\_static\\_cat': an 'array[uint64]' which contains a categorical identifier of each time series in the dataset\n* 'item\\_id': a string identifier of each time series in a dataset for reference\n\n\nGiven the 'freq' and the 'start' datetime, we can assign a datetime to each entry in the target array.### Data Splits\n\n\n\nDataset Creation\n----------------\n\n\nThe Electricity Load Diagrams 2011–2014 Dataset was developed by Artur Trindade and shared in UCI Machine Learning Repository. This dataset covers the electricity load of 370 substations in Portugal from the start of 2011 to the end of 2014 with a sampling period of 15 min. We will resample this to hourly time series.### Curation Rationale\n\n\nResearch and development of load forecasting methods. In particular short-term electricity forecasting." ]
3e8e3c29ce51a77b35eaa45965474c4b35850430
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Defunct:</b> Dataset "eli5" is defunct and no longer accessible due to unavailability of the source data.</p> </div> ## <span style="color:red">⚠️ Reddit recently [changed the terms of access](https://www.reddit.com/r/reddit/comments/12qwagm/an_update_regarding_reddits_api/) to its API, making the source data for this dataset unavailable</span>. # Dataset Card for ELI5 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [ELI5 homepage](https://facebookresearch.github.io/ELI5/explore.html) - **Repository:** [ELI5 repository](https://github.com/facebookresearch/ELI5) - **Paper:** [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190) - **Point of Contact:** [Yacine Jernite](mailto:yacine@huggingface.co) ### Dataset Summary The ELI5 dataset is an English-language dataset of questions and answers gathered from three subreddits where users ask factual questions requiring paragraph-length or longer answers. The dataset was created to support the task of open-domain long form abstractive question answering, and covers questions about general topics in its [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subset, science in its [r/askscience](https://www.reddit.com/r/askscience/) subset, and history in its [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subset. ### Supported Tasks and Leaderboards - `abstractive-qa`, `open-domain-abstractive-qa`: The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid question and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer. The model performance is measured by how high its [ROUGE](https://huggingface.co/metrics/rouge) score to the reference is. A [BART-based model](https://huggingface.co/yjernite/bart_eli5) with a [dense retriever](https://huggingface.co/yjernite/retribert-base-uncased) trained to draw information from [Wikipedia passages](https://huggingface.co/datasets/wiki_snippets) achieves a [ROUGE-L of 0.149](https://yjernite.github.io/lfqa.html#generation). 
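Since model quality for this task is reported as a ROUGE score against the reference answer, a minimal sketch of that comparison (here with the `evaluate` library, one of several ways to compute ROUGE) could look as follows; the strings are toy placeholders rather than outputs of any particular model.

```python
import evaluate

# Toy illustration of the evaluation described above: compare a generated answer
# with a reference answer using ROUGE (ROUGE-L is the figure quoted in this section).
rouge = evaluate.load("rouge")

predictions = ["Water pulls heat away from the skin faster than air does, so it feels colder."]
references = ["Water transfers heat more efficiently than air, so it feels colder to the touch."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rougeL"])
```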
### Languages The text in the dataset is in English, as spoken by Reddit users on the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances A typical data point comprises a question, with a `title` containing the main question and a `selftext` which sometimes elaborates on it, and a list of answers from the forum sorted by the number of upvotes they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text. An example from the ELI5 test set looks as follows: ``` {'q_id': '8houtx', 'title': 'Why does water heated to room temperature feel colder than the air around it?', 'selftext': '', 'document': '', 'subreddit': 'explainlikeimfive', 'answers': {'a_id': ['dylcnfk', 'dylcj49'], 'text': ["Water transfers heat more efficiently than air. When something feels cold it's because heat is being transferred from your skin to whatever you're touching. Since water absorbs the heat more readily than air, it feels colder.", "Air isn't as good at transferring heat compared to something like water or steel (sit on a room temperature steel bench vs. a room temperature wooden bench, and the steel one will feel more cold).\n\nWhen you feel cold, what you're feeling is heat being transferred out of you. If there is no breeze, you feel a certain way. If there's a breeze, you will get colder faster (because the moving air is pulling the heat away from you), and if you get into water, its quite good at pulling heat from you. Get out of the water and have a breeze blow on you while you're wet, all of the water starts evaporating, pulling even more heat from you."], 'score': [5, 2]}, 'title_urls': {'url': []}, 'selftext_urls': {'url': []}, 'answers_urls': {'url': []}} ``` ### Data Fields - `q_id`: a string question identifier for each example, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/submissions/) Reddit submission dumps. - `subreddit`: One of `explainlikeimfive`, `askscience`, or `AskHistorians`, indicating which subreddit the question came from - `title`: title of the question, with URLs extracted and replaced by `URL_n` tokens - `title_urls`: list of the extracted URLs, the `n`th element of the list was replaced by `URL_n` - `selftext`: either an empty string or an elaboration of the question - `selftext_urls`: similar to `title_urls` but for `selftext` - `answers`: a list of answers, each answer has: - `a_id`: a string answer identifier for each answer, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/comments/) Reddit comments dumps. - `text`: the answer text with the URLs normalized - `score`: the number of upvotes the answer had received when the dumps were created - `answers_urls`: a list of the extracted URLs. All answers use the same list, the numbering of the normalization token continues across answer texts ### Data Splits The data is split into a training, validation and test set for each of the three subreddits. In order to avoid having duplicate questions across sets, the `title` field of each of the questions was ranked by its tf-idf match to its nearest neighbor and the ones with the smallest value were used in the test and validation sets. 
The final split sizes are as follows: | | Train | Valid | Test | | ----- | ------ | ----- | ---- | | r/explainlikeimfive examples| 272634 | 9812 | 24512| | r/askscience examples | 131778 | 2281 | 4462 | | r/AskHistorians examples | 98525 | 4901 | 9764 | ## Dataset Creation ### Curation Rationale ELI5 was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain. ### Source Data #### Initial Data Collection and Normalization The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the [Reddit forum](https://www.reddit.com/) hosted on [Pushshift.io](https://files.pushshift.io/reddit/). In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from August 2012 to August 2019. #### Who are the source language producers? The language producers are users of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits between 2012 and 2019. No further demographic information was available from the data source. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information The authors removed the speaker IDs from the [Pushshift.io](https://files.pushshift.io/reddit/) dumps but did not otherwise anonymize the data. Some of the questions and answers are about contemporary public figures or individuals who appeared in the news. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop better question answering systems. A system that succeeds at the supported task would be able to provide a coherent answer to even complex questions requiring a multi-step explanation, which is beyond the ability of even the larger existing models. The task is also thought of as a test-bed for retrieval models which can show users which source text was used in generating the answer and allow them to confirm the information provided to them. It should be noted however that the provided answers were written by Reddit users, information which may be lost if models trained on it are deployed in down-stream applications and presented to users without context. The specific biases this may introduce are discussed in the next section. ### Discussion of Biases While Reddit hosts a number of thriving communities with high quality discussions, it is also widely known to have corners where sexism, hate, and harassment are significant issues. 
See for example the [recent post from Reddit founder u/spez](https://www.reddit.com/r/announcements/comments/gxas21/upcoming_changes_to_our_content_policy_our_board/) outlining some of the ways he thinks the website's historical policies have been responsible for this problem, [Adrienne Massanari's 2015 article on GamerGate](https://www.researchgate.net/publication/283848479_Gamergate_and_The_Fappening_How_Reddit's_algorithm_governance_and_culture_support_toxic_technocultures) and follow-up works, or a [2019 Wired article on misogyny on Reddit](https://www.wired.com/story/misogyny-reddit-research/). While there has been some recent work in the NLP community on *de-biasing* models (e.g. [Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings](https://arxiv.org/abs/1904.04047) for word embeddings trained specifically on Reddit data), this problem is far from solved, and the likelihood that a trained model might learn the biases present in the data remains a significant concern. We still note some encouraging signs for all of these communities: [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) and [r/askscience](https://www.reddit.com/r/askscience/) have similar structures and purposes, and [r/askscience](https://www.reddit.com/r/askscience/) was found in 2015 to show medium supportiveness and very low toxicity when compared to other subreddits (see a [hackerfall post](https://hackerfall.com/story/study-and-interactive-visualization-of-toxicity-in), [thecut.com write-up](https://www.thecut.com/2015/03/interactive-chart-of-reddits-toxicity.html) and supporting [data](https://chart-studio.plotly.com/~bsbell21/210/toxicity-vs-supportiveness-by-subreddit/#data)). Meanwhile, the [r/AskHistorians rules](https://www.reddit.com/r/AskHistorians/wiki/rules) mention that the admins will not tolerate "_racism, sexism, or any other forms of bigotry_". However, further analysis of whether and to what extent these rules reduce toxicity is still needed. We also note that given the audience of the Reddit website, which is more broadly used in the US and Europe, the answers will likely present a Western perspective, which is particularly important to note when dealing with historical topics. ### Other Known Limitations The answers provided in the dataset represent the opinions of Reddit users. While these communities strive to be helpful, they should not be considered to represent a ground truth. ## Additional Information ### Dataset Curators The dataset was initially created by Angela Fan, Ethan Perez, Yacine Jernite, Jason Weston, Michael Auli, and David Grangier, during work done at Facebook AI Research (FAIR). ### Licensing Information The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear. ### Citation Information ``` @inproceedings{eli5_lfqa, author = {Angela Fan and Yacine Jernite and Ethan Perez and David Grangier and Jason Weston and Michael Auli}, editor = {Anna Korhonen and David R. 
Traum and Llu{\'{\i}}s M{\`{a}}rquez}, title = {{ELI5:} Long Form Question Answering}, booktitle = {Proceedings of the 57th Conference of the Association for Computational Linguistics, {ACL} 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers}, pages = {3558--3567}, publisher = {Association for Computational Linguistics}, year = {2019}, url = {https://doi.org/10.18653/v1/p19-1346}, doi = {10.18653/v1/p19-1346} } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset.
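As an illustration of the field layout documented above (and bearing in mind that the source data is no longer downloadable), a minimal sketch of selecting the highest-scored answer and restoring its normalized URL tokens might look like this; the dictionary mirrors the sample under Data Instances rather than a fresh download.

```python
# Sketch over the documented structure; values are abbreviated from the sample instance.
example = {
    "title": "Why does water heated to room temperature feel colder than the air around it?",
    "answers": {
        "a_id": ["dylcnfk", "dylcj49"],
        "text": ["Water transfers heat more efficiently than air. ...", "Air isn't as good at transferring heat ..."],
        "score": [5, 2],
    },
    "answers_urls": {"url": []},
}

# Pick the answer with the most upvotes at dump time.
best_idx = max(range(len(example["answers"]["score"])), key=lambda i: example["answers"]["score"][i])
best_text = example["answers"]["text"][best_idx]

# Replace URL_n placeholders using the shared answers_urls list (empty in this sample).
for n, url in enumerate(example["answers_urls"]["url"]):
    best_text = best_text.replace(f"URL_{n}", url)

print(example["title"])
print(best_text)
```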
eli5
[ "task_categories:text2text-generation", "task_ids:abstractive-qa", "task_ids:open-domain-abstractive-qa", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "arxiv:1907.09190", "arxiv:1904.04047", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": ["abstractive-qa", "open-domain-abstractive-qa"], "paperswithcode_id": "eli5", "pretty_name": "ELI5", "viewer": false, "dataset_info": {"features": [{"name": "q_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "selftext", "dtype": "string"}, {"name": "document", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "a_id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "score", "dtype": "int32"}]}, {"name": "title_urls", "sequence": [{"name": "url", "dtype": "string"}]}, {"name": "selftext_urls", "sequence": [{"name": "url", "dtype": "string"}]}, {"name": "answers_urls", "sequence": [{"name": "url", "dtype": "string"}]}], "config_name": "LFQA_reddit", "splits": [{"name": "train_eli5", "num_bytes": 577188173, "num_examples": 272634}, {"name": "validation_eli5", "num_bytes": 21117891, "num_examples": 9812}, {"name": "test_eli5", "num_bytes": 53099796, "num_examples": 24512}, {"name": "train_asks", "num_bytes": 286464210, "num_examples": 131778}, {"name": "validation_asks", "num_bytes": 9662481, "num_examples": 2281}, {"name": "test_asks", "num_bytes": 17713920, "num_examples": 4462}, {"name": "train_askh", "num_bytes": 330483260, "num_examples": 98525}, {"name": "validation_askh", "num_bytes": 18690845, "num_examples": 4901}, {"name": "test_askh", "num_bytes": 36246784, "num_examples": 9764}], "download_size": 6326543, "dataset_size": 1350667360}}
2024-01-11T09:32:33+00:00
[ "1907.09190", "1904.04047" ]
[ "en" ]
TAGS #task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-open-domain-abstractive-qa #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-unknown #arxiv-1907.09190 #arxiv-1904.04047 #region-us
**Defunct:** Dataset "eli5" is defunct and no longer accessible due to unavailability of the source data. ️ Reddit recently changed the terms of access to its API, making the source data for this dataset unavailable. -------------------------------------------------------------------------------------------------------------- Dataset Card for ELI5 ===================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: ELI5 homepage * Repository: ELI5 repository * Paper: ELI5: Long Form Question Answering * Point of Contact: Yacine Jernite ### Dataset Summary The ELI5 dataset is an English-language dataset of questions and answers gathered from three subreddits where users ask factual questions requiring paragraph-length or longer answers. The dataset was created to support the task of open-domain long form abstractive question answering, and covers questions about general topics in its r/explainlikeimfive subset, science in it r/askscience subset, and History in its r/AskHistorians subset. ### Supported Tasks and Leaderboards * 'abstractive-qa', 'open-domain-abstractive-qa': The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid and asked to retrieve relevant information from a knowledge source (such as Wikipedia), then use it to generate a multi-sentence answer. The model performance is measured by how high its ROUGE score to the reference is. A BART-based model with a dense retriever trained to draw information from Wikipedia passages achieves a ROUGE-L of 0.149. ### Languages The text in the dataset is in English, as spoken by Reddit users on the r/explainlikeimfive, r/askscience, and r/AskHistorians subreddits. The associated BCP-47 code is 'en'. Dataset Structure ----------------- ### Data Instances A typical data point comprises a question, with a 'title' containing the main question and a 'selftext' which sometimes elaborates on it, and a list of answers from the forum sorted by the number of upvotes they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text. An example from the ELI5 test set looks as follows: ### Data Fields * 'q\_id': a string question identifier for each example, corresponding to its ID in the URL Reddit submission dumps. * 'subreddit': One of 'explainlikeimfive', 'askscience', or 'AskHistorians', indicating which subreddit the question came from * 'title': title of the question, with URLs extracted and replaced by 'URL\_n' tokens * 'title\_urls': list of the extracted URLs, the 'n'th element of the list was replaced by 'URL\_n' * 'selftext': either an empty string or an elaboration of the question * 'selftext\_urls': similar to 'title\_urls' but for 'self\_text' * 'answers': a list of answers, each answer has: + 'a\_id': a string answer identifier for each answer, corresponding to its ID in the URL Reddit comments dumps. 
+ 'text': the answer text with the URLs normalized + 'score': the number of upvotes the answer had received when the dumps were created * 'answers\_urls': a list of the extracted URLs. All answers use the same list, the numbering of the normalization token continues across answer texts ### Data Splits The data is split into a training, validation and test set for each of the three subreddits. In order to avoid having duplicate questions in across sets, the 'title' field of each of the questions were ranked by their tf-idf match to their nearest neighbor and the ones with the smallest value were used in the test and validation sets. The final split sizes are as follow: Dataset Creation ---------------- ### Curation Rationale ELI5 was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including r/explainlikeimfive, along with the answers that were provided by other users. The rules of the subreddit make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain. ### Source Data #### Initial Data Collection and Normalization The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the Reddit forum hosted on URL. In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period form August 2012 to August 2019. #### Who are the source language producers? The language producers are users of the r/explainlikeimfive, r/askscience, and r/AskHistorians subreddits between 2012 and 2019. No further demographic information was available from the data source. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information The authors removed the speaker IDs from the URL dumps but did not otherwise anonymize the data. Some of the questions and answers are about contemporary public figures or individuals who appeared in the news. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset The purpose of this dataset is to help develop better question answering systems. A system that succeeds at the supported task would be able to provide a coherent answer to even complex questions requiring a multi-step explanation, which is beyond the ability of even the larger existing models. The task is also thought as a test-bed for retrieval model which can show the users which source text was used in generating the answer and allow them to confirm the information provided to them. It should be noted however that the provided answers were written by Reddit users, an information which may be lost if models trained on it are deployed in down-stream applications and presented to users without context. The specific biases this may introduce are discussed in the next section. 
### Discussion of Biases

While Reddit hosts a number of thriving communities with high quality discussions, it is also widely known to have corners where sexism, hate, and harassment are significant issues. See for example the recent post from Reddit founder u/spez outlining some of the ways he thinks the website's historical policies have been responsible for this problem, Adrienne Massanari's 2015 article on GamerGate and follow-up works, or a 2019 Wired article on misogyny on Reddit.

While there has been some recent work in the NLP community on *de-biasing* models (e.g. Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings for word embeddings trained specifically on Reddit data), this problem is far from solved, and the likelihood that a trained model might learn the biases present in the data remains a significant concern.

We still note some encouraging signs for all of these communities: r/explainlikeimfive and r/askscience have similar structures and purposes, and r/askscience was found in 2015 to show medium supportiveness and very low toxicity when compared to other subreddits (see a hackerfall post, URL write-up and supporting data). Meanwhile, the r/AskHistorians rules mention that the admins will not tolerate "*racism, sexism, or any other forms of bigotry*". However, further analysis of whether and to what extent these rules reduce toxicity is still needed.

We also note that given the audience of the Reddit website, which is more broadly used in the US and Europe, the answers will likely present a Western perspective, which is particularly important to note when dealing with historical topics.

### Other Known Limitations

The answers provided in the dataset represent the opinions of Reddit users. While these communities strive to be helpful, they should not be considered to represent a ground truth.

Additional Information
----------------------

### Dataset Curators

The dataset was initially created by Angela Fan, Ethan Perez, Yacine Jernite, Jason Weston, Michael Auli, and David Grangier, during work done at Facebook AI Research (FAIR).

### Licensing Information

The licensing status of the dataset hinges on the legal status of the URL data, which is unclear.

### Contributions

Thanks to @lewtun, @lhoestq, @mariamabarham, @thomwolf, @yjernite for adding this dataset.
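The Supported Tasks section of this card scores LFQA systems with ROUGE. As a small worked example of that evaluation (not part of the original card), the Hugging Face `evaluate` library with the `rouge_score` backend can compare a generated answer against a reference; the two strings below are toy placeholders, and the snippet assumes both packages are installed.

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")

predictions = ["The sky looks blue because air molecules scatter short blue wavelengths of sunlight more than red ones."]
references = ["Sunlight is scattered by the atmosphere, and blue light is scattered the most, so the sky appears blue."]

scores = rouge.compute(predictions=predictions, references=references)
# `scores` includes a "rougeL" entry; the card cites a ROUGE-L of 0.149 for a
# BART-based model with a dense retriever over Wikipedia passages.
print(scores)
```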
[ "### Dataset Summary\n\n\nThe ELI5 dataset is an English-language dataset of questions and answers gathered from three subreddits where users ask factual questions requiring paragraph-length or longer answers. The dataset was created to support the task of open-domain long form abstractive question answering, and covers questions about general topics in its r/explainlikeimfive subset, science in it r/askscience subset, and History in its r/AskHistorians subset.", "### Supported Tasks and Leaderboards\n\n\n* 'abstractive-qa', 'open-domain-abstractive-qa': The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid and asked to retrieve relevant information from a knowledge source (such as Wikipedia), then use it to generate a multi-sentence answer. The model performance is measured by how high its ROUGE score to the reference is. A BART-based model with a dense retriever trained to draw information from Wikipedia passages achieves a ROUGE-L of 0.149.", "### Languages\n\n\nThe text in the dataset is in English, as spoken by Reddit users on the r/explainlikeimfive, r/askscience, and r/AskHistorians subreddits. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises a question, with a 'title' containing the main question and a 'selftext' which sometimes elaborates on it, and a list of answers from the forum sorted by the number of upvotes they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text.\n\n\nAn example from the ELI5 test set looks as follows:", "### Data Fields\n\n\n* 'q\\_id': a string question identifier for each example, corresponding to its ID in the URL Reddit submission dumps.\n* 'subreddit': One of 'explainlikeimfive', 'askscience', or 'AskHistorians', indicating which subreddit the question came from\n* 'title': title of the question, with URLs extracted and replaced by 'URL\\_n' tokens\n* 'title\\_urls': list of the extracted URLs, the 'n'th element of the list was replaced by 'URL\\_n'\n* 'selftext': either an empty string or an elaboration of the question\n* 'selftext\\_urls': similar to 'title\\_urls' but for 'self\\_text'\n* 'answers': a list of answers, each answer has:\n\t+ 'a\\_id': a string answer identifier for each answer, corresponding to its ID in the URL Reddit comments dumps.\n\t+ 'text': the answer text with the URLs normalized\n\t+ 'score': the number of upvotes the answer had received when the dumps were created\n* 'answers\\_urls': a list of the extracted URLs. All answers use the same list, the numbering of the normalization token continues across answer texts", "### Data Splits\n\n\nThe data is split into a training, validation and test set for each of the three subreddits. In order to avoid having duplicate questions in across sets, the 'title' field of each of the questions were ranked by their tf-idf match to their nearest neighbor and the ones with the smallest value were used in the test and validation sets. The final split sizes are as follow:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nELI5 was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine information in a coherent manner. 
The dataset was built by gathering questions that were asked by community members of three subreddits, including r/explainlikeimfive, along with the answers that were provided by other users. The rules of the subreddit make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the Reddit forum hosted on URL.\n\n\nIn order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period form August 2012 to August 2019.", "#### Who are the source language producers?\n\n\nThe language producers are users of the r/explainlikeimfive, r/askscience, and r/AskHistorians subreddits between 2012 and 2019. No further demographic information was available from the data source.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nThe authors removed the speaker IDs from the URL dumps but did not otherwise anonymize the data. Some of the questions and answers are about contemporary public figures or individuals who appeared in the news.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop better question answering systems.\n\n\nA system that succeeds at the supported task would be able to provide a coherent answer to even complex questions requiring a multi-step explanation, which is beyond the ability of even the larger existing models. The task is also thought as a test-bed for retrieval model which can show the users which source text was used in generating the answer and allow them to confirm the information provided to them.\n\n\nIt should be noted however that the provided answers were written by Reddit users, an information which may be lost if models trained on it are deployed in down-stream applications and presented to users without context. The specific biases this may introduce are discussed in the next section.", "### Discussion of Biases\n\n\nWhile Reddit hosts a number of thriving communities with high quality discussions, it is also widely known to have corners where sexism, hate, and harassment are significant issues. See for example the recent post from Reddit founder u/spez outlining some of the ways he thinks the website's historical policies have been responsible for this problem, Adrienne Massanari's 2015 article on GamerGate and follow-up works, or a 2019 Wired article on misogyny on Reddit.\n\n\nWhile there has been some recent work in the NLP community on *de-biasing* models (e.g. 
Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings for word embeddings trained specifically on Reddit data), this problem is far from solved, and the likelihood that a trained model might learn the biases present in the data remains a significant concern.\n\n\nWe still note some encouraging signs for all of these communities: r/explainlikeimfive and r/askscience have similar structures and purposes, and r/askscience was found in 2015 to show medium supportiveness and very low toxicity when compared to other subreddits (see a hackerfall post, URL write-up and supporting data). Meanwhile, the r/AskHistorians rules mention that the admins will not tolerate \"*racism, sexism, or any other forms of bigotry*\". However, further analysis of whether and to what extent these rules reduce toxicity is still needed.\n\n\nWe also note that given the audience of the Reddit website which is more broadly used in the US and Europe, the answers will likely present a Western perspectives, which is particularly important to note when dealing with historical topics.", "### Other Known Limitations\n\n\nThe answers provided in the dataset are represent the opinion of Reddit users. While these communities strive to be helpful, they should not be considered to represent a ground truth.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Angela Fan, Ethan Perez, Yacine Jernite, Jason Weston, Michael Auli, and David Grangier, during work done at Facebook AI Research (FAIR).", "### Licensing Information\n\n\nThe licensing status of the dataset hinges on the legal status of the URL data which is unclear.", "### Contributions\n\n\nThanks to @lewtun, @lhoestq, @mariamabarham, @thomwolf, @yjernite for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-open-domain-abstractive-qa #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-unknown #arxiv-1907.09190 #arxiv-1904.04047 #region-us \n", "### Dataset Summary\n\n\nThe ELI5 dataset is an English-language dataset of questions and answers gathered from three subreddits where users ask factual questions requiring paragraph-length or longer answers. The dataset was created to support the task of open-domain long form abstractive question answering, and covers questions about general topics in its r/explainlikeimfive subset, science in it r/askscience subset, and History in its r/AskHistorians subset.", "### Supported Tasks and Leaderboards\n\n\n* 'abstractive-qa', 'open-domain-abstractive-qa': The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid and asked to retrieve relevant information from a knowledge source (such as Wikipedia), then use it to generate a multi-sentence answer. The model performance is measured by how high its ROUGE score to the reference is. A BART-based model with a dense retriever trained to draw information from Wikipedia passages achieves a ROUGE-L of 0.149.", "### Languages\n\n\nThe text in the dataset is in English, as spoken by Reddit users on the r/explainlikeimfive, r/askscience, and r/AskHistorians subreddits. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises a question, with a 'title' containing the main question and a 'selftext' which sometimes elaborates on it, and a list of answers from the forum sorted by the number of upvotes they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text.\n\n\nAn example from the ELI5 test set looks as follows:", "### Data Fields\n\n\n* 'q\\_id': a string question identifier for each example, corresponding to its ID in the URL Reddit submission dumps.\n* 'subreddit': One of 'explainlikeimfive', 'askscience', or 'AskHistorians', indicating which subreddit the question came from\n* 'title': title of the question, with URLs extracted and replaced by 'URL\\_n' tokens\n* 'title\\_urls': list of the extracted URLs, the 'n'th element of the list was replaced by 'URL\\_n'\n* 'selftext': either an empty string or an elaboration of the question\n* 'selftext\\_urls': similar to 'title\\_urls' but for 'self\\_text'\n* 'answers': a list of answers, each answer has:\n\t+ 'a\\_id': a string answer identifier for each answer, corresponding to its ID in the URL Reddit comments dumps.\n\t+ 'text': the answer text with the URLs normalized\n\t+ 'score': the number of upvotes the answer had received when the dumps were created\n* 'answers\\_urls': a list of the extracted URLs. All answers use the same list, the numbering of the normalization token continues across answer texts", "### Data Splits\n\n\nThe data is split into a training, validation and test set for each of the three subreddits. In order to avoid having duplicate questions in across sets, the 'title' field of each of the questions were ranked by their tf-idf match to their nearest neighbor and the ones with the smallest value were used in the test and validation sets. 
The final split sizes are as follow:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nELI5 was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including r/explainlikeimfive, along with the answers that were provided by other users. The rules of the subreddit make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the Reddit forum hosted on URL.\n\n\nIn order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period form August 2012 to August 2019.", "#### Who are the source language producers?\n\n\nThe language producers are users of the r/explainlikeimfive, r/askscience, and r/AskHistorians subreddits between 2012 and 2019. No further demographic information was available from the data source.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nThe authors removed the speaker IDs from the URL dumps but did not otherwise anonymize the data. Some of the questions and answers are about contemporary public figures or individuals who appeared in the news.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop better question answering systems.\n\n\nA system that succeeds at the supported task would be able to provide a coherent answer to even complex questions requiring a multi-step explanation, which is beyond the ability of even the larger existing models. The task is also thought as a test-bed for retrieval model which can show the users which source text was used in generating the answer and allow them to confirm the information provided to them.\n\n\nIt should be noted however that the provided answers were written by Reddit users, an information which may be lost if models trained on it are deployed in down-stream applications and presented to users without context. The specific biases this may introduce are discussed in the next section.", "### Discussion of Biases\n\n\nWhile Reddit hosts a number of thriving communities with high quality discussions, it is also widely known to have corners where sexism, hate, and harassment are significant issues. See for example the recent post from Reddit founder u/spez outlining some of the ways he thinks the website's historical policies have been responsible for this problem, Adrienne Massanari's 2015 article on GamerGate and follow-up works, or a 2019 Wired article on misogyny on Reddit.\n\n\nWhile there has been some recent work in the NLP community on *de-biasing* models (e.g. 
Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings for word embeddings trained specifically on Reddit data), this problem is far from solved, and the likelihood that a trained model might learn the biases present in the data remains a significant concern.\n\n\nWe still note some encouraging signs for all of these communities: r/explainlikeimfive and r/askscience have similar structures and purposes, and r/askscience was found in 2015 to show medium supportiveness and very low toxicity when compared to other subreddits (see a hackerfall post, URL write-up and supporting data). Meanwhile, the r/AskHistorians rules mention that the admins will not tolerate \"*racism, sexism, or any other forms of bigotry*\". However, further analysis of whether and to what extent these rules reduce toxicity is still needed.\n\n\nWe also note that given the audience of the Reddit website which is more broadly used in the US and Europe, the answers will likely present a Western perspectives, which is particularly important to note when dealing with historical topics.", "### Other Known Limitations\n\n\nThe answers provided in the dataset are represent the opinion of Reddit users. While these communities strive to be helpful, they should not be considered to represent a ground truth.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Angela Fan, Ethan Perez, Yacine Jernite, Jason Weston, Michael Auli, and David Grangier, during work done at Facebook AI Research (FAIR).", "### Licensing Information\n\n\nThe licensing status of the dataset hinges on the legal status of the URL data which is unclear.", "### Contributions\n\n\nThanks to @lewtun, @lhoestq, @mariamabarham, @thomwolf, @yjernite for adding this dataset." ]
[ 123, 117, 148, 67, 108, 310, 101, 147, 4, 101, 60, 17, 10, 14, 63, 169, 412, 50, 51, 30, 37 ]
[ "passage: TAGS\n#task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-open-domain-abstractive-qa #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-unknown #arxiv-1907.09190 #arxiv-1904.04047 #region-us \n### Dataset Summary\n\n\nThe ELI5 dataset is an English-language dataset of questions and answers gathered from three subreddits where users ask factual questions requiring paragraph-length or longer answers. The dataset was created to support the task of open-domain long form abstractive question answering, and covers questions about general topics in its r/explainlikeimfive subset, science in it r/askscience subset, and History in its r/AskHistorians subset.### Supported Tasks and Leaderboards\n\n\n* 'abstractive-qa', 'open-domain-abstractive-qa': The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid and asked to retrieve relevant information from a knowledge source (such as Wikipedia), then use it to generate a multi-sentence answer. The model performance is measured by how high its ROUGE score to the reference is. A BART-based model with a dense retriever trained to draw information from Wikipedia passages achieves a ROUGE-L of 0.149.### Languages\n\n\nThe text in the dataset is in English, as spoken by Reddit users on the r/explainlikeimfive, r/askscience, and r/AskHistorians subreddits. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "passage: ### Data Instances\n\n\nA typical data point comprises a question, with a 'title' containing the main question and a 'selftext' which sometimes elaborates on it, and a list of answers from the forum sorted by the number of upvotes they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text.\n\n\nAn example from the ELI5 test set looks as follows:### Data Fields\n\n\n* 'q\\_id': a string question identifier for each example, corresponding to its ID in the URL Reddit submission dumps.\n* 'subreddit': One of 'explainlikeimfive', 'askscience', or 'AskHistorians', indicating which subreddit the question came from\n* 'title': title of the question, with URLs extracted and replaced by 'URL\\_n' tokens\n* 'title\\_urls': list of the extracted URLs, the 'n'th element of the list was replaced by 'URL\\_n'\n* 'selftext': either an empty string or an elaboration of the question\n* 'selftext\\_urls': similar to 'title\\_urls' but for 'self\\_text'\n* 'answers': a list of answers, each answer has:\n\t+ 'a\\_id': a string answer identifier for each answer, corresponding to its ID in the URL Reddit comments dumps.\n\t+ 'text': the answer text with the URLs normalized\n\t+ 'score': the number of upvotes the answer had received when the dumps were created\n* 'answers\\_urls': a list of the extracted URLs. All answers use the same list, the numbering of the normalization token continues across answer texts### Data Splits\n\n\nThe data is split into a training, validation and test set for each of the three subreddits. In order to avoid having duplicate questions in across sets, the 'title' field of each of the questions were ranked by their tf-idf match to their nearest neighbor and the ones with the smallest value were used in the test and validation sets. 
The final split sizes are as follow:\n\n\n\nDataset Creation\n----------------", "passage: ### Curation Rationale\n\n\nELI5 was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including r/explainlikeimfive, along with the answers that were provided by other users. The rules of the subreddit make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.### Source Data#### Initial Data Collection and Normalization\n\n\nThe data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the Reddit forum hosted on URL.\n\n\nIn order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period form August 2012 to August 2019.#### Who are the source language producers?\n\n\nThe language producers are users of the r/explainlikeimfive, r/askscience, and r/AskHistorians subreddits between 2012 and 2019. No further demographic information was available from the data source.### Annotations\n\n\nThe dataset does not contain any additional annotations.#### Annotation process\n\n\n[N/A]#### Who are the annotators?\n\n\n[N/A]### Personal and Sensitive Information\n\n\nThe authors removed the speaker IDs from the URL dumps but did not otherwise anonymize the data. Some of the questions and answers are about contemporary public figures or individuals who appeared in the news.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop better question answering systems.\n\n\nA system that succeeds at the supported task would be able to provide a coherent answer to even complex questions requiring a multi-step explanation, which is beyond the ability of even the larger existing models. The task is also thought as a test-bed for retrieval model which can show the users which source text was used in generating the answer and allow them to confirm the information provided to them.\n\n\nIt should be noted however that the provided answers were written by Reddit users, an information which may be lost if models trained on it are deployed in down-stream applications and presented to users without context. The specific biases this may introduce are discussed in the next section." ]
8bfd40dfeebbf20e10e651965d9f31b94c9c0620
# Dataset Card for ELI5-Category

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [ELI5-Category homepage](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/)
- **Repository:** [ELI5-Category repository](https://github.com/rexarski/ANLY580-final-project)
- **Point of Contact:** [Jingsong Gao](mailto:jg2109@georgetown.edu)

### Dataset Summary

The ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. It's an English-language dataset of questions and answers gathered from the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit where users ask factual questions requiring paragraph-length or longer answers. After 2017, a tagging system was introduced to this subreddit so that the questions can be categorized into different topics according to their tags. Since the training and validation sets are built from questions in different topics, the dataset is expected to alleviate the train/validation overlap issue in the original [ELI5 dataset](https://huggingface.co/datasets/eli5).

### Supported Tasks and Leaderboards

- `abstractive-qa`, `open-domain-abstractive-qa`: The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid question and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer.

### Languages

The text in the dataset is in English, as spoken by Reddit users on the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit. The associated BCP-47 code is `en`.

## Dataset Structure

### Data Instances

The structure of this dataset is very similar to the original [ELI5 dataset](https://huggingface.co/datasets/eli5). A typical data point comprises a question, with a `title` containing the main question and a `selftext` which sometimes elaborates on it, and a list of answers from the forum sorted by the scores they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text.

In addition to the original ELI5 dataset, each data point also has a `category` field. There are 11 common values of `category` in this dataset: `Biology`, `Chemistry`, `Culture`, `Earth Science`, `Economics`, `Engineering`, `Mathematics`, `Other`, `Physics`, `Psychology`, `Technology`, and a special `category`: `Repost` indicates that the same question has been asked before.
An example from the ELI5-Category set looks as follows:
```
{'q_id': '5lcm18',
 'title': 'Why do old games running on new hardware still have technical issues ?',
 'selftext': 'I am playing some mega man games on my Xbox One and experience slowdown when there are a lot of enemies on screen . but the Xbox One is significantly more powerful than the NES , so why is there still slowdown on this hardware ?',
 'category': 'Engineering',
 'subreddit': 'explainlikeimfive',
 'answers': {'a_id': ['dbuo48e', 'dbusfve'],
             'text': ["The XBox is emulating NES hardware and running the emulation at a set speed . If it ran it at as fast as possible , then it would be several times faster than the original NES game and would be unplayable . I ca n't speak for Mega Man exactly , but older games tended to run on a cycle locked to the screen refresh which was a fixed 60Hz or 50Hz . There was only one piece of hardware they ran on , so there was no need to adjust for different hardware speeds .",
                      "In that case , it 's probably on purpose - they want to emulate the experience as closely as possible , even including the slowdown and sprite flickering . Some emulators let you turn it off , but it 's usually turned on by default . In other cases , like if you 're trying to emulate PS2 games on your PC , the game might just run really slow in general . Even though your PC is way more powerful than a PS2 , it has to \" translate \" from PS2 language to PC language in realtime , which is much more difficult than running PS2 code on the PS2 itself ."],
             'score': [13, 3],
             'text_urls': [[],[]]},
 'title_urls': {'url': []},
 'selftext_urls': {'url': []}}
```

### Data Fields

- `q_id`: a string question identifier for each example, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/submissions/) Reddit submission dumps
- `subreddit`: always `explainlikeimfive`, indicating which subreddit the question came from
- `category`: tag of the question, the possible values are listed above.
- `title`: title of the question, with URLs extracted and replaced by `URL_n` tokens
- `title_urls`: list of the extracted URLs, the `n`th element of the list was replaced by `URL_n`
- `selftext`: either an empty string or an elaboration of the question
- `selftext_urls`: similar to `title_urls` but for `selftext`
- `answers`: a list of answers, each answer has:
  - `a_id`: a string answer identifier for each answer, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/comments/) Reddit comments dumps.
  - `text`: the answer text with the URLs normalized
  - `score`: the number of upvotes minus the number of downvotes the answer had received when the dumps were created
  - `text_urls`: lists of the extracted URLs for every answer

### Data Splits

In order to avoid having duplicate questions across sets, three non-overlapping subsets of `category` are used in the training, validation and test sets. Also, a special validation set contains all the questions in the `Repost` category. A valid retriever-generator model should have consistent performance on both validation sets.
The final split sizes are as follows:

|                 | Train | Valid | Valid2 | Test |
| --------------- | ----- | ----- | ------ | ---- |
| `Biology`       | 32769 |       |        |      |
| `Chemistry`     |  6633 |       |        |      |
| `Culture`       |       |  5446 |        |      |
| `Earth Science` |   677 |       |        |      |
| `Economics`     |  5901 |       |        |      |
| `Engineering`   |       |       |        | 5411 |
| `Mathematics`   |  1912 |       |        |      |
| `Other`         | 19312 |       |        |      |
| `Physics`       | 10196 |       |        |      |
| `Psychology`    |   338 |       |        |      |
| `Technology`    | 14034 |       |        |      |
| `Repost`        |       |       |   2375 |      |
| **Total**       | 91772 |  5446 |   2375 | 5411 |

(A short sketch of loading these splits programmatically is given after the Considerations section below.)

## Dataset Creation

### Curation Rationale

ELI5-Category was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine the information in a coherent manner. The dataset was built by gathering questions that were asked by community members of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit, along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well-established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.

### Source Data

#### Initial Data Collection and Normalization

The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the [Reddit forum](https://www.reddit.com/) hosted on [Pushshift.io](https://files.pushshift.io/reddit/).

In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from January 2017 to June 2021.

#### Who are the source language producers?

The language producers are users of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit between 2017 and 2021. No further demographic information was available from the data source.

### Annotations

The dataset contains the `category` as an additional annotation for the topics of questions.

#### Annotation process

The dataset is auto-annotated by the tags of posts in the [Reddit forum](https://www.reddit.com/).

#### Who are the annotators?

The annotators are users/administrators of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit between 2017 and 2021. No further demographic information was available from the data source.

### Personal and Sensitive Information

The authors removed the speaker IDs from the [Pushshift.io](https://files.pushshift.io/reddit/) dumps but did not otherwise anonymize the data. Some questions and answers are about contemporary public figures or individuals who appeared in the news.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset has a similar social impact to the original ELI5 dataset: see [Social Impact of Dataset](https://huggingface.co/datasets/eli5#social-impact-of-dataset).

### Discussion of Biases

The dataset has similar considerations of biases to the original ELI5 dataset: see [Discussion of Biases](https://huggingface.co/datasets/eli5#discussion-of-biases).

### Other Known Limitations

The dataset has similar limitations to the original ELI5 dataset: see [Other Known Limitations](https://huggingface.co/datasets/eli5#other-known-limitations).
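As noted after the split-size table above, the following is a rough sketch of loading ELI5-Category with the Hugging Face `datasets` library and checking which categories end up in each split. The split names (`train`, `validation1`, `validation2`, `test`) follow the dataset configuration; the snippet assumes the `datasets` package is installed and that the underlying data is still downloadable.

```python
from collections import Counter
from datasets import load_dataset

eli5c = load_dataset("eli5_category")  # splits: train, validation1, validation2, test

for split_name, split in eli5c.items():
    # Count the most frequent `category` tags per split to confirm the splits do not overlap topically
    print(split_name, len(split), Counter(split["category"]).most_common(3))
```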
## Additional Information

### Dataset Curators

The dataset was initially created by Jingsong Gao, Qinren Zhou, Rui Qiu, during a course project of `ANLY 580`: NLP for Data Analytics at Georgetown University.

### Licensing Information

The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear.

### Citation Information

```
@inproceedings{eli5-category,
  author = {Jingsong Gao and Qingren Zhou and Rui Qiu},
  title  = {{ELI5-Category:} A categorized open-domain QA dataset},
  year   = {2021}
}
```

### Contributions

Thanks to [@jingshenSN2](https://github.com/jingshenSN2), [@QinrenZhou](https://github.com/QinrenZhou), [@rexarski](https://github.com/rexarski) for adding this dataset.
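Finally, the `URL_n` normalization described under Data Fields can be approximated with a few lines of Python. This is an illustrative re-implementation rather than the authors' actual preprocessing script, and the regular expression is a simplification.

```python
import re

URL_PATTERN = re.compile(r"https?://\S+")

def normalize_urls(text):
    """Pull URLs out of a text field into a list and replace each one in-place
    with a generic URL_n token, mirroring the fields described above."""
    urls = URL_PATTERN.findall(text)
    for n, url in enumerate(urls):
        text = text.replace(url, f"URL_{n}", 1)
    return text, urls

text, urls = normalize_urls("See https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules for the rules.")
# text -> 'See URL_0 for the rules.'
# urls -> ['https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules']
```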
eli5_category
[ "task_categories:text2text-generation", "task_ids:abstractive-qa", "task_ids:open-domain-abstractive-qa", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|eli5", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|eli5"], "task_categories": ["text2text-generation"], "task_ids": ["abstractive-qa", "open-domain-abstractive-qa"], "pretty_name": "ELI5-Category", "dataset_info": {"features": [{"name": "q_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "selftext", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "answers", "struct": [{"name": "a_id", "sequence": "string"}, {"name": "text", "sequence": "string"}, {"name": "score", "sequence": "int32"}, {"name": "text_urls", "sequence": {"sequence": "string"}}]}, {"name": "title_urls", "sequence": "string"}, {"name": "selftext_urls", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 166409797, "num_examples": 91772}, {"name": "validation1", "num_bytes": 13150585, "num_examples": 5446}, {"name": "validation2", "num_bytes": 4737744, "num_examples": 2375}, {"name": "test", "num_bytes": 10419098, "num_examples": 5411}], "download_size": 72921829, "dataset_size": 194717224}}
2024-01-18T11:03:11+00:00
[]
[ "en" ]
TAGS #task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-open-domain-abstractive-qa #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|eli5 #language-English #license-unknown #region-us
Dataset Card for ELI5-Category ============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: ELI5-Category homepage * Repository: ELI5-Category repository * Point of Contact: Jingsong Gao ### Dataset Summary The ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. It's an English-language dataset of questions and answers gathered from the r/explainlikeimfive subreddit where users ask factual questions requiring paragraph-length or longer answers. After 2017, a tagging system was introduced to this subreddit so that the questions can be categorized into different topics according to their tags. Since the training and validation set is built by questions in different topics, the dataset is expected to alleviate the train/validation overlapping issue in the original ELI5 dataset. ### Supported Tasks and Leaderboards * 'abstractive-qa', 'open-domain-abstractive-qa': The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid and asked to retrieve relevant information from a knowledge source (such as Wikipedia), then use it to generate a multi-sentence answer. ### Languages The text in the dataset is in English, as spoken by Reddit users on the r/explainlikeimfive subreddit. The associated BCP-47 code is 'en'. Dataset Structure ----------------- ### Data Instances The structure of this dataset is very similar to the original ELI5 dataset. A typical data point comprises a question, with a 'title' containing the main question and a 'selftext' which sometimes elaborates on it, and a list of answers from the forum sorted by scores they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text. In addition to the original ELI5 dataset, the data point also has a 'category' field. There are 11 common values of 'category' in this dataset: 'Biology','Chemistry','Culture','Earth Science','Economics','Engineering','Mathematics','Other','Physics','Psychology','Technology', and a special 'category': 'Repost' indicates the same question has been asked before. An example from the ELI5-Category set looks as follows: ### Data Fields * 'q\_id': a string question identifier for each example, corresponding to its ID in the URL Reddit submission dumps * 'subreddit': always 'explainlikeimfive', indicating which subreddit the question came from * 'category': tag of the question, the possible values are listed above. * 'title': title of the question, with URLs extracted and replaced by 'URL\_n' tokens * 'title\_urls': list of the extracted URLs, the 'n'th element of the list was replaced by 'URL\_n' * 'selftext': either an empty string or an elaboration of the question * 'selftext\_urls': similar to 'title\_urls' but for 'self\_text' * 'answers': a list of answers, each answer has: + 'a\_id': a string answer identifier for each answer, corresponding to its ID in the URL Reddit comments dumps. 
+ 'text': the answer text with the URLs normalized + 'score': the number of upvotes - the number of downvotes the answer had received when the dumps were created + 'text\_urls': lists of the extracted URLs for every answer ### Data Splits In order to avoid having duplicate questions across sets, three non-overlapping subsets of 'category' are used in the training, validation and test set. Also, a special validation set contains all the questions in the 'Repost' category. A valid retriever-generator model should have consistent performances on both validation sets. The final split sizes are as follows: Dataset Creation ---------------- ### Curation Rationale ELI5-Category was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine the information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including r/explainlikeimfive, along with the answers that were provided by other users. The rules of the subreddit make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well-established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain. ### Source Data #### Initial Data Collection and Normalization The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the Reddit forum hosted on URL. In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from January 2017 to June 2021. #### Who are the source language producers? The language producers are users of the r/explainlikeimfive subreddit between 2017 and 2021. No further demographic information was available from the data source. ### Annotations The dataset contains the 'category' as an additional annotation for the topics of questions. #### Annotation process The dataset is auto-annotated by the tags of posts in the Reddit forum. #### Who are the annotators? The annotators are users/administrators of the r/explainlikeimfive subreddit between 2017 and 2021. No further demographic information was available from the data source. ### Personal and Sensitive Information The authors removed the speaker IDs from the URL dumps but did not otherwise anonymize the data. Some questions and answers are about contemporary public figures or individuals who appeared in the news. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset The dataset has a similar social impact to the original ELI5 dataset Social Impact of Dataset. ### Discussion of Biases The dataset has similar considerations of biases to the original ELI5 dataset Discussion of Biases. ### Other Known Limitations The dataset has similar limitations to the original ELI5 dataset Other Known Limitations. Additional Information ---------------------- ### Dataset Curators The dataset was initially created by Jingsong Gao, Qinren Zhou, Rui Qiu, during a course project of 'ANLY 580': NLP for Data Analytics at Georgetown University. ### Licensing Information The licensing status of the dataset hinges on the legal status of the URL data which is unclear. 
### Contributions Thanks to @jingshenSN2, @QinrenZhou, @rexarski for adding this dataset.
[ "### Dataset Summary\n\n\nThe ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. It's an English-language dataset of questions and answers gathered from the r/explainlikeimfive subreddit where users ask factual questions requiring paragraph-length or longer answers. After 2017, a tagging system was introduced to this subreddit so that the questions can be categorized into different topics according to their tags. Since the training and validation set is built by questions in different topics, the dataset is expected to alleviate the train/validation overlapping issue in the original ELI5 dataset.", "### Supported Tasks and Leaderboards\n\n\n* 'abstractive-qa', 'open-domain-abstractive-qa': The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid and asked to retrieve relevant information from a knowledge source (such as Wikipedia), then use it to generate a multi-sentence answer.", "### Languages\n\n\nThe text in the dataset is in English, as spoken by Reddit users on the r/explainlikeimfive subreddit. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe structure of this dataset is very similar to the original ELI5 dataset. A typical data point comprises a question, with a 'title' containing the main question and a 'selftext' which sometimes elaborates on it, and a list of answers from the forum sorted by scores they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text. \n\nIn addition to the original ELI5 dataset, the data point also has a 'category' field. There are 11 common values of 'category' in this dataset: 'Biology','Chemistry','Culture','Earth Science','Economics','Engineering','Mathematics','Other','Physics','Psychology','Technology', and a special 'category': 'Repost' indicates the same question has been asked before.\n\n\nAn example from the ELI5-Category set looks as follows:", "### Data Fields\n\n\n* 'q\\_id': a string question identifier for each example, corresponding to its ID in the URL Reddit submission dumps\n* 'subreddit': always 'explainlikeimfive', indicating which subreddit the question came from\n* 'category': tag of the question, the possible values are listed above.\n* 'title': title of the question, with URLs extracted and replaced by 'URL\\_n' tokens\n* 'title\\_urls': list of the extracted URLs, the 'n'th element of the list was replaced by 'URL\\_n'\n* 'selftext': either an empty string or an elaboration of the question\n* 'selftext\\_urls': similar to 'title\\_urls' but for 'self\\_text'\n* 'answers': a list of answers, each answer has:\n\t+ 'a\\_id': a string answer identifier for each answer, corresponding to its ID in the URL Reddit comments dumps.\n\t+ 'text': the answer text with the URLs normalized\n\t+ 'score': the number of upvotes - the number of downvotes the answer had received when the dumps were created\n\t+ 'text\\_urls': lists of the extracted URLs for every answer", "### Data Splits\n\n\nIn order to avoid having duplicate questions across sets, three non-overlapping subsets of 'category' are used in the training, validation and test set. Also, a special validation set contains all the questions in the 'Repost' category. A valid retriever-generator model should have consistent performances on both validation sets. 
\n\nThe final split sizes are as follows:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nELI5-Category was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine the information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including r/explainlikeimfive, along with the answers that were provided by other users. The rules of the subreddit make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well-established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the Reddit forum hosted on URL.\n\n\nIn order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from January 2017 to June 2021.", "#### Who are the source language producers?\n\n\nThe language producers are users of the r/explainlikeimfive subreddit between 2017 and 2021. No further demographic information was available from the data source.", "### Annotations\n\n\nThe dataset contains the 'category' as an additional annotation for the topics of questions.", "#### Annotation process\n\n\nThe dataset is auto-annotated by the tags of posts in the Reddit forum.", "#### Who are the annotators?\n\n\nThe annotators are users/administrators of the r/explainlikeimfive subreddit between 2017 and 2021. No further demographic information was available from the data source.", "### Personal and Sensitive Information\n\n\nThe authors removed the speaker IDs from the URL dumps but did not otherwise anonymize the data. Some questions and answers are about contemporary public figures or individuals who appeared in the news.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe dataset has a similar social impact to the original ELI5 dataset Social Impact of Dataset.", "### Discussion of Biases\n\n\nThe dataset has similar considerations of biases to the original ELI5 dataset Discussion of Biases.", "### Other Known Limitations\n\n\nThe dataset has similar limitations to the original ELI5 dataset Other Known Limitations.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Jingsong Gao, Qinren Zhou, Rui Qiu, during a course project of 'ANLY 580': NLP for Data Analytics at Georgetown University.", "### Licensing Information\n\n\nThe licensing status of the dataset hinges on the legal status of the URL data which is unclear.", "### Contributions\n\n\nThanks to @jingshenSN2, @QinrenZhou, @rexarski for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-open-domain-abstractive-qa #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|eli5 #language-English #license-unknown #region-us \n", "### Dataset Summary\n\n\nThe ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. It's an English-language dataset of questions and answers gathered from the r/explainlikeimfive subreddit where users ask factual questions requiring paragraph-length or longer answers. After 2017, a tagging system was introduced to this subreddit so that the questions can be categorized into different topics according to their tags. Since the training and validation set is built by questions in different topics, the dataset is expected to alleviate the train/validation overlapping issue in the original ELI5 dataset.", "### Supported Tasks and Leaderboards\n\n\n* 'abstractive-qa', 'open-domain-abstractive-qa': The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid and asked to retrieve relevant information from a knowledge source (such as Wikipedia), then use it to generate a multi-sentence answer.", "### Languages\n\n\nThe text in the dataset is in English, as spoken by Reddit users on the r/explainlikeimfive subreddit. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThe structure of this dataset is very similar to the original ELI5 dataset. A typical data point comprises a question, with a 'title' containing the main question and a 'selftext' which sometimes elaborates on it, and a list of answers from the forum sorted by scores they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text. \n\nIn addition to the original ELI5 dataset, the data point also has a 'category' field. 
There are 11 common values of 'category' in this dataset: 'Biology','Chemistry','Culture','Earth Science','Economics','Engineering','Mathematics','Other','Physics','Psychology','Technology', and a special 'category': 'Repost' indicates the same question has been asked before.\n\n\nAn example from the ELI5-Category set looks as follows:", "### Data Fields\n\n\n* 'q\\_id': a string question identifier for each example, corresponding to its ID in the URL Reddit submission dumps\n* 'subreddit': always 'explainlikeimfive', indicating which subreddit the question came from\n* 'category': tag of the question, the possible values are listed above.\n* 'title': title of the question, with URLs extracted and replaced by 'URL\\_n' tokens\n* 'title\\_urls': list of the extracted URLs, the 'n'th element of the list was replaced by 'URL\\_n'\n* 'selftext': either an empty string or an elaboration of the question\n* 'selftext\\_urls': similar to 'title\\_urls' but for 'self\\_text'\n* 'answers': a list of answers, each answer has:\n\t+ 'a\\_id': a string answer identifier for each answer, corresponding to its ID in the URL Reddit comments dumps.\n\t+ 'text': the answer text with the URLs normalized\n\t+ 'score': the number of upvotes - the number of downvotes the answer had received when the dumps were created\n\t+ 'text\\_urls': lists of the extracted URLs for every answer", "### Data Splits\n\n\nIn order to avoid having duplicate questions across sets, three non-overlapping subsets of 'category' are used in the training, validation and test set. Also, a special validation set contains all the questions in the 'Repost' category. A valid retriever-generator model should have consistent performances on both validation sets. \n\nThe final split sizes are as follows:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nELI5-Category was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine the information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including r/explainlikeimfive, along with the answers that were provided by other users. The rules of the subreddit make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well-established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the Reddit forum hosted on URL.\n\n\nIn order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from January 2017 to June 2021.", "#### Who are the source language producers?\n\n\nThe language producers are users of the r/explainlikeimfive subreddit between 2017 and 2021. 
No further demographic information was available from the data source.", "### Annotations\n\n\nThe dataset contains the 'category' as an additional annotation for the topics of questions.", "#### Annotation process\n\n\nThe dataset is auto-annotated by the tags of posts in the Reddit forum.", "#### Who are the annotators?\n\n\nThe annotators are users/administrators of the r/explainlikeimfive subreddit between 2017 and 2021. No further demographic information was available from the data source.", "### Personal and Sensitive Information\n\n\nThe authors removed the speaker IDs from the URL dumps but did not otherwise anonymize the data. Some questions and answers are about contemporary public figures or individuals who appeared in the news.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe dataset has a similar social impact to the original ELI5 dataset Social Impact of Dataset.", "### Discussion of Biases\n\n\nThe dataset has similar considerations of biases to the original ELI5 dataset Discussion of Biases.", "### Other Known Limitations\n\n\nThe dataset has similar limitations to the original ELI5 dataset Other Known Limitations.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Jingsong Gao, Qinren Zhou, Rui Qiu, during a course project of 'ANLY 580': NLP for Data Analytics at Georgetown University.", "### Licensing Information\n\n\nThe licensing status of the dataset hinges on the legal status of the URL data which is unclear.", "### Contributions\n\n\nThanks to @jingshenSN2, @QinrenZhou, @rexarski for adding this dataset." ]
[ 108, 157, 95, 52, 252, 299, 101, 154, 4, 102, 46, 28, 24, 48, 61, 29, 34, 35, 53, 30, 30 ]
[ "passage: TAGS\n#task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-open-domain-abstractive-qa #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|eli5 #language-English #license-unknown #region-us \n### Dataset Summary\n\n\nThe ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. It's an English-language dataset of questions and answers gathered from the r/explainlikeimfive subreddit where users ask factual questions requiring paragraph-length or longer answers. After 2017, a tagging system was introduced to this subreddit so that the questions can be categorized into different topics according to their tags. Since the training and validation set is built by questions in different topics, the dataset is expected to alleviate the train/validation overlapping issue in the original ELI5 dataset.### Supported Tasks and Leaderboards\n\n\n* 'abstractive-qa', 'open-domain-abstractive-qa': The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid and asked to retrieve relevant information from a knowledge source (such as Wikipedia), then use it to generate a multi-sentence answer.### Languages\n\n\nThe text in the dataset is in English, as spoken by Reddit users on the r/explainlikeimfive subreddit. The associated BCP-47 code is 'en'.\n\n\nDataset Structure\n-----------------", "passage: ### Data Instances\n\n\nThe structure of this dataset is very similar to the original ELI5 dataset. A typical data point comprises a question, with a 'title' containing the main question and a 'selftext' which sometimes elaborates on it, and a list of answers from the forum sorted by scores they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text. \n\nIn addition to the original ELI5 dataset, the data point also has a 'category' field. 
There are 11 common values of 'category' in this dataset: 'Biology','Chemistry','Culture','Earth Science','Economics','Engineering','Mathematics','Other','Physics','Psychology','Technology', and a special 'category': 'Repost' indicates the same question has been asked before.\n\n\nAn example from the ELI5-Category set looks as follows:### Data Fields\n\n\n* 'q\\_id': a string question identifier for each example, corresponding to its ID in the URL Reddit submission dumps\n* 'subreddit': always 'explainlikeimfive', indicating which subreddit the question came from\n* 'category': tag of the question, the possible values are listed above.\n* 'title': title of the question, with URLs extracted and replaced by 'URL\\_n' tokens\n* 'title\\_urls': list of the extracted URLs, the 'n'th element of the list was replaced by 'URL\\_n'\n* 'selftext': either an empty string or an elaboration of the question\n* 'selftext\\_urls': similar to 'title\\_urls' but for 'self\\_text'\n* 'answers': a list of answers, each answer has:\n\t+ 'a\\_id': a string answer identifier for each answer, corresponding to its ID in the URL Reddit comments dumps.\n\t+ 'text': the answer text with the URLs normalized\n\t+ 'score': the number of upvotes - the number of downvotes the answer had received when the dumps were created\n\t+ 'text\\_urls': lists of the extracted URLs for every answer### Data Splits\n\n\nIn order to avoid having duplicate questions across sets, three non-overlapping subsets of 'category' are used in the training, validation and test set. Also, a special validation set contains all the questions in the 'Repost' category. A valid retriever-generator model should have consistent performances on both validation sets. \n\nThe final split sizes are as follows:\n\n\n\nDataset Creation\n----------------", "passage: ### Curation Rationale\n\n\nELI5-Category was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine the information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including r/explainlikeimfive, along with the answers that were provided by other users. The rules of the subreddit make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well-established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.### Source Data#### Initial Data Collection and Normalization\n\n\nThe data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the Reddit forum hosted on URL.\n\n\nIn order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from January 2017 to June 2021.#### Who are the source language producers?\n\n\nThe language producers are users of the r/explainlikeimfive subreddit between 2017 and 2021. 
No further demographic information was available from the data source.### Annotations\n\n\nThe dataset contains the 'category' as an additional annotation for the topics of questions.#### Annotation process\n\n\nThe dataset is auto-annotated by the tags of posts in the Reddit forum.#### Who are the annotators?\n\n\nThe annotators are users/administrators of the r/explainlikeimfive subreddit between 2017 and 2021. No further demographic information was available from the data source.### Personal and Sensitive Information\n\n\nThe authors removed the speaker IDs from the URL dumps but did not otherwise anonymize the data. Some questions and answers are about contemporary public figures or individuals who appeared in the news.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nThe dataset has a similar social impact to the original ELI5 dataset Social Impact of Dataset.### Discussion of Biases\n\n\nThe dataset has similar considerations of biases to the original ELI5 dataset Discussion of Biases.### Other Known Limitations\n\n\nThe dataset has similar limitations to the original ELI5 dataset Other Known Limitations.\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nThe dataset was initially created by Jingsong Gao, Qinren Zhou, Rui Qiu, during a course project of 'ANLY 580': NLP for Data Analytics at Georgetown University.### Licensing Information\n\n\nThe licensing status of the dataset hinges on the legal status of the URL data which is unclear." ]
5bf8a485252aa1cdddd09bf5dfd026b9a8d25e57
# Dataset Card for EMEA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/EMEA.php - **Repository:** None - **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary To load a language pair which isn't part of the config, all you need to do is specify the language code as pairs. You can find the valid pairs in Homepage section of Dataset Description: http://opus.nlpl.eu/EMEA.php E.g. `dataset = load_dataset("emea", lang1="en", lang2="nl")` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Here is an example of the `en-nl` configuration: ``` {'id': '4', 'translation': {'en': 'EPAR summary for the public', 'nl': 'EPAR-samenvatting voor het publiek'}} ``` ### Data Fields The data fields are: - id: id of the sentence pair - translation: a dictionary of the form {lang1: text_in_lang1, lang2: text_in_lang2} ### Data Splits Sizes of some language pairs: | name |train| |----------|----:| |bg-el|1044065| |cs-et|1053164| |de-mt|1000532| |fr-sk|1062753| |es-lt|1051370| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @InProceedings{TIEDEMANN12.463, author = {J{\"o}rg Tiedemann}, title = {Parallel Data, Tools and Interfaces in OPUS}, booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)}, year = {2012}, month = {may}, date = {23-25}, address = {Istanbul, Turkey}, editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis}, publisher = {European Language Resources Association (ELRA)}, isbn = {978-2-9517408-7-7}, language = {english} } ``` ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
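The pair-loading behaviour described in the Dataset Summary can be sketched as follows, assuming the Hugging Face `datasets` library (recent `datasets` releases may additionally require `trust_remote_code=True` for script-based datasets such as this one):

```python
from datasets import load_dataset

# Load a pair that is not a predefined config by passing the two language
# codes directly; valid pairs are listed on the OPUS EMEA homepage.
emea = load_dataset("emea", lang1="en", lang2="nl")

# Each example holds an "id" and a "translation" dict keyed by language code.
example = emea["train"][0]
print(example["id"])
print(example["translation"]["en"])
print(example["translation"]["nl"])
```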
emea
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "language:fi", "language:fr", "language:hu", "language:it", "language:lt", "language:lv", "language:mt", "language:nl", "language:pl", "language:pt", "language:ro", "language:sk", "language:sl", "language:sv", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "EMEA", "config_names": ["bg-el", "cs-et", "de-mt", "es-lt", "fr-sk"], "dataset_info": [{"config_name": "bg-el", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["bg", "el"]}}}], "splits": [{"name": "train", "num_bytes": 296160562, "num_examples": 1044065}], "download_size": 54531690, "dataset_size": 296160562}, {"config_name": "cs-et", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["cs", "et"]}}}], "splits": [{"name": "train", "num_bytes": 180261167, "num_examples": 1053164}], "download_size": 36065651, "dataset_size": 180261167}, {"config_name": "de-mt", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["de", "mt"]}}}], "splits": [{"name": "train", "num_bytes": 182976918, "num_examples": 1000532}], "download_size": 36665427, "dataset_size": 182976918}, {"config_name": "fr-sk", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["fr", "sk"]}}}], "splits": [{"name": "train", "num_bytes": 193605247, "num_examples": 1062753}], "download_size": 38916074, "dataset_size": 193605247}, {"config_name": "es-lt", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["es", "lt"]}}}], "splits": [{"name": "train", "num_bytes": 182623676, "num_examples": 1051370}], "download_size": 35329033, "dataset_size": 182623676}]}
2024-01-18T11:03:12+00:00
[]
[ "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Polish #language-Portuguese #language-Romanian #language-Slovak #language-Slovenian #language-Swedish #license-unknown #region-us
Dataset Card for EMEA ===================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: None * Paper: URL * Leaderboard: * Point of Contact: ### Dataset Summary To load a language pair which isn't part of the config, all you need to do is specify the language code as pairs. You can find the valid pairs in Homepage section of Dataset Description: URL E.g. 'dataset = load\_dataset("emea", lang1="en", lang2="nl")' ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances Here is an example of the 'en-nl' configuration: ### Data Fields The data fields are: * id: id of the sentence pair * translation: a dictionary of the form {lang1: text\_in\_lang1, lang2: text\_in\_lang2} ### Data Splits Sizes of some language pairs: Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @abhishekkrthakur for adding this dataset.
[ "### Dataset Summary\n\n\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n\n'dataset = load\\_dataset(\"emea\", lang1=\"en\", lang2=\"nl\")'", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nHere is an example of the 'en-nl' configuration:", "### Data Fields\n\n\nThe data fields are:\n\n\n* id: id of the sentence pair\n* translation: a dictionary of the form {lang1: text\\_in\\_lang1, lang2: text\\_in\\_lang2}", "### Data Splits\n\n\nSizes of some language pairs:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Polish #language-Portuguese #language-Romanian #language-Slovak #language-Slovenian #language-Swedish #license-unknown #region-us \n", "### Dataset Summary\n\n\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n\n'dataset = load\\_dataset(\"emea\", lang1=\"en\", lang2=\"nl\")'", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nHere is an example of the 'en-nl' configuration:", "### Data Fields\n\n\nThe data fields are:\n\n\n* id: id of the sentence pair\n* translation: a dictionary of the form {lang1: text\\_in\\_lang1, lang2: text\\_in\\_lang2}", "### Data Splits\n\n\nSizes of some language pairs:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ 193, 81, 10, 11, 19, 53, 19, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 20 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Polish #language-Portuguese #language-Romanian #language-Slovak #language-Slovenian #language-Swedish #license-unknown #region-us \n### Dataset Summary\n\n\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n\n'dataset = load\\_dataset(\"emea\", lang1=\"en\", lang2=\"nl\")'### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nHere is an example of the 'en-nl' configuration:### Data Fields\n\n\nThe data fields are:\n\n\n* id: id of the sentence pair\n* translation: a dictionary of the form {lang1: text\\_in\\_lang1, lang2: text\\_in\\_lang2}### Data Splits\n\n\nSizes of some language pairs:\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information" ]
0465d88e20dbbeb445476d773235de09e0e63418
# Dataset Card for "emo" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.aclweb.org/anthology/S19-2005/](https://www.aclweb.org/anthology/S19-2005/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3.37 MB - **Size of the generated dataset:** 2.85 MB - **Total amount of disk used:** 6.22 MB ### Dataset Summary In this dataset, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### emo2019 - **Size of downloaded dataset files:** 3.37 MB - **Size of the generated dataset:** 2.85 MB - **Total amount of disk used:** 6.22 MB An example of 'train' looks as follows. ``` { "label": 0, "text": "don't worry i'm girl hmm how do i know if you are what's ur name" } ``` ### Data Fields The data fields are the same among all splits. #### emo2019 - `text`: a `string` feature. - `label`: a classification label, with possible values including `others` (0), `happy` (1), `sad` (2), `angry` (3). ### Data Splits | name |train|test| |-------|----:|---:| |emo2019|30160|5509| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{chatterjee-etal-2019-semeval, title={SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text}, author={Ankush Chatterjee and Kedhar Nath Narahari and Meghana Joshi and Puneet Agrawal}, booktitle={Proceedings of the 13th International Workshop on Semantic Evaluation}, year={2019}, address={Minneapolis, Minnesota, USA}, publisher={Association for Computational Linguistics}, url={https://www.aclweb.org/anthology/S19-2005}, doi={10.18653/v1/S19-2005}, pages={39--48}, abstract={In this paper, we present the SemEval-2019 Task 3 - EmoContext: Contextual Emotion Detection in Text. Lack of facial expressions and voice modulations make detecting emotions in text a challenging problem. For instance, as humans, on reading ''Why don't you ever text me!'' we can either interpret it as a sad or angry emotion and the same ambiguity exists for machines. However, the context of dialogue can prove helpful in detection of the emotion. In this task, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others. To facilitate the participation in this task, textual dialogues from user interaction with a conversational agent were taken and annotated for emotion classes after several data processing steps. A training data set of 30160 dialogues, and two evaluation data sets, Test1 and Test2, containing 2755 and 5509 dialogues respectively were released to the participants. A total of 311 teams made submissions to this task. The final leader-board was evaluated on Test2 data set, and the highest ranked submission achieved 79.59 micro-averaged F1 score. 
Our analysis of systems submitted to the task indicate that Bi-directional LSTM was the most common choice of neural architecture used, and most of the systems had the best performance for the Sad emotion class, and the worst for the Happy emotion class} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lordtt13](https://github.com/lordtt13), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
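A minimal loading sketch for the `emo2019` configuration described above, assuming the Hugging Face `datasets` library (a script-based dataset like this one may also need `trust_remote_code=True` on recent `datasets` versions):

```python
from datasets import load_dataset

# "emo2019" is the only configuration; it ships train and test splits.
emo = load_dataset("emo", "emo2019")

# The label feature is a ClassLabel, so integer ids map back to
# "others", "happy", "sad" and "angry".
label_names = emo["train"].features["label"].names
sample = emo["train"][0]
print(sample["text"], "->", label_names[sample["label"]])
```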
emo
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "paperswithcode_id": "emocontext", "pretty_name": "EmoContext", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "others", "1": "happy", "2": "sad", "3": "angry"}}}}], "config_name": "emo2019", "splits": [{"name": "train", "num_bytes": 2433205, "num_examples": 30160}, {"name": "test", "num_bytes": 421555, "num_examples": 5509}], "download_size": 3362556, "dataset_size": 2854760}}
2024-01-18T11:03:13+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us
Dataset Card for "emo" ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 3.37 MB * Size of the generated dataset: 2.85 MB * Total amount of disk used: 6.22 MB ### Dataset Summary In this dataset, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### emo2019 * Size of downloaded dataset files: 3.37 MB * Size of the generated dataset: 2.85 MB * Total amount of disk used: 6.22 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### emo2019 * 'text': a 'string' feature. * 'label': a classification label, with possible values including 'others' (0), 'happy' (1), 'sad' (2), 'angry' (3). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @lordtt13, @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nIn this dataset, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### emo2019\n\n\n* Size of downloaded dataset files: 3.37 MB\n* Size of the generated dataset: 2.85 MB\n* Total amount of disk used: 6.22 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### emo2019\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'others' (0), 'happy' (1), 'sad' (2), 'angry' (3).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @lordtt13, @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n", "### Dataset Summary\n\n\nIn this dataset, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### emo2019\n\n\n* Size of downloaded dataset files: 3.37 MB\n* Size of the generated dataset: 2.85 MB\n* Total amount of disk used: 6.22 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### emo2019\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'others' (0), 'happy' (1), 'sad' (2), 'angry' (3).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @lordtt13, @lhoestq for adding this dataset." ]
[ 91, 65, 10, 11, 6, 50, 17, 51, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 28 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n### Dataset Summary\n\n\nIn this dataset, given a textual dialogue i.e. an utterance along with two previous turns of context, the goal was to infer the underlying emotion of the utterance by choosing from four emotion classes - Happy, Sad, Angry and Others.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### emo2019\n\n\n* Size of downloaded dataset files: 3.37 MB\n* Size of the generated dataset: 2.85 MB\n* Total amount of disk used: 6.22 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### emo2019\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'others' (0), 'happy' (1), 'sad' (2), 'angry' (3).### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information### Contributions\n\n\nThanks to @thomwolf, @lordtt13, @lhoestq for adding this dataset." ]
9ce63038044ae35ec1305d998d1882fcecd70ec8
# Dataset Card for "emotion" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/dair-ai/emotion_dataset](https://github.com/dair-ai/emotion_dataset) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 16.13 MB - **Size of the generated dataset:** 47.62 MB - **Total amount of disk used:** 63.75 MB ### Dataset Summary Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances An example looks as follows. ``` { "text": "im feeling quite sad and sorry for myself but ill snap out of it soon", "label": 0 } ``` ### Data Fields The data fields are: - `text`: a `string` feature. - `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5). ### Data Splits The dataset has 2 configurations: - split: with a total of 20_000 examples split into train, validation and split - unsplit: with a total of 416_809 examples in a single train split | name | train | validation | test | |---------|-------:|-----------:|-----:| | split | 16000 | 2000 | 2000 | | unsplit | 416809 | n/a | n/a | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The dataset should be used for educational and research purposes only. ### Citation Information If you use this dataset, please cite: ``` @inproceedings{saravia-etal-2018-carer, title = "{CARER}: Contextualized Affect Representations for Emotion Recognition", author = "Saravia, Elvis and Liu, Hsien-Chi Toby and Huang, Yen-Hao and Wu, Junlin and Chen, Yi-Shin", booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", month = oct # "-" # nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D18-1404", doi = "10.18653/v1/D18-1404", pages = "3687--3697", abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.", } ``` ### Contributions Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
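A minimal loading sketch for the two configurations described in the Data Splits section, assuming the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# "split" yields train/validation/test; "unsplit" yields a single train split.
emotion = load_dataset("dair-ai/emotion", "split")

# Integer labels map back to sadness, joy, love, anger, fear and surprise.
label_names = emotion["train"].features["label"].names
example = emotion["train"][0]
print(example["text"], "->", label_names[example["label"]])
```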
dair-ai/emotion
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other", "emotion-classification", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "paperswithcode_id": "emotion", "pretty_name": "Emotion", "tags": ["emotion-classification"], "dataset_info": [{"config_name": "split", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "sadness", "1": "joy", "2": "love", "3": "anger", "4": "fear", "5": "surprise"}}}}], "splits": [{"name": "train", "num_bytes": 1741597, "num_examples": 16000}, {"name": "validation", "num_bytes": 214703, "num_examples": 2000}, {"name": "test", "num_bytes": 217181, "num_examples": 2000}], "download_size": 740883, "dataset_size": 2173481}, {"config_name": "unsplit", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "sadness", "1": "joy", "2": "love", "3": "anger", "4": "fear", "5": "surprise"}}}}], "splits": [{"name": "train", "num_bytes": 45445685, "num_examples": 416809}], "download_size": 15388281, "dataset_size": 45445685}], "train-eval-index": [{"config": "default", "task": "text-classification", "task_id": "multi_class_classification", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"text": "text", "label": "target"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 macro", "args": {"average": "macro"}}, {"type": "f1", "name": "F1 micro", "args": {"average": "micro"}}, {"type": "f1", "name": "F1 weighted", "args": {"average": "weighted"}}, {"type": "precision", "name": "Precision macro", "args": {"average": "macro"}}, {"type": "precision", "name": "Precision micro", "args": {"average": "micro"}}, {"type": "precision", "name": "Precision weighted", "args": {"average": "weighted"}}, {"type": "recall", "name": "Recall macro", "args": {"average": "macro"}}, {"type": "recall", "name": "Recall micro", "args": {"average": "micro"}}, {"type": "recall", "name": "Recall weighted", "args": {"average": "weighted"}}]}]}
2023-04-20T07:08:15+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #emotion-classification #region-us
Dataset Card for "emotion" ========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 16.13 MB * Size of the generated dataset: 47.62 MB * Total amount of disk used: 63.75 MB ### Dataset Summary Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances An example looks as follows. ### Data Fields The data fields are: * 'text': a 'string' feature. * 'label': a classification label, with possible values including 'sadness' (0), 'joy' (1), 'love' (2), 'anger' (3), 'fear' (4), 'surprise' (5). ### Data Splits The dataset has 2 configurations: * split: with a total of 20\_000 examples split into train, validation and split * unsplit: with a total of 416\_809 examples in a single train split Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The dataset should be used for educational and research purposes only. If you use this dataset, please cite: ### Contributions Thanks to @lhoestq, @thomwolf, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nEmotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example looks as follows.", "### Data Fields\n\n\nThe data fields are:\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'sadness' (0), 'joy' (1), 'love' (2), 'anger' (3), 'fear' (4), 'surprise' (5).", "### Data Splits\n\n\nThe dataset has 2 configurations:\n\n\n* split: with a total of 20\\_000 examples split into train, validation and split\n* unsplit: with a total of 416\\_809 examples in a single train split\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe dataset should be used for educational and research purposes only.\n\n\nIf you use this dataset, please cite:", "### Contributions\n\n\nThanks to @lhoestq, @thomwolf, @lewtun for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #emotion-classification #region-us \n", "### Dataset Summary\n\n\nEmotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example looks as follows.", "### Data Fields\n\n\nThe data fields are:\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'sadness' (0), 'joy' (1), 'love' (2), 'anger' (3), 'fear' (4), 'surprise' (5).", "### Data Splits\n\n\nThe dataset has 2 configurations:\n\n\n* split: with a total of 20\\_000 examples split into train, validation and split\n* unsplit: with a total of 416\\_809 examples in a single train split\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe dataset should be used for educational and research purposes only.\n\n\nIf you use this dataset, please cite:", "### Contributions\n\n\nThanks to @lhoestq, @thomwolf, @lewtun for adding this dataset." ]
[ 96, 47, 10, 11, 13, 68, 62, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 30, 26 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #emotion-classification #region-us \n### Dataset Summary\n\n\nEmotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example looks as follows.### Data Fields\n\n\nThe data fields are:\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'sadness' (0), 'joy' (1), 'love' (2), 'anger' (3), 'fear' (4), 'surprise' (5).### Data Splits\n\n\nThe dataset has 2 configurations:\n\n\n* split: with a total of 20\\_000 examples split into train, validation and split\n* unsplit: with a total of 416\\_809 examples in a single train split\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information\n\n\nThe dataset should be used for educational and research purposes only.\n\n\nIf you use this dataset, please cite:### Contributions\n\n\nThanks to @lhoestq, @thomwolf, @lewtun for adding this dataset." ]
0ded8ff72cc68cbb7bb5c01b0a9157982b73ddaf
# Dataset Card for Emotional Tone in Arabic

## Table of Contents
- [Dataset Card for Emotional Tone in Arabic](#dataset-card-for-emotional-tone-in-arabic)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Repository:** [Repository](https://github.com/AmrMehasseb/Emotional-Tone)
- **Paper:** [Emotional Tone Detection in Arabic Tweets](https://www.researchgate.net/publication/328164296_Emotional_Tone_Detection_in_Arabic_Tweets_18th_International_Conference_CICLing_2017_Budapest_Hungary_April_17-23_2017_Revised_Selected_Papers_Part_II)
- **Point of Contact:** [Amr Al-Khatib](https://github.com/AmrMehasseb)

### Dataset Summary

A dataset of 10,065 Arabic tweets for emotion detection in Arabic text.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is based on Arabic.

## Dataset Structure

### Data Instances

example:
```
>>> {'label': 0, 'tweet': 'الاوليمبياد الجايه هكون لسه ف الكليه ..'}
```

### Data Fields

- "tweet": plain text tweet in Arabic
- "label": emotion class label

The dataset distribution and balance for each class looks like the following:

|label |Label description | Count |
|------|------------------|-------|
|0     |none              | 1550  |
|1     |anger             | 1444  |
|2     |joy               | 1281  |
|3     |sadness           | 1256  |
|4     |love              | 1220  |
|5     |sympathy          | 1062  |
|6     |surprise          | 1045  |
|7     |fear              | 1207  |

### Data Splits

The dataset is not split.

|          |   train |
|----------|--------:|
| no split |  10,065 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inbook{inbook, author = {Al-Khatib, Amr and El-Beltagy, Samhaa}, year = {2018}, month = {01}, pages = {105-114}, title = {Emotional Tone Detection in Arabic Tweets: 18th International Conference, CICLing 2017, Budapest, Hungary, April 17–23, 2017, Revised Selected Papers, Part II}, isbn = {978-3-319-77115-1}, doi = {10.1007/978-3-319-77116-8_8} } ``` ### Contributions Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset.
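The snippet below is a minimal, illustrative sketch of loading this dataset with the Hugging Face `datasets` library and re-deriving the class distribution shown above. It assumes the dataset id `emotone_ar` and the single `train` split listed in this record; depending on your `datasets` version, loading a script-based dataset like this one may also require `trust_remote_code=True`.

```python
# Illustrative sketch only - assumes the "emotone_ar" id and "train" split above.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("emotone_ar", split="train")

# "label" is a ClassLabel feature, so integer ids map back to the emotion names
# (none, anger, joy, sadness, love, sympathy, surprise, fear).
label_names = ds.features["label"].names

# Re-derive the per-class counts shown in the distribution table of the card.
counts = Counter(label_names[i] for i in ds["label"])
print(counts)

# Inspect one raw example: an Arabic tweet and its emotion label.
example = ds[0]
print(example["tweet"], "->", label_names[example["label"]])
```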
emotone_ar
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ar", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ar"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Emotional Tone in Arabic", "dataset_info": {"features": [{"name": "tweet", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "none", "1": "anger", "2": "joy", "3": "sadness", "4": "love", "5": "sympathy", "6": "surprise", "7": "fear"}}}}], "splits": [{"name": "train", "num_bytes": 1541746, "num_examples": 10065}], "download_size": 1563138, "dataset_size": 1541746}}
2024-01-18T11:03:14+00:00
[]
[ "ar" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Arabic #license-unknown #region-us
Dataset Card for Emotional Tone in Arabic ========================================= Table of Contents ----------------- * Dataset Card for Emotional Tone in Arabic + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages + Dataset Structure - Data Instances - Data Fields - Data Splits + |split|num examples| + Dataset Creation - Curation Rationale - Source Data * Initial Data Collection and Normalization * Who are the source language producers? - Annotations * Annotation process * Who are the annotators? - Personal and Sensitive Information + Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations + Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions Dataset Description ------------------- * Repository: Repository * Paper: Emotional Tone Detection in Arabic Tweets * Point of Contact: Amr Al-Khatib ### Dataset Summary Dataset of 10065 tweets in Arabic for Emotion detection in Arabic text ### Supported Tasks and Leaderboards ### Languages The dataset is based on Arabic. Dataset Structure ----------------- ### Data Instances example: ### Data Fields * "tweet": plain text tweet in Arabic * "label": emotion class label the dataset distribution and balance for each class looks like the following |label||Label description | Count | |---------|---------| ------- | |0 |none | 1550 | |1 |anger | 1444 | |2 |joy | 1281 | |3 |sadness | 1256 | |4 |love | 1220 | |5 |sympathy | 1062 | |6 |surprise | 1045 | |7 |fear | 1207 | ### Data Splits The dataset is not split. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @abdulelahsm for adding this dataset.
[ "### Dataset Summary\n\n\nDataset of 10065 tweets in Arabic for Emotion detection in Arabic text", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset is based on Arabic.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nexample:", "### Data Fields\n\n\n* \"tweet\": plain text tweet in Arabic\n* \"label\": emotion class label\n\n\nthe dataset distribution and balance for each class looks like the following\n\n\n|label||Label description | Count |\n|---------|---------| ------- |\n|0 |none | 1550 |\n|1 |anger | 1444 |\n|2 |joy | 1281 |\n|3 |sadness | 1256 |\n|4 |love | 1220 |\n|5 |sympathy | 1062 |\n|6 |surprise | 1045 |\n|7 |fear | 1207 |", "### Data Splits\n\n\nThe dataset is not split.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @abdulelahsm for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Arabic #license-unknown #region-us \n", "### Dataset Summary\n\n\nDataset of 10065 tweets in Arabic for Emotion detection in Arabic text", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe dataset is based on Arabic.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nexample:", "### Data Fields\n\n\n* \"tweet\": plain text tweet in Arabic\n* \"label\": emotion class label\n\n\nthe dataset distribution and balance for each class looks like the following\n\n\n|label||Label description | Count |\n|---------|---------| ------- |\n|0 |none | 1550 |\n|1 |anger | 1444 |\n|2 |joy | 1281 |\n|3 |sadness | 1256 |\n|4 |love | 1220 |\n|5 |sympathy | 1062 |\n|6 |surprise | 1045 |\n|7 |fear | 1207 |", "### Data Splits\n\n\nThe dataset is not split.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @abdulelahsm for adding this dataset." ]
[ 86, 23, 10, 19, 8, 163, 18, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 20 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Arabic #license-unknown #region-us \n### Dataset Summary\n\n\nDataset of 10065 tweets in Arabic for Emotion detection in Arabic text### Supported Tasks and Leaderboards### Languages\n\n\nThe dataset is based on Arabic.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nexample:### Data Fields\n\n\n* \"tweet\": plain text tweet in Arabic\n* \"label\": emotion class label\n\n\nthe dataset distribution and balance for each class looks like the following\n\n\n|label||Label description | Count |\n|---------|---------| ------- |\n|0 |none | 1550 |\n|1 |anger | 1444 |\n|2 |joy | 1281 |\n|3 |sadness | 1256 |\n|4 |love | 1220 |\n|5 |sympathy | 1062 |\n|6 |surprise | 1045 |\n|7 |fear | 1207 |### Data Splits\n\n\nThe dataset is not split.\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information### Contributions\n\n\nThanks to @abdulelahsm for adding this dataset." ]
9e2bcd65bc66b458e226f0eda6808bf7d831b3e0
# Dataset Card for "empathetic_dialogues" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/facebookresearch/EmpatheticDialogues](https://github.com/facebookresearch/EmpatheticDialogues) - **Repository:** https://github.com/facebookresearch/EmpatheticDialogues - **Paper:** [Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset](https://arxiv.org/abs/1811.00207) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 28.02 MB - **Size of the generated dataset:** 25.13 MB - **Total amount of disk used:** 53.15 MB ### Dataset Summary PyTorch original implementation of Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 28.02 MB - **Size of the generated dataset:** 25.13 MB - **Total amount of disk used:** 53.15 MB An example of 'train' looks as follows. ``` { "context": "sentimental", "conv_id": "hit:0_conv:1", "prompt": "I remember going to the fireworks with my best friend. There was a lot of people_comma_ but it only felt like us in the world.", "selfeval": "5|5|5_2|2|5", "speaker_idx": 1, "tags": "", "utterance": "I remember going to see the fireworks with my best friend. It was the first time we ever spent time alone together. Although there was a lot of people_comma_ we felt like the only people in the world.", "utterance_idx": 1 } ``` ### Data Fields The data fields are the same among all splits. #### default - `conv_id`: a `string` feature. - `utterance_idx`: a `int32` feature. - `context`: a `string` feature. - `prompt`: a `string` feature. - `speaker_idx`: a `int32` feature. - `utterance`: a `string` feature. - `selfeval`: a `string` feature. - `tags`: a `string` feature. 
### Data Splits | name |train|validation|test | |-------|----:|---------:|----:| |default|76673| 12030|10943| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information Creative Commons [Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/). ### Citation Information ``` @inproceedings{rashkin-etal-2019-towards, title = "Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset", author = "Rashkin, Hannah and Smith, Eric Michael and Li, Margaret and Boureau, Y-Lan", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1534", doi = "10.18653/v1/P19-1534", pages = "5370--5381", } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
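To make the flat per-utterance layout described above concrete, here is a small, illustrative sketch that reassembles one conversation by grouping rows on `conv_id` and ordering them by `utterance_idx`. It assumes the dataset id `empathetic_dialogues` and the field names listed in this record; depending on your `datasets` version, loading may also require `trust_remote_code=True`.

```python
# Illustrative sketch only - assumes the "empathetic_dialogues" id and fields above.
from datasets import load_dataset

ds = load_dataset("empathetic_dialogues", split="train")

# Each row is a single utterance; rows sharing a conv_id form one dialogue,
# and utterance_idx gives their order within that dialogue.
first_conv = ds[0]["conv_id"]
turns = [row for row in ds if row["conv_id"] == first_conv]
turns.sort(key=lambda row: row["utterance_idx"])

# "context" is the emotion grounding the dialogue, "prompt" the situation description;
# the corpus encodes commas as "_comma_" (see the example above).
print("emotion:", turns[0]["context"])
print("situation:", turns[0]["prompt"].replace("_comma_", ","))
for row in turns:
    print(f"speaker {row['speaker_idx']}: {row['utterance'].replace('_comma_', ',')}")
```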
empathetic_dialogues
[ "task_categories:conversational", "task_categories:question-answering", "task_ids:dialogue-generation", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "arxiv:1811.00207", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["conversational", "question-answering"], "task_ids": ["dialogue-generation", "open-domain-qa"], "paperswithcode_id": "empatheticdialogues", "pretty_name": "EmpatheticDialogues", "dataset_info": {"features": [{"name": "conv_id", "dtype": "string"}, {"name": "utterance_idx", "dtype": "int32"}, {"name": "context", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "speaker_idx", "dtype": "int32"}, {"name": "utterance", "dtype": "string"}, {"name": "selfeval", "dtype": "string"}, {"name": "tags", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3011332, "num_examples": 10943}, {"name": "train", "num_bytes": 19040509, "num_examples": 76673}, {"name": "validation", "num_bytes": 3077481, "num_examples": 12030}], "download_size": 28022709, "dataset_size": 25129322}}
2024-01-18T11:03:15+00:00
[ "1811.00207" ]
[ "en" ]
TAGS #task_categories-conversational #task_categories-question-answering #task_ids-dialogue-generation #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-4.0 #arxiv-1811.00207 #region-us
Dataset Card for "empathetic\_dialogues" ======================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset * Point of Contact: * Size of downloaded dataset files: 28.02 MB * Size of the generated dataset: 25.13 MB * Total amount of disk used: 53.15 MB ### Dataset Summary PyTorch original implementation of Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 28.02 MB * Size of the generated dataset: 25.13 MB * Total amount of disk used: 53.15 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'conv\_id': a 'string' feature. * 'utterance\_idx': a 'int32' feature. * 'context': a 'string' feature. * 'prompt': a 'string' feature. * 'speaker\_idx': a 'int32' feature. * 'utterance': a 'string' feature. * 'selfeval': a 'string' feature. * 'tags': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Creative Commons Attribution-NonCommercial 4.0 International. ### Contributions Thanks to @thomwolf, @patrickvonplaten, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nPyTorch original implementation of Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 28.02 MB\n* Size of the generated dataset: 25.13 MB\n* Total amount of disk used: 53.15 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'conv\\_id': a 'string' feature.\n* 'utterance\\_idx': a 'int32' feature.\n* 'context': a 'string' feature.\n* 'prompt': a 'string' feature.\n* 'speaker\\_idx': a 'int32' feature.\n* 'utterance': a 'string' feature.\n* 'selfeval': a 'string' feature.\n* 'tags': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCreative Commons Attribution-NonCommercial 4.0 International.", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @lewtun for adding this dataset." ]
[ "TAGS\n#task_categories-conversational #task_categories-question-answering #task_ids-dialogue-generation #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-4.0 #arxiv-1811.00207 #region-us \n", "### Dataset Summary\n\n\nPyTorch original implementation of Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 28.02 MB\n* Size of the generated dataset: 25.13 MB\n* Total amount of disk used: 53.15 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'conv\\_id': a 'string' feature.\n* 'utterance\\_idx': a 'int32' feature.\n* 'context': a 'string' feature.\n* 'prompt': a 'string' feature.\n* 'speaker\\_idx': a 'int32' feature.\n* 'utterance': a 'string' feature.\n* 'selfeval': a 'string' feature.\n* 'tags': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCreative Commons Attribution-NonCommercial 4.0 International.", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @lewtun for adding this dataset." ]
[ 128, 36, 10, 11, 6, 50, 17, 116, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 17, 28 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-question-answering #task_ids-dialogue-generation #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-4.0 #arxiv-1811.00207 #region-us \n### Dataset Summary\n\n\nPyTorch original implementation of Towards Empathetic Open-domain Conversation Models: a New Benchmark and Dataset### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 28.02 MB\n* Size of the generated dataset: 25.13 MB\n* Total amount of disk used: 53.15 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'conv\\_id': a 'string' feature.\n* 'utterance\\_idx': a 'int32' feature.\n* 'context': a 'string' feature.\n* 'prompt': a 'string' feature.\n* 'speaker\\_idx': a 'int32' feature.\n* 'utterance': a 'string' feature.\n* 'selfeval': a 'string' feature.\n* 'tags': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information\n\n\nCreative Commons Attribution-NonCommercial 4.0 International." ]
d2b027f9f569926ab7c146a7ef81016607505392
# Dataset Card for WebNLG

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [WebNLG challenge website](https://webnlg-challenge.loria.fr/)
- **Repository:** [Enriched WebNLG Github repository](https://github.com/ThiagoCF05/webnlg)
- **Paper:** [Enriching the WebNLG corpus](https://www.aclweb.org/anthology/W18-6521/)

### Dataset Summary

The WebNLG challenge consists in mapping data to text. The training data consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b). It is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, like other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture, such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.

### Supported Tasks and Leaderboards

The dataset supports an `other-rdf-to-text` task, which requires a model to take a set of RDF (Resource Description Framework) triples from a database (DBpedia) of the form (subject, property, object) as input and write out a natural language sentence expressing the information contained in the triples.

### Languages

The dataset is presented in two versions: English (config `en`) and German (config `de`).

## Dataset Structure

### Data Instances

A typical example contains the original RDF triples in the set, a modified version which was presented to crowd workers, and a set of possible verbalizations for this set of triples:

```
{'category': 'Politician',
 'eid': 'Id10',
 'lex': {'comment': ['good', 'good', 'good'],
         'lid': ['Id1', 'Id2', 'Id3'],
         'text': ['World War II had Chiang Kai-shek as a commander and United States Army soldier Abner W. Sibal.',
                  'Abner W. Sibal served in the United States Army during the Second World War and during that war Chiang Kai-shek was one of the commanders.',
                  'Abner W. Sibal, served in the United States Army and fought in World War II, one of the commanders of which, was Chiang Kai-shek.']},
 'modified_triple_sets': {'mtriple_set': [['Abner_W._Sibal | battle | World_War_II',
                                           'World_War_II | commander | Chiang_Kai-shek',
                                           'Abner_W._Sibal | militaryBranch | United_States_Army']]},
 'original_triple_sets': {'otriple_set': [['Abner_W._Sibal | battles | World_War_II',
                                           'World_War_II | commander | Chiang_Kai-shek',
                                           'Abner_W._Sibal | branch | United_States_Army'],
                                          ['Abner_W._Sibal | militaryBranch | United_States_Army',
                                           'Abner_W._Sibal | battles | World_War_II',
                                           'World_War_II | commander | Chiang_Kai-shek']]},
 'shape': '(X (X) (X (X)))',
 'shape_type': 'mixed',
 'size': 3}
```

### Data Fields

The following fields can be found in the instances:

- `category`: the category of the DBpedia entities present in the RDF triples.
- `eid`: an example ID, only unique per split per category.
- `size`: number of RDF triples in the set.
- `shape`: (for v3 only) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. `shape` is a string representation of the tree with nested parentheses where X is a node (see [Newick tree format](https://en.wikipedia.org/wiki/Newick_format))
- `shape_type`: (for v3 only) is a type of the tree shape, which can be: `chain` (the object of one triple is the subject of the other); `sibling` (triples with a shared subject); `mixed` (both chain and sibling types present).
- `2017_test_category`: (for `webnlg_challenge_2017`) tells whether the set of RDF triples was present in the training set or not.
- `lex`: the lexicalizations, with:
  - `text`: the text to be predicted.
  - `lid`: a lexicalization ID, unique per example.
  - `comment`: the lexicalizations were rated by crowd workers as either `good` or `bad`

### Data Splits

The `en` version has `train`, `test` and `dev` splits; the `de` version, only `train` and `dev`.

## Dataset Creation

### Curation Rationale

Natural Language Generation (NLG) is the process of automatically converting non-linguistic data into a linguistic output format (Reiter and Dale, 2000; Gatt and Krahmer, 2018). Recently, the field has seen an increase in the number of available focused data resources, such as the E2E (Novikova et al., 2017), ROTOWIRE (Wiseman et al., 2017) and WebNLG (Gardent et al., 2017a,b) corpora. Although these recent releases are highly valuable resources for the NLG community in general, all of them were designed to work with end-to-end NLG models. Hence, they consist of a collection of parallel raw representations and their corresponding textual realizations. No intermediate representations are available that researchers can straightforwardly use to develop or evaluate popular tasks in NLG pipelines (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation, among others. Moreover, these new corpora, like many other resources in Computational Linguistics more generally, are only available in English, limiting the development of NLG applications to other languages.

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset uses the `cc-by-nc-sa-4.0` license. The source DBpedia project uses the `cc-by-sa-3.0` and `gfdl-1.1` licenses. ### Citation Information - If you use the Enriched WebNLG corpus, cite: ``` @InProceedings{ferreiraetal2018, author = "Castro Ferreira, Thiago and Moussallem, Diego and Wubben, Sander and Krahmer, Emiel", title = "Enriching the WebNLG corpus", booktitle = "Proceedings of the 11th International Conference on Natural Language Generation", year = "2018", series = {INLG'18}, publisher = "Association for Computational Linguistics", address = "Tilburg, The Netherlands", } @inproceedings{web_nlg, author = {Claire Gardent and Anastasia Shimorina and Shashi Narayan and Laura Perez{-}Beltrachini}, editor = {Regina Barzilay and Min{-}Yen Kan}, title = {Creating Training Corpora for {NLG} Micro-Planners}, booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, {ACL} 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers}, pages = {179--188}, publisher = {Association for Computational Linguistics}, year = {2017}, url = {https://doi.org/10.18653/v1/P17-1017}, doi = {10.18653/v1/P17-1017} } ``` ### Contributions Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.
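As an illustration of how the nested `lex` and `modified_triple_sets` fields fit together, the sketch below flattens entries of the English config into (RDF triples, reference text) pairs, keeping only lexicalizations rated `good`. It assumes the dataset id `enriched_web_nlg` and the `en` config shown in this record; depending on your `datasets` version, loading may also require `trust_remote_code=True`.

```python
# Illustrative sketch only - assumes the "enriched_web_nlg" id and "en" config above.
from datasets import load_dataset

ds = load_dataset("enriched_web_nlg", "en", split="train")

# Flatten entries into (RDF triples, reference text) pairs, keeping only
# lexicalizations that crowd workers rated "good".
pairs = []
for entry in ds.select(range(100)):
    triples = entry["modified_triple_sets"]["mtriple_set"][0]  # "subject | property | object" strings
    for comment, text in zip(entry["lex"]["comment"], entry["lex"]["text"]):
        if comment == "good":
            pairs.append((triples, text))

print(len(pairs))
print(pairs[0])
```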
enriched_web_nlg
[ "task_categories:tabular-to-text", "task_ids:rdf-to-text", "annotations_creators:found", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-web-nlg", "language:de", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["crowdsourced"], "language": ["de", "en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|other-web-nlg"], "task_categories": ["tabular-to-text"], "task_ids": ["rdf-to-text"], "pretty_name": "Enriched WebNLG", "config_names": ["de", "en"], "dataset_info": [{"config_name": "en", "features": [{"name": "category", "dtype": "string"}, {"name": "size", "dtype": "int32"}, {"name": "eid", "dtype": "string"}, {"name": "original_triple_sets", "sequence": [{"name": "otriple_set", "sequence": "string"}]}, {"name": "modified_triple_sets", "sequence": [{"name": "mtriple_set", "sequence": "string"}]}, {"name": "shape", "dtype": "string"}, {"name": "shape_type", "dtype": "string"}, {"name": "lex", "sequence": [{"name": "comment", "dtype": "string"}, {"name": "lid", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "template", "dtype": "string"}, {"name": "sorted_triple_sets", "sequence": "string"}, {"name": "lexicalization", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 14665155, "num_examples": 6940}, {"name": "dev", "num_bytes": 1843787, "num_examples": 872}, {"name": "test", "num_bytes": 3931381, "num_examples": 1862}], "download_size": 44284508, "dataset_size": 20440323}, {"config_name": "de", "features": [{"name": "category", "dtype": "string"}, {"name": "size", "dtype": "int32"}, {"name": "eid", "dtype": "string"}, {"name": "original_triple_sets", "sequence": [{"name": "otriple_set", "sequence": "string"}]}, {"name": "modified_triple_sets", "sequence": [{"name": "mtriple_set", "sequence": "string"}]}, {"name": "shape", "dtype": "string"}, {"name": "shape_type", "dtype": "string"}, {"name": "lex", "sequence": [{"name": "comment", "dtype": "string"}, {"name": "lid", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "template", "dtype": "string"}, {"name": "sorted_triple_sets", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 9748193, "num_examples": 6940}, {"name": "dev", "num_bytes": 1238609, "num_examples": 872}], "download_size": 44284508, "dataset_size": 10986802}]}
2024-01-18T11:03:16+00:00
[]
[ "de", "en" ]
TAGS #task_categories-tabular-to-text #task_ids-rdf-to-text #annotations_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-web-nlg #language-German #language-English #license-cc-by-sa-4.0 #region-us
# Dataset Card for WebNLG ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: WebNLG challenge website - Repository: Enriched WebNLG Github repository - Paper: Enriching the WebNLG corpus ### Dataset Summary The WebNLG challenge consists in mapping data to text. The training data consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b). It is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, as other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture, such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation. ### Supported Tasks and Leaderboards The dataset supports a 'other-rdf-to-text' task which requires a model takes a set of RDF (Resource Description Format) triples from a database (DBpedia) of the form (subject, property, object) as input and write out a natural language sentence expressing the information contained in the triples. ### Languages The dataset is presented in two versions: English (config 'en') and German (config 'de') ## Dataset Structure ### Data Instances A typical example contains the original RDF triples in the set, a modified version which presented to crowd workers, and a set of possible verbalizations for this set of triples: ### Data Fields The following fields can be found in the instances: - 'category': the category of the DBpedia entites present in the RDF triples. - 'eid': an example ID, only unique per split per category. - 'size': number of RDF triples in the set. - 'shape': (for v3 only) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. 'shape' is a string representation of the tree with nested parentheses where X is a node ( see Newick tree format) - 'shape_type': (for v3 only) is a type of the tree shape, which can be: 'chain' (the object of one triple is the subject of the other); 'sibling' (triples with a shared subject); 'mixed' (both chain and sibling types present). - '2017_test_category': (for 'webnlg_challenge_2017') tells whether the set of RDF triples was present in the training set or not. - 'lex': the lexicalizations, with: - 'text': the text to be predicted. - 'lid': a lexicalizayion ID, unique per example. - 'comment': the lexicalizations were rated by crowd workers are either 'good' or 'bad' ### Data Splits The 'en' version has 'train', 'test' and 'dev' splits; the 'de' version, only 'train' and 'dev'. ## Dataset Creation ### Curation Rationale Natural Language Generation (NLG) is the process of automatically converting non-linguistic data into a linguistic output format (Reiter andDale, 2000; Gatt and Krahmer, 2018). 
Recently, the field has seen an increase in the number of available focused data resources as E2E (Novikova et al., 2017), ROTOWIRE(Wise-man et al., 2017) and WebNLG (Gardent et al.,2017a,b) corpora. Although theses recent releases are highly valuable resources for the NLG community in general,nall of them were designed to work with end-to-end NLG models. Hence, they consist of a collection of parallel raw representations and their corresponding textual realizations. No intermediate representations are available so researchersncan straight-forwardly use them to develop or evaluate popular tasks in NLG pipelines (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation, Referring Expression Generation, among others. Moreover, these new corpora, like many other resources in Computational Linguistics more in general, are only available in English, limiting the development of NLG-applications to other languages. ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information The dataset uses the 'cc-by-nc-sa-4.0' license. The source DBpedia project uses the 'cc-by-sa-3.0' and 'gfdl-1.1' licenses. - If you use the Enriched WebNLG corpus, cite: ### Contributions Thanks to @TevenLeScao for adding this dataset.
[ "# Dataset Card for WebNLG", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: WebNLG challenge website\n- Repository: Enriched WebNLG Github repository\n- Paper: Enriching the WebNLG corpus", "### Dataset Summary\n\nThe WebNLG challenge consists in mapping data to text. The training data consists of Data/Text pairs where the data is a\nset of triples extracted from DBpedia and the text is a verbalisation of these triples. For instance, given the 3\nDBpedia triples shown in (a), the aim is to generate a text such as (b). It is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, as other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture, such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.", "### Supported Tasks and Leaderboards\n\nThe dataset supports a 'other-rdf-to-text' task which requires a model takes a set of RDF (Resource Description\nFormat) triples from a database (DBpedia) of the form (subject, property, object) as input and write out a natural\nlanguage sentence expressing the information contained in the triples.", "### Languages\n\nThe dataset is presented in two versions: English (config 'en') and German (config 'de')", "## Dataset Structure", "### Data Instances\n\nA typical example contains the original RDF triples in the set, a modified version which presented to crowd workers, and\na set of possible verbalizations for this set of triples:", "### Data Fields\n\nThe following fields can be found in the instances:\n\n- 'category': the category of the DBpedia entites present in the RDF triples.\n- 'eid': an example ID, only unique per split per category.\n- 'size': number of RDF triples in the set.\n- 'shape': (for v3 only) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. 
'shape'\n is a string representation of the tree with nested parentheses where X is a node (\n see Newick tree format)\n- 'shape_type': (for v3 only) is a type of the tree shape, which can be: 'chain' (the object of one triple is the\n subject of the other); 'sibling' (triples with a shared subject); 'mixed' (both chain and sibling types present).\n- '2017_test_category': (for 'webnlg_challenge_2017') tells whether the set of RDF triples was present in the training\n set or not.\n- 'lex': the lexicalizations, with:\n - 'text': the text to be predicted.\n - 'lid': a lexicalizayion ID, unique per example.\n - 'comment': the lexicalizations were rated by crowd workers are either 'good' or 'bad'", "### Data Splits\n\nThe 'en' version has 'train', 'test' and 'dev' splits; the 'de' version, only 'train' and 'dev'.", "## Dataset Creation", "### Curation Rationale\n\nNatural Language Generation (NLG) is the process of automatically converting non-linguistic data into a linguistic output format (Reiter andDale, 2000; Gatt and Krahmer, 2018). Recently, the field has seen an increase in the number of available focused data resources as E2E (Novikova et al., 2017), ROTOWIRE(Wise-man et al., 2017) and WebNLG (Gardent et al.,2017a,b) corpora. Although theses recent releases are highly valuable resources for the NLG community in general,nall of them were designed to work with end-to-end NLG models. Hence, they consist of a collection of parallel raw representations and their corresponding textual realizations. No intermediate representations are available so researchersncan straight-forwardly use them to develop or evaluate popular tasks in NLG pipelines (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation, Referring Expression Generation, among others. Moreover, these new corpora, like many other resources in Computational Linguistics more in general, are only available in English, limiting the development of NLG-applications to other languages.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe dataset uses the 'cc-by-nc-sa-4.0' license. The source DBpedia project uses the 'cc-by-sa-3.0' and 'gfdl-1.1'\nlicenses.\n\n\n\n- If you use the Enriched WebNLG corpus, cite:", "### Contributions\n\nThanks to @TevenLeScao for adding this dataset." ]
[ "TAGS\n#task_categories-tabular-to-text #task_ids-rdf-to-text #annotations_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-web-nlg #language-German #language-English #license-cc-by-sa-4.0 #region-us \n", "# Dataset Card for WebNLG", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: WebNLG challenge website\n- Repository: Enriched WebNLG Github repository\n- Paper: Enriching the WebNLG corpus", "### Dataset Summary\n\nThe WebNLG challenge consists in mapping data to text. The training data consists of Data/Text pairs where the data is a\nset of triples extracted from DBpedia and the text is a verbalisation of these triples. For instance, given the 3\nDBpedia triples shown in (a), the aim is to generate a text such as (b). It is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, as other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture, such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.", "### Supported Tasks and Leaderboards\n\nThe dataset supports a 'other-rdf-to-text' task which requires a model takes a set of RDF (Resource Description\nFormat) triples from a database (DBpedia) of the form (subject, property, object) as input and write out a natural\nlanguage sentence expressing the information contained in the triples.", "### Languages\n\nThe dataset is presented in two versions: English (config 'en') and German (config 'de')", "## Dataset Structure", "### Data Instances\n\nA typical example contains the original RDF triples in the set, a modified version which presented to crowd workers, and\na set of possible verbalizations for this set of triples:", "### Data Fields\n\nThe following fields can be found in the instances:\n\n- 'category': the category of the DBpedia entites present in the RDF triples.\n- 'eid': an example ID, only unique per split per category.\n- 'size': number of RDF triples in the set.\n- 'shape': (for v3 only) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. 
'shape'\n is a string representation of the tree with nested parentheses where X is a node (\n see Newick tree format)\n- 'shape_type': (for v3 only) is a type of the tree shape, which can be: 'chain' (the object of one triple is the\n subject of the other); 'sibling' (triples with a shared subject); 'mixed' (both chain and sibling types present).\n- '2017_test_category': (for 'webnlg_challenge_2017') tells whether the set of RDF triples was present in the training\n set or not.\n- 'lex': the lexicalizations, with:\n - 'text': the text to be predicted.\n - 'lid': a lexicalizayion ID, unique per example.\n - 'comment': the lexicalizations were rated by crowd workers are either 'good' or 'bad'", "### Data Splits\n\nThe 'en' version has 'train', 'test' and 'dev' splits; the 'de' version, only 'train' and 'dev'.", "## Dataset Creation", "### Curation Rationale\n\nNatural Language Generation (NLG) is the process of automatically converting non-linguistic data into a linguistic output format (Reiter andDale, 2000; Gatt and Krahmer, 2018). Recently, the field has seen an increase in the number of available focused data resources as E2E (Novikova et al., 2017), ROTOWIRE(Wise-man et al., 2017) and WebNLG (Gardent et al.,2017a,b) corpora. Although theses recent releases are highly valuable resources for the NLG community in general,nall of them were designed to work with end-to-end NLG models. Hence, they consist of a collection of parallel raw representations and their corresponding textual realizations. No intermediate representations are available so researchersncan straight-forwardly use them to develop or evaluate popular tasks in NLG pipelines (Reiter and Dale, 2000), such as Discourse Ordering, Lexicalization, Aggregation, Referring Expression Generation, among others. Moreover, these new corpora, like many other resources in Computational Linguistics more in general, are only available in English, limiting the development of NLG-applications to other languages.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThe dataset uses the 'cc-by-nc-sa-4.0' license. The source DBpedia project uses the 'cc-by-sa-3.0' and 'gfdl-1.1'\nlicenses.\n\n\n\n- If you use the Enriched WebNLG corpus, cite:", "### Contributions\n\nThanks to @TevenLeScao for adding this dataset." ]
[ 108, 8, 120, 40, 186, 83, 30, 6, 47, 316, 42, 5, 281, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 68, 19 ]
[ "passage: TAGS\n#task_categories-tabular-to-text #task_ids-rdf-to-text #annotations_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|other-web-nlg #language-German #language-English #license-cc-by-sa-4.0 #region-us \n# Dataset Card for WebNLG## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: WebNLG challenge website\n- Repository: Enriched WebNLG Github repository\n- Paper: Enriching the WebNLG corpus### Dataset Summary\n\nThe WebNLG challenge consists in mapping data to text. The training data consists of Data/Text pairs where the data is a\nset of triples extracted from DBpedia and the text is a verbalisation of these triples. For instance, given the 3\nDBpedia triples shown in (a), the aim is to generate a text such as (b). It is a valuable resource and benchmark for the Natural Language Generation (NLG) community. However, as other NLG benchmarks, it only consists of a collection of parallel raw representations and their corresponding textual realizations. This work aimed to provide intermediate representations of the data for the development and evaluation of popular tasks in the NLG pipeline architecture, such as Discourse Ordering, Lexicalization, Aggregation and Referring Expression Generation.", "passage: ### Supported Tasks and Leaderboards\n\nThe dataset supports a 'other-rdf-to-text' task which requires a model takes a set of RDF (Resource Description\nFormat) triples from a database (DBpedia) of the form (subject, property, object) as input and write out a natural\nlanguage sentence expressing the information contained in the triples.### Languages\n\nThe dataset is presented in two versions: English (config 'en') and German (config 'de')## Dataset Structure### Data Instances\n\nA typical example contains the original RDF triples in the set, a modified version which presented to crowd workers, and\na set of possible verbalizations for this set of triples:### Data Fields\n\nThe following fields can be found in the instances:\n\n- 'category': the category of the DBpedia entites present in the RDF triples.\n- 'eid': an example ID, only unique per split per category.\n- 'size': number of RDF triples in the set.\n- 'shape': (for v3 only) Each set of RDF-triples is a tree, which is characterised by its shape and shape type. 
'shape'\n is a string representation of the tree with nested parentheses where X is a node (\n see Newick tree format)\n- 'shape_type': (for v3 only) is a type of the tree shape, which can be: 'chain' (the object of one triple is the\n subject of the other); 'sibling' (triples with a shared subject); 'mixed' (both chain and sibling types present).\n- '2017_test_category': (for 'webnlg_challenge_2017') tells whether the set of RDF triples was present in the training\n set or not.\n- 'lex': the lexicalizations, with:\n - 'text': the text to be predicted.\n - 'lid': a lexicalizayion ID, unique per example.\n - 'comment': the lexicalizations were rated by crowd workers are either 'good' or 'bad'### Data Splits\n\nThe 'en' version has 'train', 'test' and 'dev' splits; the 'de' version, only 'train' and 'dev'.## Dataset Creation" ]
7c33dc45cf531b7e99460545fc79964394b30ba7
# Dataset Card for "eraser_multi_rc" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://cogcomp.org/multirc/ - **Repository:** https://github.com/CogComp/multirc - **Paper:** [Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences](https://cogcomp.seas.upenn.edu/page/publication_view/833) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.67 MB - **Size of the generated dataset:** 63.65 MB - **Total amount of disk used:** 65.32 MB ### Dataset Summary MultiRC (Multi-Sentence Reading Comprehension) is a dataset of short paragraphs and multi-sentence questions that can be answered from the content of the paragraph. We have designed the dataset with three key challenges in mind: - The number of correct answer-options for each question is not pre-specified. This removes the over-reliance of current approaches on answer-options and forces them to decide on the correctness of each candidate answer independently of others. In other words, unlike previous work, the task here is not to simply identify the best answer-option, but to evaluate the correctness of each answer-option individually. - The correct answer(s) is not required to be a span in the text. - The paragraphs in our dataset have diverse provenance by being extracted from 7 different domains such as news, fiction, historical text etc., and hence are expected to be more diverse in their contents as compared to single-domain datasets. The goal of this dataset is to encourage the research community to explore approaches that can do more than sophisticated lexical-level matching. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 1.67 MB - **Size of the generated dataset:** 63.65 MB - **Total amount of disk used:** 65.32 MB An example of 'validation' looks as follows. 
``` This example was too long and was cropped: { "evidences": "[\"Allan sat down at his desk and pulled the chair in close .\", \"Opening a side drawer , he took out a piece of paper and his ink...", "label": 0, "passage": "\"Allan sat down at his desk and pulled the chair in close .\\nOpening a side drawer , he took out a piece of paper and his inkpot...", "query_and_answer": "Name few objects said to be in or on Allan 's desk || Eraser" } ``` ### Data Fields The data fields are the same among all splits. #### default - `passage`: a `string` feature. - `query_and_answer`: a `string` feature. - `label`: a classification label, with possible values including `False` (0), `True` (1). - `evidences`: a `list` of `string` features. ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|24029| 3214|4848| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information https://github.com/CogComp/multirc/blob/master/LICENSE Research and Academic Use License Cognitive Computation Group University of Illinois at Urbana-Champaign Downloading software implies that you accept the following license terms: Under this Agreement, The Board of Trustees of the University of Illinois ("University"), a body corporate and politic of the State of Illinois with its principal offices at 506 South Wright Street, Urbana, Illinois 61801, U.S.A., on behalf of its Department of Computer Science on the Urbana-Champaign Campus, provides the software ("Software") described in Appendix A, attached hereto and incorporated herein, to the Licensee identified below ("Licensee") subject to the following conditions: 1. Upon execution of this Agreement by Licensee below, the University grants, and Licensee accepts, a roylaty-free, non-exclusive license: A. 
To use unlimited copies of the Software for its own academic and research purposes. B. To make derivative works. However, if Licensee distributes any derivative work based on or derived from the Software (with such distribution limited to binary form only), then Licensee will (1) notify the University (c/o Professor Dan Roth, e-mail: danr@cs.uiuc.edu) regarding its distribution of the derivative work and provide a copy if requested, and (2) clearly notify users that such derivative work is a modified version and not the original Software distributed by the University. C. To redistribute (sublicense) derivative works based on the Software in binary form only to third parties provided that (1) the copyright notice and any accompanying legends or proprietary notices are reproduced on all copies, (2) no royalty is charged for such copies, and (3) third parties are restricted to using the derivative work for academic and research purposes only, without further sublicensing rights. No license is granted herein that would permit Licensee to incorporate the Software into a commercial product, or to otherwise commercially exploit the Software. Should Licensee wish to make commercial use of the Software, Licensee should contact the University, c/o the Office of Technology Management ("OTM") to negotiate an appropriate license for such commercial use. To contact the OTM: otmmailaccount@ad.uiuc.edu; telephone: (217)333-3781; fax: (217) 265-5530. 2. THE UNIVERSITY GIVES NO WARRANTIES, EITHER EXPRESSED OR IMPLIED, FOR THE SOFTWARE AND/OR ASSOCIATED MATERIALS PROVIDED UNDER THIS AGREEMENT, INCLUDING, WITHOUT LIMITATION, WARRANTY OF MERCHANTABILITY AND WARRANTY OF FITNESS FOR A PARTICULAR PURPOSE, AND ANY WARRANTY AGAINST INFRINGEMENT OF ANY INTELLECTUAL PROPERTY RIGHTS. 3. Licensee understands the Software is a research tool for which no warranties as to capabilities or accuracy are made, and Licensee accepts the Software on an "as is, with all defects" basis, without maintenance, debugging , support or improvement. Licensee assumes the entire risk as to the results and performance of the Software and/or associated materials. Licensee agrees that University shall not be held liable for any direct, indirect, consequential, or incidental damages with respect to any claim by Licensee or any third party on account of or arising from this Agreement or use of the Software and/or associated materials. 4. Licensee understands the Software is proprietary to the University. Licensee will take all reasonable steps to insure that the source code is protected and secured from unauthorized disclosure, use, or release and will treat it with at least the same level of care as Licensee would use to protect and secure its own proprietary computer programs and/or information, but using no less than reasonable care. 5. In the event that Licensee shall be in default in the performance of any material obligations under this Agreement, and if the default has not been remedied within sixty (60) days after the date of notice in writing of such default, University may terminate this Agreement by written notice. In the event of termination, Licensee shall promptly return to University the original and any copies of licensed Software in Licensee's possession. In the event of any termination of this Agreement, any and all sublicenses granted by Licensee to third parties pursuant to this Agreement (as permitted by this Agreement) prior to the date of such termination shall nevertheless remain in full force and effect. 6. 
The Software was developed, in part, with support from the National Science Foundation, and the Federal Government has certain license rights in the Software. 7. This Agreement shall be construed and interpreted in accordance with the laws of the State of Illinois, U.S.A.. 8. This Agreement shall be subject to all United States Government laws and regulations now and hereafter applicable to the subject matter of this Agreement, including specifically the Export Law provisions of the Departments of Commerce and State. Licensee will not export or re-export the Software without the appropriate United States or foreign government license. By its registration below, Licensee confirms that it understands the terms and conditions of this Agreement, and agrees to be bound by them. This Agreement shall become effective as of the date of execution by Licensee. ### Citation Information ``` @unpublished{eraser2019, title = {ERASER: A Benchmark to Evaluate Rationalized NLP Models}, author = {Jay DeYoung and Sarthak Jain and Nazneen Fatema Rajani and Eric Lehman and Caiming Xiong and Richard Socher and Byron C. Wallace} } @inproceedings{MultiRC2018, author = {Daniel Khashabi and Snigdha Chaturvedi and Michael Roth and Shyam Upadhyay and Dan Roth}, title = {Looking Beyond the Surface:A Challenge Set for Reading Comprehension over Multiple Sentences}, booktitle = {Proceedings of North American Chapter of the Association for Computational Linguistics (NAACL)}, year = {2018} } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
eraser_multi_rc
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["multiple-choice"], "task_ids": ["multiple-choice-qa"], "pretty_name": "Eraser MultiRC (Multi-Sentence Reading Comprehension)", "dataset_info": {"features": [{"name": "passage", "dtype": "string"}, {"name": "query_and_answer", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "False", "1": "True"}}}}, {"name": "evidences", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 9194475, "num_examples": 4848}, {"name": "train", "num_bytes": 47922877, "num_examples": 24029}, {"name": "validation", "num_bytes": 6529020, "num_examples": 3214}], "download_size": 1667550, "dataset_size": 63646372}}
2024-01-18T11:03:17+00:00
[]
[ "en" ]
TAGS #task_categories-multiple-choice #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #region-us
Dataset Card for "eraser\_multi\_rc" ==================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences * Point of Contact: * Size of downloaded dataset files: 1.67 MB * Size of the generated dataset: 63.65 MB * Total amount of disk used: 65.32 MB ### Dataset Summary MultiRC (Multi-Sentence Reading Comprehension) is a dataset of short paragraphs and multi-sentence questions that can be answered from the content of the paragraph. We have designed the dataset with three key challenges in mind: * The number of correct answer-options for each question is not pre-specified. This removes the over-reliance of current approaches on answer-options and forces them to decide on the correctness of each candidate answer independently of others. In other words, unlike previous work, the task here is not to simply identify the best answer-option, but to evaluate the correctness of each answer-option individually. * The correct answer(s) is not required to be a span in the text. * The paragraphs in our dataset have diverse provenance by being extracted from 7 different domains such as news, fiction, historical text etc., and hence are expected to be more diverse in their contents as compared to single-domain datasets. The goal of this dataset is to encourage the research community to explore approaches that can do more than sophisticated lexical-level matching. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 1.67 MB * Size of the generated dataset: 63.65 MB * Total amount of disk used: 65.32 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'passage': a 'string' feature. * 'query\_and\_answer': a 'string' feature. * 'label': a classification label, with possible values including 'False' (0), 'True' (1). * 'evidences': a 'list' of 'string' features. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information URL Research and Academic Use License Cognitive Computation Group University of Illinois at Urbana-Champaign Downloading software implies that you accept the following license terms: Under this Agreement, The Board of Trustees of the University of Illinois ("University"), a body corporate and politic of the State of Illinois with its principal offices at 506 South Wright Street, Urbana, Illinois 61801, U.S.A., on behalf of its Department of Computer Science on the Urbana-Champaign Campus, provides the software ("Software") described in Appendix A, attached hereto and incorporated herein, to the Licensee identified below ("Licensee") subject to the following conditions: ``` 1. Upon execution of this Agreement by Licensee below, the University grants, and Licensee accepts, a roylaty-free, non-exclusive license: A. To use unlimited copies of the Software for its own academic and research purposes. B. To make derivative works. However, if Licensee distributes any derivative work based on or derived from the Software (with such distribution limited to binary form only), then Licensee will (1) notify the University (c/o Professor Dan Roth, e-mail: danr@URL) regarding its distribution of the derivative work and provide a copy if requested, and (2) clearly notify users that such derivative work is a modified version and not the original Software distributed by the University. C. To redistribute (sublicense) derivative works based on the Software in binary form only to third parties provided that (1) the copyright notice and any accompanying legends or proprietary notices are reproduced on all copies, (2) no royalty is charged for such copies, and (3) third parties are restricted to using the derivative work for academic and research purposes only, without further sublicensing rights. No license is granted herein that would permit Licensee to incorporate the Software into a commercial product, or to otherwise commercially exploit the Software. Should Licensee wish to make commercial use of the Software, Licensee should contact the University, c/o the Office of Technology Management ("OTM") to negotiate an appropriate license for such commercial use. To contact the OTM: otmmailaccount@URL; telephone: (217)333-3781; fax: (217) 265-5530. 2. THE UNIVERSITY GIVES NO WARRANTIES, EITHER EXPRESSED OR IMPLIED, FOR THE SOFTWARE AND/OR ASSOCIATED MATERIALS PROVIDED UNDER THIS AGREEMENT, INCLUDING, WITHOUT LIMITATION, WARRANTY OF MERCHANTABILITY AND WARRANTY OF FITNESS FOR A PARTICULAR PURPOSE, AND ANY WARRANTY AGAINST INFRINGEMENT OF ANY INTELLECTUAL PROPERTY RIGHTS. 3. Licensee understands the Software is a research tool for which no warranties as to capabilities or accuracy are made, and Licensee accepts the Software on an "as is, with all defects" basis, without maintenance, debugging , support or improvement. Licensee assumes the entire risk as to the results and performance of the Software and/or associated materials. Licensee agrees that University shall not be held liable for any direct, indirect, consequential, or incidental damages with respect to any claim by Licensee or any third party on account of or arising from this Agreement or use of the Software and/or associated materials. 4. 
Licensee understands the Software is proprietary to the University. Licensee will take all reasonable steps to insure that the source code is protected and secured from unauthorized disclosure, use, or release and will treat it with at least the same level of care as Licensee would use to protect and secure its own proprietary computer programs and/or information, but using no less than reasonable care. 5. In the event that Licensee shall be in default in the performance of any material obligations under this Agreement, and if the default has not been remedied within sixty (60) days after the date of notice in writing of such default, University may terminate this Agreement by written notice. In the event of termination, Licensee shall promptly return to University the original and any copies of licensed Software in Licensee's possession. In the event of any termination of this Agreement, any and all sublicenses granted by Licensee to third parties pursuant to this Agreement (as permitted by this Agreement) prior to the date of such termination shall nevertheless remain in full force and effect. 6. The Software was developed, in part, with support from the National Science Foundation, and the Federal Government has certain license rights in the Software. 7. This Agreement shall be construed and interpreted in accordance with the laws of the State of Illinois, U.S.A.. 8. This Agreement shall be subject to all United States Government laws and regulations now and hereafter applicable to the subject matter of this Agreement, including specifically the Export Law provisions of the Departments of Commerce and State. Licensee will not export or re-export the Software without the appropriate United States or foreign government license. ``` By its registration below, Licensee confirms that it understands the terms and conditions of this Agreement, and agrees to be bound by them. This Agreement shall become effective as of the date of execution by Licensee. ### Contributions Thanks to @lewtun, @patrickvonplaten, @thomwolf for adding this dataset.
[ "### Dataset Summary\n\n\nMultiRC (Multi-Sentence Reading Comprehension) is a dataset of short paragraphs and multi-sentence questions that can be answered from the content of the paragraph.\n\n\nWe have designed the dataset with three key challenges in mind:\n\n\n* The number of correct answer-options for each question is not pre-specified. This removes the over-reliance of current approaches on answer-options and forces them to decide on the correctness of each candidate answer independently of others. In other words, unlike previous work, the task here is not to simply identify the best answer-option, but to evaluate the correctness of each answer-option individually.\n* The correct answer(s) is not required to be a span in the text.\n* The paragraphs in our dataset have diverse provenance by being extracted from 7 different domains such as news, fiction, historical text etc., and hence are expected to be more diverse in their contents as compared to single-domain datasets.\n\n\nThe goal of this dataset is to encourage the research community to explore approaches that can do more than sophisticated lexical-level matching.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 1.67 MB\n* Size of the generated dataset: 63.65 MB\n* Total amount of disk used: 65.32 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'passage': a 'string' feature.\n* 'query\\_and\\_answer': a 'string' feature.\n* 'label': a classification label, with possible values including 'False' (0), 'True' (1).\n* 'evidences': a 'list' of 'string' features.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nURL\n\n\nResearch and Academic Use License\nCognitive Computation Group\nUniversity of Illinois at Urbana-Champaign\n\n\nDownloading software implies that you accept the following license terms:\n\n\nUnder this Agreement, The Board of Trustees of the University of Illinois (\"University\"), a body corporate and politic of the State of Illinois with its principal offices at 506 South Wright Street, Urbana, Illinois 61801, U.S.A., on behalf of its Department of Computer Science on the Urbana-Champaign Campus, provides the software (\"Software\") described in Appendix A, attached hereto and incorporated herein, to the Licensee identified below (\"Licensee\") subject to the following conditions:\n\n\n\n```\n1. Upon execution of this Agreement by Licensee below, the University grants, and Licensee accepts, a roylaty-free, non-exclusive license:\n\tA. To use unlimited copies of the Software for its own academic and research purposes.\n\tB. To make derivative works. 
However, if Licensee distributes any derivative work based on or derived from the Software (with such distribution limited to binary form only), then Licensee will (1) notify the University (c/o Professor Dan Roth, e-mail: danr@URL) regarding its distribution of the derivative work and provide a copy if requested, and (2) clearly notify users that such derivative work is a modified version and not the original Software distributed by the University.\n\tC. To redistribute (sublicense) derivative works based on the Software in binary form only to third parties provided that (1) the copyright notice and any accompanying legends or proprietary notices are reproduced on all copies, (2) no royalty is charged for such copies, and (3) third parties are restricted to using the derivative work for academic and research purposes only, without further sublicensing rights.\nNo license is granted herein that would permit Licensee to incorporate the Software into a commercial product, or to otherwise commercially exploit the Software. Should Licensee wish to make commercial use of the Software, Licensee should contact the University, c/o the Office of Technology Management (\"OTM\") to negotiate an appropriate license for such commercial use. To contact the OTM: otmmailaccount@URL; telephone: (217)333-3781; fax: (217) 265-5530.\n2. THE UNIVERSITY GIVES NO WARRANTIES, EITHER EXPRESSED OR IMPLIED, FOR THE SOFTWARE AND/OR ASSOCIATED MATERIALS PROVIDED UNDER THIS AGREEMENT, INCLUDING, WITHOUT LIMITATION, WARRANTY OF MERCHANTABILITY AND WARRANTY OF FITNESS FOR A PARTICULAR PURPOSE, AND ANY WARRANTY AGAINST INFRINGEMENT OF ANY INTELLECTUAL PROPERTY RIGHTS.\n3. Licensee understands the Software is a research tool for which no warranties as to capabilities or accuracy are made, and Licensee accepts the Software on an \"as is, with all defects\" basis, without maintenance, debugging , support or improvement. Licensee assumes the entire risk as to the results and performance of the Software and/or associated materials. Licensee agrees that University shall not be held liable for any direct, indirect, consequential, or incidental damages with respect to any claim by Licensee or any third party on account of or arising from this Agreement or use of the Software and/or associated materials.\n4. Licensee understands the Software is proprietary to the University. Licensee will take all reasonable steps to insure that the source code is protected and secured from unauthorized disclosure, use, or release and will treat it with at least the same level of care as Licensee would use to protect and secure its own proprietary computer programs and/or information, but using no less than reasonable care.\n5. In the event that Licensee shall be in default in the performance of any material obligations under this Agreement, and if the default has not been remedied within sixty (60) days after the date of notice in writing of such default, University may terminate this Agreement by written notice. In the event of termination, Licensee shall promptly return to University the original and any copies of licensed Software in Licensee's possession. In the event of any termination of this Agreement, any and all sublicenses granted by Licensee to third parties pursuant to this Agreement (as permitted by this Agreement) prior to the date of such termination shall nevertheless remain in full force and effect.\n6. 
The Software was developed, in part, with support from the National Science Foundation, and the Federal Government has certain license rights in the Software.\n7. This Agreement shall be construed and interpreted in accordance with the laws of the State of Illinois, U.S.A..\n8. This Agreement shall be subject to all United States Government laws and regulations now and hereafter applicable to the subject matter of this Agreement, including specifically the Export Law provisions of the Departments of Commerce and State. Licensee will not export or re-export the Software without the appropriate United States or foreign government license.\n\n```\n\nBy its registration below, Licensee confirms that it understands the terms and conditions of this Agreement, and agrees to be bound by them. This Agreement shall become effective as of the date of execution by Licensee.", "### Contributions\n\n\nThanks to @lewtun, @patrickvonplaten, @thomwolf for adding this dataset." ]
[ "TAGS\n#task_categories-multiple-choice #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #region-us \n", "### Dataset Summary\n\n\nMultiRC (Multi-Sentence Reading Comprehension) is a dataset of short paragraphs and multi-sentence questions that can be answered from the content of the paragraph.\n\n\nWe have designed the dataset with three key challenges in mind:\n\n\n* The number of correct answer-options for each question is not pre-specified. This removes the over-reliance of current approaches on answer-options and forces them to decide on the correctness of each candidate answer independently of others. In other words, unlike previous work, the task here is not to simply identify the best answer-option, but to evaluate the correctness of each answer-option individually.\n* The correct answer(s) is not required to be a span in the text.\n* The paragraphs in our dataset have diverse provenance by being extracted from 7 different domains such as news, fiction, historical text etc., and hence are expected to be more diverse in their contents as compared to single-domain datasets.\n\n\nThe goal of this dataset is to encourage the research community to explore approaches that can do more than sophisticated lexical-level matching.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 1.67 MB\n* Size of the generated dataset: 63.65 MB\n* Total amount of disk used: 65.32 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'passage': a 'string' feature.\n* 'query\\_and\\_answer': a 'string' feature.\n* 'label': a classification label, with possible values including 'False' (0), 'True' (1).\n* 'evidences': a 'list' of 'string' features.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nURL\n\n\nResearch and Academic Use License\nCognitive Computation Group\nUniversity of Illinois at Urbana-Champaign\n\n\nDownloading software implies that you accept the following license terms:\n\n\nUnder this Agreement, The Board of Trustees of the University of Illinois (\"University\"), a body corporate and politic of the State of Illinois with its principal offices at 506 South Wright Street, Urbana, Illinois 61801, U.S.A., on behalf of its Department of Computer Science on the Urbana-Champaign Campus, provides the software (\"Software\") described in Appendix A, attached hereto and incorporated herein, to the Licensee identified below (\"Licensee\") subject to the following conditions:\n\n\n\n```\n1. Upon execution of this Agreement by Licensee below, the University grants, and Licensee accepts, a roylaty-free, non-exclusive license:\n\tA. 
To use unlimited copies of the Software for its own academic and research purposes.\n\tB. To make derivative works. However, if Licensee distributes any derivative work based on or derived from the Software (with such distribution limited to binary form only), then Licensee will (1) notify the University (c/o Professor Dan Roth, e-mail: danr@URL) regarding its distribution of the derivative work and provide a copy if requested, and (2) clearly notify users that such derivative work is a modified version and not the original Software distributed by the University.\n\tC. To redistribute (sublicense) derivative works based on the Software in binary form only to third parties provided that (1) the copyright notice and any accompanying legends or proprietary notices are reproduced on all copies, (2) no royalty is charged for such copies, and (3) third parties are restricted to using the derivative work for academic and research purposes only, without further sublicensing rights.\nNo license is granted herein that would permit Licensee to incorporate the Software into a commercial product, or to otherwise commercially exploit the Software. Should Licensee wish to make commercial use of the Software, Licensee should contact the University, c/o the Office of Technology Management (\"OTM\") to negotiate an appropriate license for such commercial use. To contact the OTM: otmmailaccount@URL; telephone: (217)333-3781; fax: (217) 265-5530.\n2. THE UNIVERSITY GIVES NO WARRANTIES, EITHER EXPRESSED OR IMPLIED, FOR THE SOFTWARE AND/OR ASSOCIATED MATERIALS PROVIDED UNDER THIS AGREEMENT, INCLUDING, WITHOUT LIMITATION, WARRANTY OF MERCHANTABILITY AND WARRANTY OF FITNESS FOR A PARTICULAR PURPOSE, AND ANY WARRANTY AGAINST INFRINGEMENT OF ANY INTELLECTUAL PROPERTY RIGHTS.\n3. Licensee understands the Software is a research tool for which no warranties as to capabilities or accuracy are made, and Licensee accepts the Software on an \"as is, with all defects\" basis, without maintenance, debugging , support or improvement. Licensee assumes the entire risk as to the results and performance of the Software and/or associated materials. Licensee agrees that University shall not be held liable for any direct, indirect, consequential, or incidental damages with respect to any claim by Licensee or any third party on account of or arising from this Agreement or use of the Software and/or associated materials.\n4. Licensee understands the Software is proprietary to the University. Licensee will take all reasonable steps to insure that the source code is protected and secured from unauthorized disclosure, use, or release and will treat it with at least the same level of care as Licensee would use to protect and secure its own proprietary computer programs and/or information, but using no less than reasonable care.\n5. In the event that Licensee shall be in default in the performance of any material obligations under this Agreement, and if the default has not been remedied within sixty (60) days after the date of notice in writing of such default, University may terminate this Agreement by written notice. In the event of termination, Licensee shall promptly return to University the original and any copies of licensed Software in Licensee's possession. In the event of any termination of this Agreement, any and all sublicenses granted by Licensee to third parties pursuant to this Agreement (as permitted by this Agreement) prior to the date of such termination shall nevertheless remain in full force and effect.\n6. 
The Software was developed, in part, with support from the National Science Foundation, and the Federal Government has certain license rights in the Software.\n7. This Agreement shall be construed and interpreted in accordance with the laws of the State of Illinois, U.S.A..\n8. This Agreement shall be subject to all United States Government laws and regulations now and hereafter applicable to the subject matter of this Agreement, including specifically the Export Law provisions of the Departments of Commerce and State. Licensee will not export or re-export the Software without the appropriate United States or foreign government license.\n\n```\n\nBy its registration below, Licensee confirms that it understands the terms and conditions of this Agreement, and agrees to be bound by them. This Agreement shall become effective as of the date of execution by Licensee.", "### Contributions\n\n\nThanks to @lewtun, @patrickvonplaten, @thomwolf for adding this dataset." ]
[ 89, 257, 10, 11, 6, 52, 17, 78, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 1183, 28 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #region-us \n### Dataset Summary\n\n\nMultiRC (Multi-Sentence Reading Comprehension) is a dataset of short paragraphs and multi-sentence questions that can be answered from the content of the paragraph.\n\n\nWe have designed the dataset with three key challenges in mind:\n\n\n* The number of correct answer-options for each question is not pre-specified. This removes the over-reliance of current approaches on answer-options and forces them to decide on the correctness of each candidate answer independently of others. In other words, unlike previous work, the task here is not to simply identify the best answer-option, but to evaluate the correctness of each answer-option individually.\n* The correct answer(s) is not required to be a span in the text.\n* The paragraphs in our dataset have diverse provenance by being extracted from 7 different domains such as news, fiction, historical text etc., and hence are expected to be more diverse in their contents as compared to single-domain datasets.\n\n\nThe goal of this dataset is to encourage the research community to explore approaches that can do more than sophisticated lexical-level matching.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 1.67 MB\n* Size of the generated dataset: 63.65 MB\n* Total amount of disk used: 65.32 MB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.", "passage: #### default\n\n\n* 'passage': a 'string' feature.\n* 'query\\_and\\_answer': a 'string' feature.\n* 'label': a classification label, with possible values including 'False' (0), 'True' (1).\n* 'evidences': a 'list' of 'string' features.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators" ]
824ed2cecbf76c757a1e5a63230e77e9117a2e39
# Dataset Card for "esnli" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/OanaMariaCamburu/e-SNLI](https://github.com/OanaMariaCamburu/e-SNLI) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 204.51 MB - **Size of the generated dataset:** 114.84 MB - **Total amount of disk used:** 319.35 MB ### Dataset Summary The e-SNLI dataset extends the Stanford Natural Language Inference Dataset to include human-annotated natural language explanations of the entailment relations. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 204.51 MB - **Size of the generated dataset:** 114.84 MB - **Total amount of disk used:** 319.35 MB An example of 'validation' looks as follows. ``` { "explanation_1": "A woman must be present to smile.", "explanation_2": "A woman smiling implies that she is present.", "explanation_3": "A smiling woman is also present.", "hypothesis": "A woman is present.", "label": 0, "premise": "A woman smiles at the child." } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `premise`: a `string` feature. - `hypothesis`: a `string` feature. - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). - `explanation_1`: a `string` feature. - `explanation_2`: a `string` feature. - `explanation_3`: a `string` feature. 
### Data Splits | name |train |validation|test| |----------|-----:|---------:|---:| |plain_text|549367| 9842|9824| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @incollection{NIPS2018_8163, title = {e-SNLI: Natural Language Inference with Natural Language Explanations}, author = {Camburu, Oana-Maria and Rockt"{a}schel, Tim and Lukasiewicz, Thomas and Blunsom, Phil}, booktitle = {Advances in Neural Information Processing Systems 31}, editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett}, pages = {9539--9549}, year = {2018}, publisher = {Curran Associates, Inc.}, url = {http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
esnli
[ "language:en", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "paperswithcode_id": "e-snli", "pretty_name": "e-SNLI", "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "explanation_1", "dtype": "string"}, {"name": "explanation_2", "dtype": "string"}, {"name": "explanation_3", "dtype": "string"}], "config_name": "plain_text", "splits": [{"name": "test", "num_bytes": 3387169, "num_examples": 9824}, {"name": "train", "num_bytes": 108024142, "num_examples": 549367}, {"name": "validation", "num_bytes": 3423725, "num_examples": 9842}], "download_size": 204516010, "dataset_size": 114835036}}
2024-01-18T11:03:18+00:00
[]
[ "en" ]
TAGS #language-English #region-us
Dataset Card for "esnli" ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 204.51 MB * Size of the generated dataset: 114.84 MB * Total amount of disk used: 319.35 MB ### Dataset Summary The e-SNLI dataset extends the Stanford Natural Language Inference Dataset to include human-annotated natural language explanations of the entailment relations. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### plain\_text * Size of downloaded dataset files: 204.51 MB * Size of the generated dataset: 114.84 MB * Total amount of disk used: 319.35 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### plain\_text * 'premise': a 'string' feature. * 'hypothesis': a 'string' feature. * 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2). * 'explanation\_1': a 'string' feature. * 'explanation\_2': a 'string' feature. * 'explanation\_3': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @lewtun, @albertvillanova, @patrickvonplaten for adding this dataset.
[ "### Dataset Summary\n\n\nThe e-SNLI dataset extends the Stanford Natural Language Inference Dataset to\ninclude human-annotated natural language explanations of the entailment\nrelations.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 204.51 MB\n* Size of the generated dataset: 114.84 MB\n* Total amount of disk used: 319.35 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'explanation\\_1': a 'string' feature.\n* 'explanation\\_2': a 'string' feature.\n* 'explanation\\_3': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @albertvillanova, @patrickvonplaten for adding this dataset." ]
[ "TAGS\n#language-English #region-us \n", "### Dataset Summary\n\n\nThe e-SNLI dataset extends the Stanford Natural Language Inference Dataset to\ninclude human-annotated natural language explanations of the entailment\nrelations.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 204.51 MB\n* Size of the generated dataset: 114.84 MB\n* Total amount of disk used: 319.35 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'explanation\\_1': a 'string' feature.\n* 'explanation\\_2': a 'string' feature.\n* 'explanation\\_3': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @albertvillanova, @patrickvonplaten for adding this dataset." ]
[ 10, 42, 10, 11, 6, 57, 17, 112, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 34 ]
[ "passage: TAGS\n#language-English #region-us \n### Dataset Summary\n\n\nThe e-SNLI dataset extends the Stanford Natural Language Inference Dataset to\ninclude human-annotated natural language explanations of the entailment\nrelations.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### plain\\_text\n\n\n* Size of downloaded dataset files: 204.51 MB\n* Size of the generated dataset: 114.84 MB\n* Total amount of disk used: 319.35 MB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### plain\\_text\n\n\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'explanation\\_1': a 'string' feature.\n* 'explanation\\_2': a 'string' feature.\n* 'explanation\\_3': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information### Contributions\n\n\nThanks to @thomwolf, @lewtun, @albertvillanova, @patrickvonplaten for adding this dataset." ]
ed2e19157c7f48d99c649fa78b0aa6cb96748a28
# Dataset Card for ethpy150open ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.sri.inf.ethz.ch/py150 - **Repository:** https://github.com/google-research-datasets/eth_py150_open - **Paper:** https://proceedings.icml.cc/static/paper_files/icml/2020/5401-Paper.pdf - **Leaderboard:** None - **Point of Contact:** Aditya Kanade <kanade@iisc.ac.in>, Petros Maniatis <maniatis@google.com> ### Dataset Summary A redistributable subset of the [ETH Py150 corpus](https://www.sri.inf.ethz.ch/py150), introduced in the ICML 2020 paper ['Learning and Evaluating Contextual Embedding of Source Code'](https://proceedings.icml.cc/static/paper_files/icml/2020/5401-Paper.pdf) ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure List of dicts of { "filepath": The relative URL containing the path to the file on GitHub "license": The license used for that specific file or repository } ### Data Instances { "filepath": "0rpc/zerorpc-python/setup.py", "license": "mit" }, { "filepath": "0rpc/zerorpc-python/zerorpc/heartbeat.py", "license": "mit" }, ### Data Fields - `filepath`: The relative URL containing the path to the file on GitHub - `license`: The license used for that specific file or repository ### Data Splits | | Train | Valid | Test | | ----- | ------- | ----- | ----- | | Dataset Split | 74749 | 8302 | 41457 | ## Dataset Creation The original dataset is at https://www.sri.inf.ethz.ch/py150 ### Curation Rationale To generate a more redistributable version of the dataset ### Source Data #### Initial Data Collection and Normalization All the urls are filepaths relative to GitHub and the master branch was used as available at the time #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Apache License 2.0 ### Citation Information @inproceedings{kanade2020learning, title={Learning and Evaluating Contextual Embedding of Source Code}, author={Kanade, Aditya and Maniatis, Petros and Balakrishnan, Gogul and Shi, Kensen}, booktitle={International Conference on Machine Learning}, pages={5110--5121}, year={2020}, organization={PMLR} } ### Contributions Thanks to [@Bharat123rox](https://github.com/Bharat123rox) for adding this dataset.
eth_py150_open
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:apache-2.0", "contextual-embeddings", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "paperswithcode_id": "eth-py150-open", "pretty_name": "ethpy150open", "tags": ["contextual-embeddings"], "dataset_info": {"features": [{"name": "filepath", "dtype": "string"}, {"name": "license", "dtype": "string"}], "config_name": "eth_py150_open", "splits": [{"name": "train", "num_bytes": 5414978, "num_examples": 74749}, {"name": "test", "num_bytes": 3006199, "num_examples": 41457}, {"name": "validation", "num_bytes": 598524, "num_examples": 8302}], "download_size": 13875671, "dataset_size": 9019701}}
2024-01-18T11:03:19+00:00
[]
[ "en" ]
TAGS #task_categories-other #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-apache-2.0 #contextual-embeddings #region-us
Dataset Card for ethpy150open ============================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: None * Point of Contact: Aditya Kanade [kanade@URL](mailto:kanade@URL), Petros Maniatis [maniatis@URL](mailto:maniatis@URL) ### Dataset Summary A redistributable subset of the ETH Py150 corpus, introduced in the ICML 2020 paper 'Learning and Evaluating Contextual Embedding of Source Code' ### Supported Tasks and Leaderboards ### Languages English Dataset Structure ----------------- List of dicts of { "filepath": The relative URL containing the path to the file on GitHub "license": The license used for that specific file or repository } ### Data Instances { "filepath": "0rpc/zerorpc-python/URL", "license": "mit" }, { "filepath": "0rpc/zerorpc-python/zerorpc/URL", "license": "mit" }, ### Data Fields * 'filepath': The relative URL containing the path to the file on GitHub * 'license': The license used for that specific file or repository ### Data Splits Dataset Creation ---------------- The original dataset is at URL ### Curation Rationale To generate a more redistributable version of the dataset ### Source Data #### Initial Data Collection and Normalization All the urls are filepaths relative to GitHub and the master branch was used as available at the time #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Apache License 2.0 @inproceedings{kanade2020learning, title={Learning and Evaluating Contextual Embedding of Source Code}, author={Kanade, Aditya and Maniatis, Petros and Balakrishnan, Gogul and Shi, Kensen}, booktitle={International Conference on Machine Learning}, pages={5110--5121}, year={2020}, organization={PMLR} } ### Contributions Thanks to @Bharat123rox for adding this dataset.
[ "### Dataset Summary\n\n\nA redistributable subset of the ETH Py150 corpus, introduced in the ICML 2020 paper 'Learning and Evaluating Contextual Embedding of Source Code'", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------\n\n\nList of dicts of\n{\n\"filepath\": The relative URL containing the path to the file on GitHub\n\"license\": The license used for that specific file or repository\n}", "### Data Instances\n\n\n{\n\"filepath\": \"0rpc/zerorpc-python/URL\",\n\"license\": \"mit\"\n},\n{\n\"filepath\": \"0rpc/zerorpc-python/zerorpc/URL\",\n\"license\": \"mit\"\n},", "### Data Fields\n\n\n* 'filepath': The relative URL containing the path to the file on GitHub\n* 'license': The license used for that specific file or repository", "### Data Splits\n\n\n\nDataset Creation\n----------------\n\n\nThe original dataset is at URL", "### Curation Rationale\n\n\nTo generate a more redistributable version of the dataset", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nAll the urls are filepaths relative to GitHub and the master branch was used as available at the time", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nApache License 2.0\n\n\n@inproceedings{kanade2020learning,\ntitle={Learning and Evaluating Contextual Embedding of Source Code},\nauthor={Kanade, Aditya and Maniatis, Petros and Balakrishnan, Gogul and Shi, Kensen},\nbooktitle={International Conference on Machine Learning},\npages={5110--5121},\nyear={2020},\norganization={PMLR}\n}", "### Contributions\n\n\nThanks to @Bharat123rox for adding this dataset." ]
[ "TAGS\n#task_categories-other #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-apache-2.0 #contextual-embeddings #region-us \n", "### Dataset Summary\n\n\nA redistributable subset of the ETH Py150 corpus, introduced in the ICML 2020 paper 'Learning and Evaluating Contextual Embedding of Source Code'", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------\n\n\nList of dicts of\n{\n\"filepath\": The relative URL containing the path to the file on GitHub\n\"license\": The license used for that specific file or repository\n}", "### Data Instances\n\n\n{\n\"filepath\": \"0rpc/zerorpc-python/URL\",\n\"license\": \"mit\"\n},\n{\n\"filepath\": \"0rpc/zerorpc-python/zerorpc/URL\",\n\"license\": \"mit\"\n},", "### Data Fields\n\n\n* 'filepath': The relative URL containing the path to the file on GitHub\n* 'license': The license used for that specific file or repository", "### Data Splits\n\n\n\nDataset Creation\n----------------\n\n\nThe original dataset is at URL", "### Curation Rationale\n\n\nTo generate a more redistributable version of the dataset", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nAll the urls are filepaths relative to GitHub and the master branch was used as available at the time", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nApache License 2.0\n\n\n@inproceedings{kanade2020learning,\ntitle={Learning and Evaluating Contextual Embedding of Source Code},\nauthor={Kanade, Aditya and Maniatis, Petros and Balakrishnan, Gogul and Shi, Kensen},\nbooktitle={International Conference on Machine Learning},\npages={5110--5121},\nyear={2020},\norganization={PMLR}\n}", "### Contributions\n\n\nThanks to @Bharat123rox for adding this dataset." ]
[ 86, 42, 10, 55, 70, 42, 18, 18, 4, 36, 10, 5, 5, 9, 18, 7, 8, 14, 6, 102, 20 ]
[ "passage: TAGS\n#task_categories-other #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-apache-2.0 #contextual-embeddings #region-us \n### Dataset Summary\n\n\nA redistributable subset of the ETH Py150 corpus, introduced in the ICML 2020 paper 'Learning and Evaluating Contextual Embedding of Source Code'### Supported Tasks and Leaderboards### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------\n\n\nList of dicts of\n{\n\"filepath\": The relative URL containing the path to the file on GitHub\n\"license\": The license used for that specific file or repository\n}### Data Instances\n\n\n{\n\"filepath\": \"0rpc/zerorpc-python/URL\",\n\"license\": \"mit\"\n},\n{\n\"filepath\": \"0rpc/zerorpc-python/zerorpc/URL\",\n\"license\": \"mit\"\n},### Data Fields\n\n\n* 'filepath': The relative URL containing the path to the file on GitHub\n* 'license': The license used for that specific file or repository### Data Splits\n\n\n\nDataset Creation\n----------------\n\n\nThe original dataset is at URL### Curation Rationale\n\n\nTo generate a more redistributable version of the dataset### Source Data#### Initial Data Collection and Normalization\n\n\nAll the urls are filepaths relative to GitHub and the master branch was used as available at the time#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators" ]
b239d9c437c4d74e643b973d5e57782549aaab81
# Dataset Card for Ethos

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [ETHOS Hate Speech Dataset](https://github.com/intelligence-csd-auth-gr/Ethos-Hate-Speech-Dataset)
- **Repository:** [ETHOS Hate Speech Dataset](https://github.com/intelligence-csd-auth-gr/Ethos-Hate-Speech-Dataset)
- **Paper:** [ETHOS: an Online Hate Speech Detection Dataset](https://arxiv.org/abs/2006.08328)

### Dataset Summary

ETHOS: onlinE haTe speecH detectiOn dataSet. This repository contains a dataset for hate speech detection on social media platforms, called Ethos. There are two variations of the dataset:

- **Ethos_Dataset_Binary**: contains 998 comments, each paired with a label about hate speech *presence* or *absence*. 565 of them do not contain hate speech, while the remaining 433 do.
- **Ethos_Dataset_Multi_Label**: contains 8 labels for the 433 comments with hate speech content. These labels are *violence* (if it incites (1) or not (0) violence), *directed_vs_general* (if it is directed to a person (1) or a group (0)), and 6 labels for the category of hate speech, namely *gender*, *race*, *national_origin*, *disability*, *religion* and *sexual_orientation*.

***Ethos /ˈiːθɒs/*** is a Greek word meaning “character” that is used to describe the guiding beliefs or ideals that characterize a community, nation, or ideology. The Greeks also used this word to refer to the power of music to influence emotions, behaviors, and even morals.

### Supported Tasks and Leaderboards

- `text-classification-other-Hate Speech Detection`, `sentiment-classification`, `multi-label-classification`: The dataset can be used to train a model for hate speech detection. Moreover, it can be used as a benchmark dataset for multi-label classification algorithms.

### Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

A typical data point in the binary version comprises a comment, with a `text` field containing the text and a `label` describing whether the comment contains hate speech content (1 - hate-speech) or not (0 - non-hate-speech). The multilabel version adds further labels: *violence* (if it incites (1) or not (0) violence), *directed_vs_general* (if it is directed to a person (1) or a group (0)), and 6 labels for the category of hate speech, namely *gender*, *race*, *national_origin*, *disability*, *religion* and *sexual_orientation*.
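Both variations ship as separate configurations of the same loader. The sketch below is one way to load them with the Hugging Face `datasets` library and, for the multilabel configuration, how the eight label columns might be stacked into a single multi-hot vector; the loader id (`ethos`), the config names (`binary`, `multilabel`) and the single `train` split follow the metadata of this card, while the multi-hot preprocessing step is an illustrative assumption rather than something prescribed by the dataset. The concrete instances below show what the raw records look like.

```python
from datasets import load_dataset

# Both configurations expose a single "train" split.
binary = load_dataset("ethos", "binary", split="train")
multilabel = load_dataset("ethos", "multilabel", split="train")

print(binary[0])  # e.g. {'text': '...', 'label': 0}

# Illustrative (assumed) preprocessing: stack the eight label columns of the
# multilabel configuration into one multi-hot vector per comment.
LABEL_COLUMNS = [
    "violence", "directed_vs_generalized", "gender", "race",
    "national_origin", "disability", "religion", "sexual_orientation",
]
example = multilabel[0]
multi_hot = [example[name] for name in LABEL_COLUMNS]
print(example["text"], multi_hot)
```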
An example from the binary version, which is offensive but does not contain hate speech content:
```
{'text': 'What the fuck stupid people !!!',
 'label': 0}
```
An example from the multi-label version, which contains hate speech content towards women (gender):
```
{'text': "You should know women's sports are a joke",
 'violence': 0,
 'directed_vs_generalized': 0,
 'gender': 1,
 'race': 0,
 'national_origin': 0,
 'disability': 0,
 'religion': 0,
 'sexual_orientation': 0}
```

### Data Fields

Ethos Binary:

- `text`: a `string` feature containing the text of the comment.
- `label`: a classification label, with possible values including `no_hate_speech`, `hate_speech`.

Ethos Multilabel:

- `text`: a `string` feature containing the text of the comment.
- `violence`: a classification label, with possible values including `not_violent`, `violent`.
- `directed_vs_generalized`: a classification label, with possible values including `generalized`, `directed`.
- `gender`: a classification label, with possible values including `false`, `true`.
- `race`: a classification label, with possible values including `false`, `true`.
- `national_origin`: a classification label, with possible values including `false`, `true`.
- `disability`: a classification label, with possible values including `false`, `true`.
- `religion`: a classification label, with possible values including `false`, `true`.
- `sexual_orientation`: a classification label, with possible values including `false`, `true`.

### Data Splits

The data is split into binary and multilabel configurations. Multilabel is a subset of the binary version.

| Config     | Instances | Labels |
| ---------- | --------- | ------ |
| binary     | 998       | 1      |
| multilabel | 433       | 8      |

## Dataset Creation

### Curation Rationale

The dataset was built by gathering online comments from YouTube videos and Reddit, drawn from videos and subreddits which may attract hate speech content.

### Source Data

#### Initial Data Collection and Normalization

The initial data we used are from the Hatebusters platform: [Original data used](https://intelligence.csd.auth.gr/topics/hate-speech-detection/), but they were not included in this dataset.

#### Who are the source language producers?

The language producers are users of Reddit and YouTube. More information can be found in this paper: [ETHOS: an Online Hate Speech Detection Dataset](https://arxiv.org/abs/2006.08328)

### Annotations

#### Annotation process

The annotation process is detailed in the third section of this paper: [ETHOS: an Online Hate Speech Detection Dataset](https://arxiv.org/abs/2006.08328)

#### Who are the annotators?

Originally annotated by Ioannis Mollas and validated through the Figure Eight platform (Appen).

### Personal and Sensitive Information

No personal or sensitive information is included in the dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset will help the evolution of automated hate speech detection tools. Such tools have great impact on preventing social issues.

### Discussion of Biases

This dataset tries to be unbiased towards its classes and labels.

### Other Known Limitations

The dataset is relatively small and should be used in combination with larger datasets.

## Additional Information

### Dataset Curators

The dataset was initially created by the [Intelligent Systems Lab](https://intelligence.csd.auth.gr).

### Licensing Information

The licensing status of the dataset is [GNU GPLv3](https://choosealicense.com/licenses/gpl-3.0/).
### Citation Information

```
@misc{mollas2020ethos,
    title={ETHOS: an Online Hate Speech Detection Dataset},
    author={Ioannis Mollas and Zoe Chrysopoulou and Stamatis Karlos and Grigorios Tsoumakas},
    year={2020},
    eprint={2006.08328},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@iamollas](https://github.com/iamollas) for adding this dataset.
ethos
[ "task_categories:text-classification", "task_ids:multi-label-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "language_creators:other", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:agpl-3.0", "Hate Speech Detection", "arxiv:2006.08328", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["found", "other"], "language": ["en"], "license": ["agpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification", "sentiment-classification"], "paperswithcode_id": "ethos", "pretty_name": "onlinE haTe speecH detectiOn dataSet", "config_names": ["binary", "multilabel"], "tags": ["Hate Speech Detection"], "dataset_info": [{"config_name": "binary", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "no_hate_speech", "1": "hate_speech"}}}}], "splits": [{"name": "train", "num_bytes": 124823, "num_examples": 998}], "download_size": 123919, "dataset_size": 124823}, {"config_name": "multilabel", "features": [{"name": "text", "dtype": "string"}, {"name": "violence", "dtype": {"class_label": {"names": {"0": "not_violent", "1": "violent"}}}}, {"name": "directed_vs_generalized", "dtype": {"class_label": {"names": {"0": "generalied", "1": "directed"}}}}, {"name": "gender", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "race", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "national_origin", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "disability", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "religion", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}, {"name": "sexual_orientation", "dtype": {"class_label": {"names": {"0": "false", "1": "true"}}}}], "splits": [{"name": "train", "num_bytes": 79112, "num_examples": 433}], "download_size": 62836, "dataset_size": 79112}]}
2024-01-18T11:03:20+00:00
[ "2006.08328" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-label-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-found #language_creators-other #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-agpl-3.0 #Hate Speech Detection #arxiv-2006.08328 #region-us
Dataset Card for Ethos ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: ETHOS Hate Speech Dataset * Repository:ETHOS Hate Speech Dataset * Paper:ETHOS: an Online Hate Speech Detection Dataset ### Dataset Summary ETHOS: onlinE haTe speecH detectiOn dataSet. This repository contains a dataset for hate speech detection on social media platforms, called Ethos. There are two variations of the dataset: * Ethos\_Dataset\_Binary: contains 998 comments in the dataset alongside with a label about hate speech *presence* or *absence*. 565 of them do not contain hate speech, while the rest of them, 433, contain. * Ethos\_Dataset\_Multi\_Label which contains 8 labels for the 433 comments with hate speech content. These labels are *violence* (if it incites (1) or not (0) violence), *directed\_vs\_general* (if it is directed to a person (1) or a group (0)), and 6 labels about the category of hate speech like, *gender*, *race*, *national\_origin*, *disability*, *religion* and *sexual\_orientation*. *Ethos /ˈiːθɒs/* is a Greek word meaning “character” that is used to describe the guiding beliefs or ideals that characterize a community, nation, or ideology. The Greeks also used this word to refer to the power of music to influence emotions, behaviors, and even morals. ### Supported Tasks and Leaderboards * 'text-classification-other-Hate Speech Detection', 'sentiment-classification','multi-label-classification': The dataset can be used to train a model for hate speech detection. Moreover, it can be used as a benchmark dataset for multi label classification algorithms. ### Languages The text in the dataset is in English. Dataset Structure ----------------- ### Data Instances A typical data point in the binary version comprises a comment, with a 'text' containing the text and a 'label' describing if a comment contains hate speech content (1 - hate-speech) or not (0 - non-hate-speech). In the multilabel version more labels like *violence* (if it incites (1) or not (0) violence), *directed\_vs\_general* (if it is directed to a person (1) or a group (0)), and 6 labels about the category of hate speech like, *gender*, *race*, *national\_origin*, *disability*, *religion* and *sexual\_orientation* are appearing. An example from the binary version, which is offensive, but it does not contain hate speech content: An example from the multi-label version, which contains hate speech content towards women (gender): ### Data Fields Ethos Binary: * 'text': a 'string' feature containing the text of the comment. * 'label': a classification label, with possible values including 'no\_hate\_speech', 'hate\_speech'. Ethis Multilabel: * 'text': a 'string' feature containing the text of the comment. * 'violence': a classification label, with possible values including 'not\_violent', 'violent'. * 'directed\_vs\_generalized': a classification label, with possible values including 'generalized', 'directed'. * 'gender': a classification label, with possible values including 'false', 'true'. 
* 'race': a classification label, with possible values including 'false', 'true'. * 'national\_origin': a classification label, with possible values including 'false', 'true'. * 'disability': a classification label, with possible values including 'false', 'true'. * 'religion': a classification label, with possible values including 'false', 'true'. * 'sexual\_orientation': a classification label, with possible values including 'false', 'true'. ### Data Splits The data is split into binary and multilabel. Multilabel is a subset of the binary version. Instances: binary, Labels: 998 Instances: multilabel, Labels: 433 Dataset Creation ---------------- ### Curation Rationale The dataset was build by gathering online comments in Youtube videos and reddit comments, from videos and subreddits which may attract hate speech content. ### Source Data #### Initial Data Collection and Normalization The initial data we used are from the hatebusters platform: Original data used, but they were not included in this dataset #### Who are the source language producers? The language producers are users of reddit and Youtube. More informations can be found in this paper: ETHOS: an Online Hate Speech Detection Dataset ### Annotations #### Annotation process The annotation process is detailed in the third section of this paper: ETHOS: an Online Hate Speech Detection Dataset #### Who are the annotators? Originally anotated by Ioannis Mollas and validated through the Figure8 platform (APEN). ### Personal and Sensitive Information No personal and sensitive information included in the dataset. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset This dataset will help on the evolution of the automated hate speech detection tools. Those tools have great impact on preventing social issues. ### Discussion of Biases This dataset tries to be unbiased towards its classes and labels. ### Other Known Limitations The dataset is relatively small and should be used combined with larger datasets. Additional Information ---------------------- ### Dataset Curators The dataset was initially created by Intelligent Systems Lab. ### Licensing Information The licensing status of the datasets is GNU GPLv3. ### Contributions Thanks to @iamollas for adding this dataset.
[ "### Dataset Summary\n\n\nETHOS: onlinE haTe speecH detectiOn dataSet. This repository contains a dataset for hate speech detection on social media platforms, called Ethos. There are two variations of the dataset:\n\n\n* Ethos\\_Dataset\\_Binary: contains 998 comments in the dataset alongside with a label about hate speech *presence* or *absence*. 565 of them do not contain hate speech, while the rest of them, 433, contain.\n* Ethos\\_Dataset\\_Multi\\_Label which contains 8 labels for the 433 comments with hate speech content. These labels are *violence* (if it incites (1) or not (0) violence), *directed\\_vs\\_general* (if it is directed to a person (1) or a group (0)), and 6 labels about the category of hate speech like, *gender*, *race*, *national\\_origin*, *disability*, *religion* and *sexual\\_orientation*.\n\n\n*Ethos /ˈiːθɒs/*\nis a Greek word meaning “character” that is used to describe the guiding beliefs or ideals that characterize a community, nation, or ideology. The Greeks also used this word to refer to the power of music to influence emotions, behaviors, and even morals.", "### Supported Tasks and Leaderboards\n\n\n* 'text-classification-other-Hate Speech Detection', 'sentiment-classification','multi-label-classification': The dataset can be used to train a model for hate speech detection. Moreover, it can be used as a benchmark dataset for multi label classification algorithms.", "### Languages\n\n\nThe text in the dataset is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point in the binary version comprises a comment, with a 'text' containing the text and a 'label' describing if a comment contains hate speech content (1 - hate-speech) or not (0 - non-hate-speech). In the multilabel version more labels like *violence* (if it incites (1) or not (0) violence), *directed\\_vs\\_general* (if it is directed to a person (1) or a group (0)), and 6 labels about the category of hate speech like, *gender*, *race*, *national\\_origin*, *disability*, *religion* and *sexual\\_orientation* are appearing.\n\n\nAn example from the binary version, which is offensive, but it does not contain hate speech content:\n\n\nAn example from the multi-label version, which contains hate speech content towards women (gender):", "### Data Fields\n\n\nEthos Binary:\n\n\n* 'text': a 'string' feature containing the text of the comment.\n* 'label': a classification label, with possible values including 'no\\_hate\\_speech', 'hate\\_speech'.\n\n\nEthis Multilabel:\n\n\n* 'text': a 'string' feature containing the text of the comment.\n* 'violence': a classification label, with possible values including 'not\\_violent', 'violent'.\n* 'directed\\_vs\\_generalized': a classification label, with possible values including 'generalized', 'directed'.\n* 'gender': a classification label, with possible values including 'false', 'true'.\n* 'race': a classification label, with possible values including 'false', 'true'.\n* 'national\\_origin': a classification label, with possible values including 'false', 'true'.\n* 'disability': a classification label, with possible values including 'false', 'true'.\n* 'religion': a classification label, with possible values including 'false', 'true'.\n* 'sexual\\_orientation': a classification label, with possible values including 'false', 'true'.", "### Data Splits\n\n\nThe data is split into binary and multilabel. 
Multilabel is a subset of the binary version.\n\n\nInstances: binary, Labels: 998\nInstances: multilabel, Labels: 433\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was build by gathering online comments in Youtube videos and reddit comments, from videos and subreddits which may attract hate speech content.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe initial data we used are from the hatebusters platform: Original data used, but they were not included in this dataset", "#### Who are the source language producers?\n\n\nThe language producers are users of reddit and Youtube. More informations can be found in this paper: ETHOS: an Online Hate Speech Detection Dataset", "### Annotations", "#### Annotation process\n\n\nThe annotation process is detailed in the third section of this paper: ETHOS: an Online Hate Speech Detection Dataset", "#### Who are the annotators?\n\n\nOriginally anotated by Ioannis Mollas and validated through the Figure8 platform (APEN).", "### Personal and Sensitive Information\n\n\nNo personal and sensitive information included in the dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThis dataset will help on the evolution of the automated hate speech detection tools. Those tools have great impact on preventing social issues.", "### Discussion of Biases\n\n\nThis dataset tries to be unbiased towards its classes and labels.", "### Other Known Limitations\n\n\nThe dataset is relatively small and should be used combined with larger datasets.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Intelligent Systems Lab.", "### Licensing Information\n\n\nThe licensing status of the datasets is GNU GPLv3.", "### Contributions\n\n\nThanks to @iamollas for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-found #language_creators-other #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-agpl-3.0 #Hate Speech Detection #arxiv-2006.08328 #region-us \n", "### Dataset Summary\n\n\nETHOS: onlinE haTe speecH detectiOn dataSet. This repository contains a dataset for hate speech detection on social media platforms, called Ethos. There are two variations of the dataset:\n\n\n* Ethos\\_Dataset\\_Binary: contains 998 comments in the dataset alongside with a label about hate speech *presence* or *absence*. 565 of them do not contain hate speech, while the rest of them, 433, contain.\n* Ethos\\_Dataset\\_Multi\\_Label which contains 8 labels for the 433 comments with hate speech content. These labels are *violence* (if it incites (1) or not (0) violence), *directed\\_vs\\_general* (if it is directed to a person (1) or a group (0)), and 6 labels about the category of hate speech like, *gender*, *race*, *national\\_origin*, *disability*, *religion* and *sexual\\_orientation*.\n\n\n*Ethos /ˈiːθɒs/*\nis a Greek word meaning “character” that is used to describe the guiding beliefs or ideals that characterize a community, nation, or ideology. The Greeks also used this word to refer to the power of music to influence emotions, behaviors, and even morals.", "### Supported Tasks and Leaderboards\n\n\n* 'text-classification-other-Hate Speech Detection', 'sentiment-classification','multi-label-classification': The dataset can be used to train a model for hate speech detection. Moreover, it can be used as a benchmark dataset for multi label classification algorithms.", "### Languages\n\n\nThe text in the dataset is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point in the binary version comprises a comment, with a 'text' containing the text and a 'label' describing if a comment contains hate speech content (1 - hate-speech) or not (0 - non-hate-speech). 
In the multilabel version more labels like *violence* (if it incites (1) or not (0) violence), *directed\\_vs\\_general* (if it is directed to a person (1) or a group (0)), and 6 labels about the category of hate speech like, *gender*, *race*, *national\\_origin*, *disability*, *religion* and *sexual\\_orientation* are appearing.\n\n\nAn example from the binary version, which is offensive, but it does not contain hate speech content:\n\n\nAn example from the multi-label version, which contains hate speech content towards women (gender):", "### Data Fields\n\n\nEthos Binary:\n\n\n* 'text': a 'string' feature containing the text of the comment.\n* 'label': a classification label, with possible values including 'no\\_hate\\_speech', 'hate\\_speech'.\n\n\nEthis Multilabel:\n\n\n* 'text': a 'string' feature containing the text of the comment.\n* 'violence': a classification label, with possible values including 'not\\_violent', 'violent'.\n* 'directed\\_vs\\_generalized': a classification label, with possible values including 'generalized', 'directed'.\n* 'gender': a classification label, with possible values including 'false', 'true'.\n* 'race': a classification label, with possible values including 'false', 'true'.\n* 'national\\_origin': a classification label, with possible values including 'false', 'true'.\n* 'disability': a classification label, with possible values including 'false', 'true'.\n* 'religion': a classification label, with possible values including 'false', 'true'.\n* 'sexual\\_orientation': a classification label, with possible values including 'false', 'true'.", "### Data Splits\n\n\nThe data is split into binary and multilabel. Multilabel is a subset of the binary version.\n\n\nInstances: binary, Labels: 998\nInstances: multilabel, Labels: 433\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was build by gathering online comments in Youtube videos and reddit comments, from videos and subreddits which may attract hate speech content.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe initial data we used are from the hatebusters platform: Original data used, but they were not included in this dataset", "#### Who are the source language producers?\n\n\nThe language producers are users of reddit and Youtube. More informations can be found in this paper: ETHOS: an Online Hate Speech Detection Dataset", "### Annotations", "#### Annotation process\n\n\nThe annotation process is detailed in the third section of this paper: ETHOS: an Online Hate Speech Detection Dataset", "#### Who are the annotators?\n\n\nOriginally anotated by Ioannis Mollas and validated through the Figure8 platform (APEN).", "### Personal and Sensitive Information\n\n\nNo personal and sensitive information included in the dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThis dataset will help on the evolution of the automated hate speech detection tools. 
Those tools have great impact on preventing social issues.", "### Discussion of Biases\n\n\nThis dataset tries to be unbiased towards its classes and labels.", "### Other Known Limitations\n\n\nThe dataset is relatively small and should be used combined with larger datasets.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe dataset was initially created by Intelligent Systems Lab.", "### Licensing Information\n\n\nThe licensing status of the datasets is GNU GPLv3.", "### Contributions\n\n\nThanks to @iamollas for adding this dataset." ]
[ 136, 314, 79, 21, 204, 300, 56, 39, 4, 36, 43, 5, 32, 31, 29, 37, 25, 33, 18, 23, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-found #language_creators-other #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-agpl-3.0 #Hate Speech Detection #arxiv-2006.08328 #region-us \n### Dataset Summary\n\n\nETHOS: onlinE haTe speecH detectiOn dataSet. This repository contains a dataset for hate speech detection on social media platforms, called Ethos. There are two variations of the dataset:\n\n\n* Ethos\\_Dataset\\_Binary: contains 998 comments in the dataset alongside with a label about hate speech *presence* or *absence*. 565 of them do not contain hate speech, while the rest of them, 433, contain.\n* Ethos\\_Dataset\\_Multi\\_Label which contains 8 labels for the 433 comments with hate speech content. These labels are *violence* (if it incites (1) or not (0) violence), *directed\\_vs\\_general* (if it is directed to a person (1) or a group (0)), and 6 labels about the category of hate speech like, *gender*, *race*, *national\\_origin*, *disability*, *religion* and *sexual\\_orientation*.\n\n\n*Ethos /ˈiːθɒs/*\nis a Greek word meaning “character” that is used to describe the guiding beliefs or ideals that characterize a community, nation, or ideology. The Greeks also used this word to refer to the power of music to influence emotions, behaviors, and even morals.", "passage: ### Supported Tasks and Leaderboards\n\n\n* 'text-classification-other-Hate Speech Detection', 'sentiment-classification','multi-label-classification': The dataset can be used to train a model for hate speech detection. Moreover, it can be used as a benchmark dataset for multi label classification algorithms.### Languages\n\n\nThe text in the dataset is in English.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA typical data point in the binary version comprises a comment, with a 'text' containing the text and a 'label' describing if a comment contains hate speech content (1 - hate-speech) or not (0 - non-hate-speech). In the multilabel version more labels like *violence* (if it incites (1) or not (0) violence), *directed\\_vs\\_general* (if it is directed to a person (1) or a group (0)), and 6 labels about the category of hate speech like, *gender*, *race*, *national\\_origin*, *disability*, *religion* and *sexual\\_orientation* are appearing.\n\n\nAn example from the binary version, which is offensive, but it does not contain hate speech content:\n\n\nAn example from the multi-label version, which contains hate speech content towards women (gender):" ]
058695fbd683917c61e2cade090e6c44e85ea0c7
# Dataset Card for the RegIR datasets

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://archive.org/details/eacl2021_regir_datasets
- **Repository:** https://archive.org/details/eacl2021_regir_datasets
- **Paper:** https://arxiv.org/abs/2101.10726
- **Leaderboard:** N/A
- **Point of Contact:** [Ilias Chalkidis](mailto:ihalk@aueb.gr)

### Dataset Summary

The European Union (EU) has a legislation scheme analogous to regulatory compliance for organizations. According to the Treaty on the Functioning of the European Union (TFEU), all published EU directives must take effect at the national level. Thus, all EU member states must adopt a law to transpose a newly issued directive within the period set by the directive (typically 2 years).

Here, we have two datasets, EU2UK and UK2EU, containing EU directives and UK regulations, which can serve both as queries and documents under the ground-truth assumption that a UK law is relevant to the EU directives it transposes and vice versa.

### Supported Tasks and Leaderboards

The dataset supports:

**EU2UK** (`eu2uk`): Given an EU directive *Q*, retrieve the set of relevant documents from the pool of all available UK regulations. Relevant documents are those that transpose the EU directive (*Q*).

**UK2EU** (`uk2eu`): Given a UK regulation *Q*, retrieve the set of relevant documents from the pool of all available EU directives. Relevant documents are those that are being transposed by the UK regulation (*Q*).

### Languages

All documents are written in English.

## Dataset Structure

### Data Instances

```json
{
  "document_id": "31977L0794",
  "publication_year": "1977",
  "text": "Commission Directive 77/794/EEC ... of agricultural levies and customs duties",
  "relevant_documents": ["UKPGA19800048", "UKPGA19770036"]
}
```

### Data Fields

The following data fields are provided for query documents (`train`, `dev`, `test`):

- `document_id`: (**str**) The ID of the document.
- `publication_year`: (**str**) The publication year of the document.
- `text`: (**str**) The text of the document.
- `relevant_documents`: (**List[str]**) The list of relevant documents, as represented by their `document_id`.

The following data fields are provided for corpus documents (`corpus`):

- `document_id`: (**str**) The ID of the document.
- `publication_year`: (**str**) The publication year of the document.
- `text`: (**str**) The text of the document.
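Putting these fields together, and before turning to the split statistics below, here is a minimal usage sketch: it loads the `eu2uk` configuration with the Hugging Face `datasets` library and ranks the corpus for a single query with a plain TF-IDF baseline (scikit-learn). The loader id (`eu_regulatory_ir`) and the split names (`train`, `validation`, `test`, `uk_corpus`) follow the configuration metadata of this card; the TF-IDF ranking is only an illustrative baseline, not one of the retrieval models studied in the paper.

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

eu2uk = load_dataset("eu_regulatory_ir", "eu2uk")
queries = eu2uk["train"]        # EU directives used as queries
corpus = eu2uk["uk_corpus"]     # pool of 52,515 UK regulations

# Rank the UK corpus for the first query with TF-IDF cosine similarity.
vectorizer = TfidfVectorizer(stop_words="english", max_features=50_000)
corpus_matrix = vectorizer.fit_transform(corpus["text"])
query_vector = vectorizer.transform([queries[0]["text"]])
scores = linear_kernel(query_vector, corpus_matrix).ravel()

top_ids = [corpus[int(i)]["document_id"] for i in scores.argsort()[::-1][:10]]
print("query:", queries[0]["document_id"])
print("gold: ", queries[0]["relevant_documents"])
print("top10:", top_ids)
```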
### Data Splits

#### EU2UK dataset

| Split       | No of Queries | Avg. relevant documents |
| ----------- | ------------- | ----------------------- |
| Train       | 1,400         | 1.79                    |
| Development | 300           | 2.09                    |
| Test        | 300           | 1.74                    |

Document Pool (Corpus): 52,515 UK regulations

#### UK2EU dataset

| Split       | No of Queries | Avg. relevant documents |
| ----------- | ------------- | ----------------------- |
| Train       | 1,500         | 1.90                    |
| Development | 300           | 1.46                    |
| Test        | 300           | 1.29                    |

Document Pool (Corpus): 3,930 EU directives

## Dataset Creation

### Curation Rationale

The dataset was curated by Chalkidis et al. (2021).
The transposition pairs are made publicly available by the Publications Office of the EU (https://publications.europa.eu/en).

### Source Data

#### Initial Data Collection and Normalization

The original data are available at the EUR-Lex portal (https://eur-lex.europa.eu) and Legislation.GOV.UK (http://legislation.gov.uk/) in an unprocessed format.
The transposition pairs are provided by the EU member states (in our case, the UK) and were downloaded from the SPARQL endpoint of the Publications Office of the EU (http://publications.europa.eu/webapi/rdf/sparql).
For more information on the dataset curation, read Chalkidis et al. (2021).

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

* The original data are available at the EUR-Lex portal (https://eur-lex.europa.eu) and Legislation.GOV.UK (http://legislation.gov.uk/) in an unprocessed format.
* The transposition pairs are provided by the EU member states (in our case, the UK) and were downloaded from the SPARQL endpoint of the Publications Office of the EU (http://publications.europa.eu/webapi/rdf/sparql).

#### Who are the annotators?

Publications Office of the EU (https://publications.europa.eu/en)

### Personal and Sensitive Information

The dataset does not include personal or sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Chalkidis et al. (2021)

### Licensing Information

**EU Data**

© European Union, 1998-2021

The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.

The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.

Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html

**UK Data**

You are encouraged to use and re-use the Information that is available under this licence freely and flexibly, with only a few conditions.

You are free to:
- copy, publish, distribute and transmit the Information;
- adapt the Information;
- exploit the Information commercially and non-commercially, for example by combining it with other Information, or by including it in your own product or application.
You must (where you do any of the above): acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/.

### Citation Information

*Ilias Chalkidis, Manos Fergadiotis, Nikos Manginas, Eva Katakalou and Prodromos Malakasiotis.*
*Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations*
*Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021). Online. 2021*

```
@inproceedings{chalkidis-etal-2021-regir,
    title = "Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations",
    author = "Chalkidis, Ilias and Fergadiotis, Manos and Manginas, Nikos and Katakalou, Eva and Malakasiotis, Prodromos",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2101.10726",
}
```

### Contributions

Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
eu_regulatory_ir
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "document-to-document-retrieval", "arxiv:2101.10726", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "the RegIR datasets", "tags": ["document-to-document-retrieval"], "dataset_info": [{"config_name": "eu2uk", "features": [{"name": "document_id", "dtype": "string"}, {"name": "publication_year", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "relevant_documents", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 20665038, "num_examples": 1400}, {"name": "test", "num_bytes": 8844145, "num_examples": 300}, {"name": "validation", "num_bytes": 5852814, "num_examples": 300}, {"name": "uk_corpus", "num_bytes": 502468359, "num_examples": 52515}], "download_size": 119685577, "dataset_size": 537830356}, {"config_name": "uk2eu", "features": [{"name": "document_id", "dtype": "string"}, {"name": "publication_year", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "relevant_documents", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 55144655, "num_examples": 1500}, {"name": "test", "num_bytes": 14810460, "num_examples": 300}, {"name": "validation", "num_bytes": 15175644, "num_examples": 300}, {"name": "eu_corpus", "num_bytes": 57212422, "num_examples": 3930}], "download_size": 31835104, "dataset_size": 142343181}]}
2024-01-18T11:03:21+00:00
[ "2101.10726" ]
[ "en" ]
TAGS #task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #document-to-document-retrieval #arxiv-2101.10726 #region-us
Dataset Card for the RegIR datasets =================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: N/A * Point of Contact: Ilias Chalkidis ### Dataset Summary The European Union (EU) has a legislation scheme analogous to regulatory compliance for organizations. According to the Treaty on the Functioning of the European Union (TFEU), all published EU directives must take effect at the national level. Thus, all EU member states must adopt a law to transpose a newly issued directive within the period set by the directive (typically 2 years). Here, we have two datasets, EU2UK and UK2EU, containing EU directives and UK regulations, which can serve both as queries and documents under the ground truth assumption that a UK law is relevant to the EU directives it transposes and vice versa. ### Supported Tasks and Leaderboards The dataset supports: EU2UK ('eu2uk'): Given an EU directive *Q*, retrieve the set of relevant documents from the pool of all available UK regulations. Relevant documents are those that transpose the EU directive (*Q*). UK2EU ('uk2eu'): Given a UK regulation *Q*, retrieve the set of relevant documents from the pool of all available EU directives. Relevant documents are those that are being transposed by the UK regulations (*Q*). ### Languages All documents are written in English. Dataset Structure ----------------- ### Data Instances ### Data Fields The following data fields are provided for query documents ('train', 'dev', 'test'): 'document\_id': (str) The ID of the document. 'publication\_year': (str) The publication year of the document. 'text': (str) The text of the document. 'relevant\_documents': (List[str]) The list of relevant documents, as represented by their 'document\_id'. The following data fields are provided for corpus documents ('corpus'): 'document\_id': (str) The ID of the document. 'publication\_year': (str) The publication year of the document. 'text': (str) The text of the document.\ ### Data Splits #### EU2UK dataset Split: Train, No of Queries: 1,400, Avg. relevant documents: 1.79 Split: Development, No of Queries: 300, Avg. relevant documents: 2.09 Split: Test, No of Queries: 300, Avg. relevant documents: 1.74 Split: Document Pool (Corpus): 52,515 UK regulations, No of Queries: , Avg. relevant documents: #### UK2EU dataset Split: Train, No of Queries: 1,500, Avg. relevant documents: 1.90 Split: Development, No of Queries: 300, Avg. relevant documents: 1.46 Split: Test, No of Queries: 300, Avg. relevant documents: 1.29 Split: Document Pool (Corpus): 3,930 EU directives, No of Queries: , Avg. relevant documents: Dataset Creation ---------------- ### Curation Rationale The dataset was curated by Chalkidis et al. (2021). The transposition pairs are publicly available by the Publications Office of EU (URL ### Source Data #### Initial Data Collection and Normalization The original data are available at EUR-Lex portal (URL) and Legislation.GOV.UK (URL in an unprocessed format. 
The transposition pairs are provided by the EU member states (in our case, UK) and were downloaded from the SPARQL endpoint of the Publications Office of EU (URL For more information on the dataset curation, read Chalkidis et al. (2021). #### Who are the source language producers? ### Annotations #### Annotation process * The original data are available at EUR-Lex portal (URL) and Legislation.GOV.UK (URL in an unprocessed format. * The transposition pairs are provided by the EU member states (in our case, UK) and were downloaded from the SPARQL endpoint of the Publications Office of EU (URL #### Who are the annotators? Publications Office of EU (URL ### Personal and Sensitive Information The dataset does not include personal or sensitive information. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators Chalkidis et al. (2021) ### Licensing Information EU Data © European Union, 1998-2021 The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes. The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence​​ . This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. Source: URL Read more: URL UK Data You are encouraged to use and re-use the Information that is available under this licence freely and flexibly, with only a few conditions. You are free to: * copy, publish, distribute and transmit the Information; * adapt the Information; * exploit the Information commercially and non-commercially for example, by combining it with other Information, or by including it in your own product or application. You must (where you do any of the above): acknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: URL *Ilias Chalkidis, Manos Fergadiotis, Nikos Manginas, Eva Katakalou and Prodromos Malakasiotis.* *Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations* *Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021). Online. 2021* ### Contributions Thanks to @iliaschalkidis for adding this dataset.
[ "### Dataset Summary\n\n\nThe European Union (EU) has a legislation scheme analogous to regulatory compliance for organizations. According to the Treaty on the Functioning of the European Union (TFEU), all published EU directives must take effect at the national level. Thus, all EU member states must adopt a law to transpose a newly issued directive within the period set by the directive (typically 2 years).\n\n\nHere, we have two datasets, EU2UK and UK2EU, containing EU directives and UK regulations, which can serve both as queries and documents under the ground truth assumption that a UK law is relevant to the EU directives it transposes and vice versa.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports:\n\n\nEU2UK ('eu2uk'): Given an EU directive *Q*, retrieve the set of relevant documents from the pool of all available UK regulations. Relevant documents are those that transpose the EU directive (*Q*).\n\n\nUK2EU ('uk2eu'): Given a UK regulation *Q*, retrieve the set of relevant documents from the pool of all available EU directives. Relevant documents are those that are being transposed by the UK regulations (*Q*).", "### Languages\n\n\nAll documents are written in English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\nThe following data fields are provided for query documents ('train', 'dev', 'test'):\n\n\n'document\\_id': (str) The ID of the document. \n\n'publication\\_year': (str) The publication year of the document. \n\n'text': (str) The text of the document. \n\n'relevant\\_documents': (List[str]) The list of relevant documents, as represented by their 'document\\_id'.\n\n\nThe following data fields are provided for corpus documents ('corpus'):\n\n\n'document\\_id': (str) The ID of the document. \n\n'publication\\_year': (str) The publication year of the document. \n\n'text': (str) The text of the document.\\", "### Data Splits", "#### EU2UK dataset\n\n\nSplit: Train, No of Queries: 1,400, Avg. relevant documents: 1.79\nSplit: Development, No of Queries: 300, Avg. relevant documents: 2.09\nSplit: Test, No of Queries: 300, Avg. relevant documents: 1.74\nSplit: Document Pool (Corpus): 52,515 UK regulations, No of Queries: , Avg. relevant documents:", "#### UK2EU dataset\n\n\nSplit: Train, No of Queries: 1,500, Avg. relevant documents: 1.90\nSplit: Development, No of Queries: 300, Avg. relevant documents: 1.46\nSplit: Test, No of Queries: 300, Avg. relevant documents: 1.29\nSplit: Document Pool (Corpus): 3,930 EU directives, No of Queries: , Avg. relevant documents: \n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by Chalkidis et al. (2021). \n\nThe transposition pairs are publicly available by the Publications Office of EU (URL", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe original data are available at EUR-Lex portal (URL) and Legislation.GOV.UK (URL in an unprocessed format. \n\nThe transposition pairs are provided by the EU member states (in our case, UK) and were downloaded from the SPARQL endpoint of the Publications Office of EU (URL\nFor more information on the dataset curation, read Chalkidis et al. 
(2021).", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\n* The original data are available at EUR-Lex portal (URL) and Legislation.GOV.UK (URL in an unprocessed format.\n* The transposition pairs are provided by the EU member states (in our case, UK) and were downloaded from the SPARQL endpoint of the Publications Office of EU (URL", "#### Who are the annotators?\n\n\nPublications Office of EU (URL", "### Personal and Sensitive Information\n\n\nThe dataset does not include personal or sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nChalkidis et al. (2021)", "### Licensing Information\n\n\nEU Data\n\n\n© European Union, 1998-2021\n\n\nThe Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.\n\n\nThe copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence​​ . This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.\n\n\nSource: URL \n\nRead more: URL\n\n\nUK Data\n\n\nYou are encouraged to use and re-use the Information that is available under this licence freely and flexibly, with only a few conditions.\n\n\nYou are free to:\n\n\n* copy, publish, distribute and transmit the Information;\n* adapt the Information;\n* exploit the Information commercially and non-commercially for example, by combining it with other Information, or by including it in your own product or application.\n\n\nYou must (where you do any of the above):\n\n\nacknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: URL\n\n\n*Ilias Chalkidis, Manos Fergadiotis, Nikos Manginas, Eva Katakalou and Prodromos Malakasiotis.*\n*Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations*\n*Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021). Online. 2021*", "### Contributions\n\n\nThanks to @iliaschalkidis for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #document-to-document-retrieval #arxiv-2101.10726 #region-us \n", "### Dataset Summary\n\n\nThe European Union (EU) has a legislation scheme analogous to regulatory compliance for organizations. According to the Treaty on the Functioning of the European Union (TFEU), all published EU directives must take effect at the national level. Thus, all EU member states must adopt a law to transpose a newly issued directive within the period set by the directive (typically 2 years).\n\n\nHere, we have two datasets, EU2UK and UK2EU, containing EU directives and UK regulations, which can serve both as queries and documents under the ground truth assumption that a UK law is relevant to the EU directives it transposes and vice versa.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports:\n\n\nEU2UK ('eu2uk'): Given an EU directive *Q*, retrieve the set of relevant documents from the pool of all available UK regulations. Relevant documents are those that transpose the EU directive (*Q*).\n\n\nUK2EU ('uk2eu'): Given a UK regulation *Q*, retrieve the set of relevant documents from the pool of all available EU directives. Relevant documents are those that are being transposed by the UK regulations (*Q*).", "### Languages\n\n\nAll documents are written in English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\nThe following data fields are provided for query documents ('train', 'dev', 'test'):\n\n\n'document\\_id': (str) The ID of the document. \n\n'publication\\_year': (str) The publication year of the document. \n\n'text': (str) The text of the document. \n\n'relevant\\_documents': (List[str]) The list of relevant documents, as represented by their 'document\\_id'.\n\n\nThe following data fields are provided for corpus documents ('corpus'):\n\n\n'document\\_id': (str) The ID of the document. \n\n'publication\\_year': (str) The publication year of the document. \n\n'text': (str) The text of the document.\\", "### Data Splits", "#### EU2UK dataset\n\n\nSplit: Train, No of Queries: 1,400, Avg. relevant documents: 1.79\nSplit: Development, No of Queries: 300, Avg. relevant documents: 2.09\nSplit: Test, No of Queries: 300, Avg. relevant documents: 1.74\nSplit: Document Pool (Corpus): 52,515 UK regulations, No of Queries: , Avg. relevant documents:", "#### UK2EU dataset\n\n\nSplit: Train, No of Queries: 1,500, Avg. relevant documents: 1.90\nSplit: Development, No of Queries: 300, Avg. relevant documents: 1.46\nSplit: Test, No of Queries: 300, Avg. relevant documents: 1.29\nSplit: Document Pool (Corpus): 3,930 EU directives, No of Queries: , Avg. relevant documents: \n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by Chalkidis et al. (2021). \n\nThe transposition pairs are publicly available by the Publications Office of EU (URL", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe original data are available at EUR-Lex portal (URL) and Legislation.GOV.UK (URL in an unprocessed format. \n\nThe transposition pairs are provided by the EU member states (in our case, UK) and were downloaded from the SPARQL endpoint of the Publications Office of EU (URL\nFor more information on the dataset curation, read Chalkidis et al. 
(2021).", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\n* The original data are available at EUR-Lex portal (URL) and Legislation.GOV.UK (URL in an unprocessed format.\n* The transposition pairs are provided by the EU member states (in our case, UK) and were downloaded from the SPARQL endpoint of the Publications Office of EU (URL", "#### Who are the annotators?\n\n\nPublications Office of EU (URL", "### Personal and Sensitive Information\n\n\nThe dataset does not include personal or sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nChalkidis et al. (2021)", "### Licensing Information\n\n\nEU Data\n\n\n© European Union, 1998-2021\n\n\nThe Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.\n\n\nThe copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence​​ . This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.\n\n\nSource: URL \n\nRead more: URL\n\n\nUK Data\n\n\nYou are encouraged to use and re-use the Information that is available under this licence freely and flexibly, with only a few conditions.\n\n\nYou are free to:\n\n\n* copy, publish, distribute and transmit the Information;\n* adapt the Information;\n* exploit the Information commercially and non-commercially for example, by combining it with other Information, or by including it in your own product or application.\n\n\nYou must (where you do any of the above):\n\n\nacknowledge the source of the Information in your product or application by including or linking to any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence: URL\n\n\n*Ilias Chalkidis, Manos Fergadiotis, Nikos Manginas, Eva Katakalou and Prodromos Malakasiotis.*\n*Regulatory Compliance through Doc2Doc Information Retrieval: A case study in EU/UK legislation where text similarity has limitations*\n*Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021). Online. 2021*", "### Contributions\n\n\nThanks to @iliaschalkidis for adding this dataset." ]
[ 111, 152, 127, 18, 6, 173, 5, 93, 100, 41, 4, 100, 10, 5, 77, 16, 29, 7, 8, 14, 14, 385, 18 ]
[ "passage: TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #document-to-document-retrieval #arxiv-2101.10726 #region-us \n### Dataset Summary\n\n\nThe European Union (EU) has a legislation scheme analogous to regulatory compliance for organizations. According to the Treaty on the Functioning of the European Union (TFEU), all published EU directives must take effect at the national level. Thus, all EU member states must adopt a law to transpose a newly issued directive within the period set by the directive (typically 2 years).\n\n\nHere, we have two datasets, EU2UK and UK2EU, containing EU directives and UK regulations, which can serve both as queries and documents under the ground truth assumption that a UK law is relevant to the EU directives it transposes and vice versa.### Supported Tasks and Leaderboards\n\n\nThe dataset supports:\n\n\nEU2UK ('eu2uk'): Given an EU directive *Q*, retrieve the set of relevant documents from the pool of all available UK regulations. Relevant documents are those that transpose the EU directive (*Q*).\n\n\nUK2EU ('uk2eu'): Given a UK regulation *Q*, retrieve the set of relevant documents from the pool of all available EU directives. Relevant documents are those that are being transposed by the UK regulations (*Q*).### Languages\n\n\nAll documents are written in English.\n\n\nDataset Structure\n-----------------### Data Instances", "passage: ### Data Fields\n\n\nThe following data fields are provided for query documents ('train', 'dev', 'test'):\n\n\n'document\\_id': (str) The ID of the document. \n\n'publication\\_year': (str) The publication year of the document. \n\n'text': (str) The text of the document. \n\n'relevant\\_documents': (List[str]) The list of relevant documents, as represented by their 'document\\_id'.\n\n\nThe following data fields are provided for corpus documents ('corpus'):\n\n\n'document\\_id': (str) The ID of the document. \n\n'publication\\_year': (str) The publication year of the document. \n\n'text': (str) The text of the document.\\### Data Splits#### EU2UK dataset\n\n\nSplit: Train, No of Queries: 1,400, Avg. relevant documents: 1.79\nSplit: Development, No of Queries: 300, Avg. relevant documents: 2.09\nSplit: Test, No of Queries: 300, Avg. relevant documents: 1.74\nSplit: Document Pool (Corpus): 52,515 UK regulations, No of Queries: , Avg. relevant documents:#### UK2EU dataset\n\n\nSplit: Train, No of Queries: 1,500, Avg. relevant documents: 1.90\nSplit: Development, No of Queries: 300, Avg. relevant documents: 1.46\nSplit: Test, No of Queries: 300, Avg. relevant documents: 1.29\nSplit: Document Pool (Corpus): 3,930 EU directives, No of Queries: , Avg. relevant documents: \n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThe dataset was curated by Chalkidis et al. (2021). \n\nThe transposition pairs are publicly available by the Publications Office of EU (URL### Source Data#### Initial Data Collection and Normalization\n\n\nThe original data are available at EUR-Lex portal (URL) and Legislation.GOV.UK (URL in an unprocessed format. \n\nThe transposition pairs are provided by the EU member states (in our case, UK) and were downloaded from the SPARQL endpoint of the Publications Office of EU (URL\nFor more information on the dataset curation, read Chalkidis et al. 
(2021).#### Who are the source language producers?### Annotations#### Annotation process\n\n\n* The original data are available at EUR-Lex portal (URL) and Legislation.GOV.UK (URL in an unprocessed format.\n* The transposition pairs are provided by the EU member states (in our case, UK) and were downloaded from the SPARQL endpoint of the Publications Office of EU (URL#### Who are the annotators?\n\n\nPublications Office of EU (URL### Personal and Sensitive Information\n\n\nThe dataset does not include personal or sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases" ]
8fffa60f44dfc9195336b6de57642143b0a747fa
# Dataset Card for the EUR-Lex dataset

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
- **Repository:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
- **Paper:** https://www.aclweb.org/anthology/P19-1636/
- **Leaderboard:** N/A
- **Point of Contact:** [Ilias Chalkidis](mailto:ihalk@aueb.gr)

### Dataset Summary

EURLEX57K can be viewed as an improved version of the dataset released by Mencia and Furnkranz (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research; that earlier dataset is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old.
EURLEX57K contains 57k legislative documents in English from EUR-Lex (https://eur-lex.europa.eu) with an average length of 727 words. Each document contains three major zones:

- the header, which includes the title and name of the legal body enforcing the legal act;
- the recitals, which are legal background references; and
- the main body, usually organized in articles.

**Labeling / Annotation**

All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/).
While EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, of which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.

### Supported Tasks and Leaderboards

The dataset supports:

**Multi-label Text Classification:** Given the text of a document, a model predicts the relevant EUROVOC concepts.

**Few-shot and Zero-shot learning:** As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.

### Languages

All documents are written in English.
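As a minimal sketch of how the corpus can be used for the multi-label classification task described above, the snippet below loads it with the Hugging Face `datasets` library and binarizes the EUROVOC labels. The dataset identifier `eurlex` and the `eurlex57k` configuration name come from this card's metadata; depending on the `datasets` version, script-based datasets may additionally require `trust_remote_code=True`.

```python
from datasets import load_dataset
from sklearn.preprocessing import MultiLabelBinarizer

# Minimal sketch: load the corpus (train: 45,000 / validation: 6,000 / test: 6,000 documents).
dataset = load_dataset("eurlex", "eurlex57k")

# Each example carries `celex_id`, `title`, `text` and the multi-label target
# `eurovoc_concepts` (a list of EUROVOC concept identifiers).
example = dataset["train"][0]
print(example["celex_id"], example["eurovoc_concepts"])

# For multi-label classification, the label lists are commonly binarized:
mlb = MultiLabelBinarizer()
y_train = mlb.fit_transform(dataset["train"]["eurovoc_concepts"])
print(y_train.shape)  # (45000, number of distinct EUROVOC labels seen in training)
```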
## Dataset Structure

### Data Instances

```json
{
  "celex_id": "31979D0509",
  "title": "79/509/EEC: Council Decision of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain",
  "text": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
  "eurovoc_concepts": ["192", "2356", "2560", "862", "863"]
}
```

### Data Fields

The following data fields are provided for documents (`train`, `dev`, `test`):

`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`title`: (**str**) The title of the document.\
`text`: (**str**) The full content of each document, which is represented by its `header`, `recitals` and `main_body`.\
`eurovoc_concepts`: (**List[str]**) The relevant EUROVOC concepts (labels).

If you want to use the descriptors of EUROVOC concepts, similar to Chalkidis et al. (2020), please load: https://archive.org/download/EURLEX57K/eurovoc_concepts.jsonl

```python
import json

# Each line of eurovoc_concepts.jsonl holds one JSON record describing a EUROVOC concept.
with open('./eurovoc_concepts.jsonl') as jsonl_file:
    eurovoc_concepts = [json.loads(line) for line in jsonl_file]
```
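Assuming the descriptors file has been downloaded, the sketch below shows one way to map a document's `eurovoc_concepts` identifiers to human-readable descriptors. The JSONL field names `concept_id` and `label` are illustrative assumptions rather than a documented schema, so adjust them to the actual file contents.

```python
import json

# Illustrative sketch only: the keys "concept_id" and "label" are assumptions
# about eurovoc_concepts.jsonl and may need to be adapted to the real schema.
id2label = {}
with open("./eurovoc_concepts.jsonl") as jsonl_file:
    for line in jsonl_file:
        concept = json.loads(line)
        id2label[concept["concept_id"]] = concept["label"]

# Map a document's EUROVOC identifiers to descriptors, falling back to the
# raw identifier when a concept is missing from the mapping.
example = dataset["train"][0]  # `dataset` as loaded in the earlier snippet
readable_labels = [id2label.get(cid, cid) for cid in example["eurovoc_concepts"]]
print(readable_labels)
```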
### Data Splits

| Split       | No of Documents | Avg. words | Avg. labels |
| ----------- | --------------- | ---------- | ----------- |
| Train       | 45,000          | 729        | 5           |
| Development | 6,000           | 714        | 5           |
| Test        | 6,000           | 725        | 5           |

## Dataset Creation

### Curation Rationale

The dataset was curated by Chalkidis et al. (2019).\
The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).

### Source Data

#### Initial Data Collection and Normalization

The original data are available at the EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed format. The documents were downloaded from the EUR-Lex portal in HTML format. The relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql).

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

* The original documents are available at the EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed HTML format. The HTML code was stripped and the documents split into sections.
* The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).

#### Who are the annotators?

Publications Office of EU (https://publications.europa.eu/en)

### Personal and Sensitive Information

The dataset does not include personal or sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Chalkidis et al. (2019)

### Licensing Information

© European Union, 1998-2021

The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.

The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.

Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html

### Citation Information

*Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos.*
*Large-Scale Multi-Label Text Classification on EU Legislation.*
*Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019*

```
@inproceedings{chalkidis-etal-2019-large,
    title = "Large-Scale Multi-Label Text Classification on {EU} Legislation",
    author = "Chalkidis, Ilias  and
      Fergadiotis, Manos  and
      Malakasiotis, Prodromos  and
      Androutsopoulos, Ion",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P19-1636",
    doi = "10.18653/v1/P19-1636",
    pages = "6314--6322"
}
```

### Contributions

Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
eurlex
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "legal-topic-classification", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "paperswithcode_id": "eurlex57k", "pretty_name": "the EUR-Lex dataset", "tags": ["legal-topic-classification"], "dataset_info": {"features": [{"name": "celex_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "eurovoc_concepts", "sequence": "string"}], "config_name": "eurlex57k", "splits": [{"name": "train", "num_bytes": 167603718, "num_examples": 45000}, {"name": "test", "num_bytes": 22046706, "num_examples": 6000}, {"name": "validation", "num_bytes": 21942574, "num_examples": 6000}], "download_size": 50289403, "dataset_size": 211592998}}
2024-01-18T11:03:22+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #legal-topic-classification #region-us
Dataset Card for the EUR-Lex dataset ==================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: N/A * Point of Contact: Ilias Chalkidis ### Dataset Summary EURLEX57K can be viewed as an improved version of the dataset released by Mencia and Furnkranzand (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research, but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old. EURLEX57K contains 57k legislative documents in English from EUR-Lex (URL) with an average length of 727 words. Each document contains four major zones: * the header, which includes the title and name of the legal body enforcing the legal act; * the recitals, which are legal background references; and * the main body, usually organized in articles. Labeling / Annotation All the documents of the dataset have been annotated by the Publications Office of EU (URL with multiple concepts from EUROVOC (URL While EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, from which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362), and zero- shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively. ### Supported Tasks and Leaderboards The dataset supports: Multi-label Text Classification: Given the text of a document, a model predicts the relevant EUROVOC concepts. Few-shot and Zero-shot learning: As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362), and zero- shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively. ### Languages All documents are written in English. Dataset Structure ----------------- ### Data Instances ### Data Fields The following data fields are provided for documents ('train', 'dev', 'test'): 'celex\_id': (str) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR. 'title': (str) The title of the document. 'text': (str) The full content of each document, which is represented by its 'header', 'recitals' and 'main\_body'. 'eurovoc\_concepts': (List[str]) The relevant EUROVOC concepts (labels). If you want to use the descriptors of EUROVOC concepts, similar to Chalkidis et al. (2020), please load: URL ### Data Splits Dataset Creation ---------------- ### Curation Rationale The dataset was curated by Chalkidis et al. (2019). The documents have been annotated by the Publications Office of EU (URL ### Source Data #### Initial Data Collection and Normalization The original data are available at EUR-Lex portal (URL) in an unprocessed format. The documents were downloaded from EUR-Lex portal in HTML format. 
The relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (URL #### Who are the source language producers? ### Annotations #### Annotation process * The original documents are available at EUR-Lex portal (URL) in an unprocessed HTML format. The HTML code was striped and the documents split into sections. * The documents have been annotated by the Publications Office of EU (URL #### Who are the annotators? Publications Office of EU (URL ### Personal and Sensitive Information The dataset does not include personal or sensitive information. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators Chalkidis et al. (2019) ### Licensing Information © European Union, 1998-2021 The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes. The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. Source: URL Read more: URL *Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos.* *Large-Scale Multi-Label Text Classification on EU Legislation.* *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019* ### Contributions Thanks to @iliaschalkidis for adding this dataset.
[ "### Dataset Summary\n\n\nEURLEX57K can be viewed as an improved version of the dataset released by Mencia and Furnkranzand (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research, but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old.\nEURLEX57K contains 57k legislative documents in English from EUR-Lex (URL) with an average length of 727 words. Each document contains four major zones:\n\n\n* the header, which includes the title and name of the legal body enforcing the legal act;\n* the recitals, which are legal background references; and\n* the main body, usually organized in articles.\n\n\nLabeling / Annotation\n\n\nAll the documents of the dataset have been annotated by the Publications Office of EU (URL with multiple concepts from EUROVOC (URL\nWhile EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, from which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362), and zero- shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports:\n\n\nMulti-label Text Classification: Given the text of a document, a model predicts the relevant EUROVOC concepts.\n\n\nFew-shot and Zero-shot learning: As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362), and zero- shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.", "### Languages\n\n\nAll documents are written in English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\nThe following data fields are provided for documents ('train', 'dev', 'test'):\n\n\n'celex\\_id': (str) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR. \n\n'title': (str) The title of the document. \n\n'text': (str) The full content of each document, which is represented by its 'header', 'recitals' and 'main\\_body'. \n\n'eurovoc\\_concepts': (List[str]) The relevant EUROVOC concepts (labels).\n\n\nIf you want to use the descriptors of EUROVOC concepts, similar to Chalkidis et al. (2020), please load: URL", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by Chalkidis et al. (2019). \n\nThe documents have been annotated by the Publications Office of EU (URL", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe original data are available at EUR-Lex portal (URL) in an unprocessed format.\nThe documents were downloaded from EUR-Lex portal in HTML format.\nThe relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (URL", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\n* The original documents are available at EUR-Lex portal (URL) in an unprocessed HTML format. 
The HTML code was striped and the documents split into sections.\n* The documents have been annotated by the Publications Office of EU (URL", "#### Who are the annotators?\n\n\nPublications Office of EU (URL", "### Personal and Sensitive Information\n\n\nThe dataset does not include personal or sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nChalkidis et al. (2019)", "### Licensing Information\n\n\n© European Union, 1998-2021\n\n\nThe Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.\n\n\nThe copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.\n\n\nSource: URL \n\nRead more: URL\n\n\n*Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos.*\n*Large-Scale Multi-Label Text Classification on EU Legislation.*\n*Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019*", "### Contributions\n\n\nThanks to @iliaschalkidis for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #legal-topic-classification #region-us \n", "### Dataset Summary\n\n\nEURLEX57K can be viewed as an improved version of the dataset released by Mencia and Furnkranzand (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research, but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old.\nEURLEX57K contains 57k legislative documents in English from EUR-Lex (URL) with an average length of 727 words. Each document contains four major zones:\n\n\n* the header, which includes the title and name of the legal body enforcing the legal act;\n* the recitals, which are legal background references; and\n* the main body, usually organized in articles.\n\n\nLabeling / Annotation\n\n\nAll the documents of the dataset have been annotated by the Publications Office of EU (URL with multiple concepts from EUROVOC (URL\nWhile EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, from which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362), and zero- shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports:\n\n\nMulti-label Text Classification: Given the text of a document, a model predicts the relevant EUROVOC concepts.\n\n\nFew-shot and Zero-shot learning: As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362), and zero- shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.", "### Languages\n\n\nAll documents are written in English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\nThe following data fields are provided for documents ('train', 'dev', 'test'):\n\n\n'celex\\_id': (str) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR. \n\n'title': (str) The title of the document. \n\n'text': (str) The full content of each document, which is represented by its 'header', 'recitals' and 'main\\_body'. \n\n'eurovoc\\_concepts': (List[str]) The relevant EUROVOC concepts (labels).\n\n\nIf you want to use the descriptors of EUROVOC concepts, similar to Chalkidis et al. (2020), please load: URL", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by Chalkidis et al. (2019). \n\nThe documents have been annotated by the Publications Office of EU (URL", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe original data are available at EUR-Lex portal (URL) in an unprocessed format.\nThe documents were downloaded from EUR-Lex portal in HTML format.\nThe relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (URL", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\n* The original documents are available at EUR-Lex portal (URL) in an unprocessed HTML format. 
The HTML code was striped and the documents split into sections.\n* The documents have been annotated by the Publications Office of EU (URL", "#### Who are the annotators?\n\n\nPublications Office of EU (URL", "### Personal and Sensitive Information\n\n\nThe dataset does not include personal or sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nChalkidis et al. (2019)", "### Licensing Information\n\n\n© European Union, 1998-2021\n\n\nThe Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.\n\n\nThe copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.\n\n\nSource: URL \n\nRead more: URL\n\n\n*Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos.*\n*Large-Scale Multi-Label Text Classification on EU Legislation.*\n*Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019*", "### Contributions\n\n\nThanks to @iliaschalkidis for adding this dataset." ]
[ 97, 326, 120, 18, 6, 180, 11, 39, 4, 76, 10, 5, 59, 16, 29, 7, 8, 14, 15, 219, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #legal-topic-classification #region-us \n### Dataset Summary\n\n\nEURLEX57K can be viewed as an improved version of the dataset released by Mencia and Furnkranzand (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research, but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old.\nEURLEX57K contains 57k legislative documents in English from EUR-Lex (URL) with an average length of 727 words. Each document contains four major zones:\n\n\n* the header, which includes the title and name of the legal body enforcing the legal act;\n* the recitals, which are legal background references; and\n* the main body, usually organized in articles.\n\n\nLabeling / Annotation\n\n\nAll the documents of the dataset have been annotated by the Publications Office of EU (URL with multiple concepts from EUROVOC (URL\nWhile EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, from which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362), and zero- shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.", "passage: ### Supported Tasks and Leaderboards\n\n\nThe dataset supports:\n\n\nMulti-label Text Classification: Given the text of a document, a model predicts the relevant EUROVOC concepts.\n\n\nFew-shot and Zero-shot learning: As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362), and zero- shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.### Languages\n\n\nAll documents are written in English.\n\n\nDataset Structure\n-----------------### Data Instances### Data Fields\n\n\nThe following data fields are provided for documents ('train', 'dev', 'test'):\n\n\n'celex\\_id': (str) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR. \n\n'title': (str) The title of the document. \n\n'text': (str) The full content of each document, which is represented by its 'header', 'recitals' and 'main\\_body'. \n\n'eurovoc\\_concepts': (List[str]) The relevant EUROVOC concepts (labels).\n\n\nIf you want to use the descriptors of EUROVOC concepts, similar to Chalkidis et al. (2020), please load: URL### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThe dataset was curated by Chalkidis et al. (2019). \n\nThe documents have been annotated by the Publications Office of EU (URL### Source Data#### Initial Data Collection and Normalization\n\n\nThe original data are available at EUR-Lex portal (URL) in an unprocessed format.\nThe documents were downloaded from EUR-Lex portal in HTML format.\nThe relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (URL#### Who are the source language producers?### Annotations#### Annotation process\n\n\n* The original documents are available at EUR-Lex portal (URL) in an unprocessed HTML format. 
The HTML code was striped and the documents split into sections.\n* The documents have been annotated by the Publications Office of EU (URL#### Who are the annotators?\n\n\nPublications Office of EU (URL### Personal and Sensitive Information\n\n\nThe dataset does not include personal or sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nChalkidis et al. (2019)" ]