sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---
dd5650eb094112f8913c5c9f907e43008aeb52cf | From the Evaluating Student Writing Kaggle competition. | carbon12/evaluating_student_writing | [
"region:us"
] | 2022-03-13T05:16:30+00:00 | {} | 2022-03-13T13:03:06+00:00 |
c54df84f9a7566184d83c75d208a97e5aa5a77d3 | gj1997/trial2 | [
"region:us"
] | 2022-03-13T06:51:57+00:00 | {} | 2022-03-13T09:03:58+00:00 |
|
749b7eac6d013c77d95ba1b744bb88ac436ca48b | This dataset contains MFCC features extracted from 646 short speech audio clips | Parmann/speech_classification | [
"region:us"
] | 2022-03-13T08:30:16+00:00 | {} | 2022-03-13T08:32:04+00:00 |
088baa7f2aa235290fb8a35850cee1e70bd5ce25 | # Text-to-text format from SuperGLUE AXg
# Note that the RTE train and validation sets have been added
axg: DatasetDict({
test: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 356
})
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 2490
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 277
})
}) | stjokerli/TextToText_axg_seqio | [
"region:us"
] | 2022-03-13T10:08:17+00:00 | {} | 2022-04-04T09:24:18+00:00 |
aa9340e5512f9d1c196b34645346db83107a0cd3 | axb: DatasetDict({
test: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 1104
})
train: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 2490
})
validation: Dataset({
features: ['idx', 'inputs', 'targets'],
num_rows: 277
})
})
Text-to-text implementation for T5
Note that the RTE train and validation sets have been added | stjokerli/TextToText_axb_seqio | [
"region:us"
] | 2022-03-13T10:08:23+00:00 | {} | 2022-04-04T09:25:39+00:00 |
4dc1c8da193d078c788bccf7eebbc301c754b121 | [Needs More Information]
# Dataset Card for ph-en-text
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://huggingface.co/datasets/joypersicanon/ph-en-text/tree/main
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Mary Joy P. Canon
### Dataset Summary
PhEnText is a large-scale, multi-domain lexical dataset written in Philippine English.
It is composed of 20,562,265 lines from news articles, religious articles, and court decisions.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
ph-en
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
id: "3128940",
text: "Why this happened should be the focus of inquiry."
### Data Splits
An 80:20 split between train and test data.
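A minimal loading sketch (this assumes the repository's files resolve with the default 🤗 Datasets loader; split names follow the 80:20 split above):
```python
from datasets import load_dataset

ds = load_dataset("joypersicanon/ph-en-text")
print(ds)                      # expected: train/test splits per the 80:20 split above
print(ds["train"][0]["text"])  # fields per the Data Fields section: id, text
```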
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | joypersicanon/ph-en-text | [
"region:us"
] | 2022-03-13T10:16:38+00:00 | {} | 2022-03-17T13:30:52+00:00 |
cc60812b3dc5abb00043962616195c023c7c27a2 |
# Top Quark Tagging Reference Dataset
A set of MC simulated training/testing events for the evaluation of top quark tagging architectures.
In total there are 1.2M training events, 400k validation events and 400k test events. Use “train” for training, “val” for validation during training, and “test” for final testing and reporting results.
## Description
* 14 TeV, hadronic tops for signal, QCD dijets as background, Delphes ATLAS detector card with Pythia8
* No MPI/pile-up included
* Clustering of particle-flow entries (produced by Delphes E-flow) into anti-kT 0.8 jets in the pT range [550,650] GeV
* All top jets are matched to a parton-level top within ∆R = 0.8, and to all top decay partons within 0.8
* Jets are required to have |eta| < 2
* The leading 200 jet constituent four-momenta are stored, with zero-padding for jets with fewer than 200
* Constituents are sorted by pT, with the highest pT one first
* The truth top four-momentum is stored as truth_px etc.
* A flag (1 for top, 0 for QCD) is kept for each jet. It is called is_signal_new
* The variable "ttv" (= test/train/validation) is kept for each jet. It indicates to which dataset the jet belongs. It is redundant as the different sets are already distributed as different files. | lewtun/top_quark_tagging | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-13T16:55:31+00:00 | {"license": "cc-by-4.0"} | 2022-04-03T13:26:05+00:00 |
845aaad797f618d1f8c9b42c3cb5919f0becdb2a |
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
| wanyu/IteraTeR_full_sent | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | 2022-03-13T19:29:50+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "IteraTeR_full_sent", "language_bcp47": ["en-US"], "tags": ["conditional-text-generation", "text-editing"]} | 2022-10-24T17:58:37+00:00 |
792d5310cc82446cccfd3cd8953893b831538976 |
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
| wanyu/IteraTeR_full_doc | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | 2022-03-13T20:41:13+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "IteraTeR_full_doc", "language_bcp47": ["en-US"], "tags": ["conditional-text-generation", "text-editing"]} | 2022-10-24T17:58:30+00:00 |
e22e0371dac444239b944f9293f5b491d62b73f0 |
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
| wanyu/IteraTeR_human_sent | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | 2022-03-13T20:46:23+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "IteraTeR_human_sent", "language_bcp47": ["en-US"], "tags": ["conditional-text-generation", "text-editing"]} | 2022-10-24T17:58:22+00:00 |
3b0bdabb090d04062ebc17e54ac889a64f5cb791 |
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
| wanyu/IteraTeR_human_doc | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | 2022-03-13T20:48:31+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "IteraTeR-human-doc", "language_bcp47": ["en-US"], "tags": ["conditional-text-generation", "text-editing"]} | 2022-10-24T17:58:15+00:00 |
1a2b7bc94feea59665740ea295e504c41b8f9c39 | # AutoNLP Dataset for project: ALBERTFINALYEAR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project ALBERTFINALYEAR.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "Hasidic or Chasidic Judaism overlaps significantly with Haredi Judaism in its engagement with the se[...]",
"question": "What overlaps significantly with Haredi Judiasm?",
"answers.text": [
"Chasidic Judaism"
],
"answers.answer_start": [
11
]
},
{
"context": "Data compression can be viewed as a special case of data differencing: Data differencing consists of[...]",
"question": "What can classified as data differencing with empty source data?",
"answers.text": [
"Data compression",
"data compression"
],
"answers.answer_start": [
0,
400
]
}
]
```
### Data Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)"
}
```
### Data Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 87433 |
| valid | 10544 |
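A minimal loading sketch (this assumes the repository is public and loads with the default 🤗 Datasets loader; split names follow the table above):
```python
from datasets import load_dataset

ds = load_dataset("Aclairs/ALBERTFINALYEAR")

sample = ds["train"][0]
print(sample["question"])
print(sample["answers.text"], sample["answers.answer_start"])
```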
| Aclairs/ALBERTFINALYEAR | [
"region:us"
] | 2022-03-14T05:29:43+00:00 | {} | 2022-03-14T05:56:07+00:00 |
60f09088d7bfcd3f480609cbf4eeb7571415af81 | reatiny/chinese-spam-10000 | [
"region:us"
] | 2022-03-14T06:53:44+00:00 | {} | 2022-03-16T05:44:04+00:00 |
|
bb60660d157a96f5beae964140c7f52c11c5c3f5 | alkzzz/palui | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-14T07:09:11+00:00 | {"license": "cc-by-4.0"} | 2022-03-14T07:32:35+00:00 |
|
e0536f5bfc7c35bb62f104bb2400c2b36b6029ef | # GEM Submission
Submission name: This is a test
| GEM-submissions/lewtun__this-is-a-test__1647246406 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-14T08:26:50+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]} | 2022-03-14T08:26:51+00:00 |
1d84bb9af6e19a7cd6860f4e3149f951e7c1c018 | # GEM Submission
Submission name: mT5_xl
| GEM-submissions/lewtun__mt5_xl__1647246454 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-14T08:27:38+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "mT5_xl", "tags": ["evaluation", "benchmark"]} | 2022-03-14T08:27:39+00:00 |
2bd261e242dd6801c5bf27ed6dfbe28309ba0387 | # GEM Submission
Submission name: This is a test
| GEM-submissions/lewtun__this-is-a-test__1647247409 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-14T08:43:33+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]} | 2022-03-14T08:43:34+00:00 |
39719e276a1e76288e53e4ab8743ffb0ceb7bbe0 |
# Dataset Card for BLURB
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://microsoft.github.io/BLURB/index.html
- **Paper:** [Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing](https://arxiv.org/pdf/2007.15779.pdf)
- **Leaderboard:** https://microsoft.github.io/BLURB/leaderboard.html
- **Point of Contact:**
### Dataset Summary
BLURB is a collection of resources for biomedical natural language processing. In general domains, such as newswire and the Web, comprehensive benchmarks and leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. In biomedicine, however, such resources are ostensibly scarce. In the past, there has been a plethora of shared tasks in biomedical NLP, such as BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These efforts have played a significant role in fueling interest and progress by the research community, but they typically focus on individual tasks. The advent of neural language models, such as BERT, provides a unifying foundation to leverage transfer learning from unlabeled text to support a wide range of NLP applications. To accelerate progress in biomedical pretraining strategies and task-specific methods, it is thus imperative to create a broad-coverage benchmark encompassing diverse biomedical tasks.
Inspired by prior efforts in this direction (e.g., BLUE), we have created BLURB (short for Biomedical Language Understanding and Reasoning Benchmark). BLURB comprises a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact.
#### BC5-chem
The corpus consists of three separate sets of
articles with diseases, chemicals and their relations annotated.
The training (500 articles) and development (500 articles) sets
were released to task participants in advance to support text-mining
method development. The test set (500 articles) was used for final
system performance evaluation.
- **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus
- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
- **Paper:** [BioCreative V CDR task corpus: a resource for chemical disease relation extraction](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/)
#### BC5-disease
The corpus consists of three separate sets of
articles with diseases, chemicals and their relations annotated.
The training (500 articles) and development (500 articles) sets
were released to task participants in advance to support text-mining
method development. The test set (500 articles) was used for final
system performance evaluation.
- **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-v-cdr-corpus
- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
- **Paper:** [BioCreative V CDR task corpus: a resource for chemical disease relation extraction](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4860626/)
#### BC2GM
The BioCreative II Gene Mention task.
The training corpus for the current task consists mainly of
the training and testing corpora (text collections) from the
BCI task, and the testing corpus for the current task
consists of an additional 5,000 sentences that were held
'in reserve' from the previous task.
In the current corpus, tokenization is not provided;
instead participants are asked to identify a gene mention
in a sentence by giving its start and end characters.
As before, the training set consists of a set of sentences,
and for each sentence a set of gene mentions
(GENE annotations).
- **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-ii/task-1a-gene-mention-tagging/
- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
- **Paper:** [Overview of BioCreative II gene mention recognition](https://link.springer.com/article/10.1186/gb-2008-9-s2-s2)
#### NCBI Disease
The NCBI disease corpus is fully annotated at the mention
and concept level to serve as a research resource for the biomedical natural
language processing community.
Corpus Characteristics
----------------------
* 793 PubMed abstracts
* 6,892 disease mentions
* 790 unique disease concepts
* Medical Subject Headings (MeSH®)
* Online Mendelian Inheritance in Man (OMIM®)
* 91% of the mentions map to a single disease concept
* Divided into training, development and testing sets
Corpus Annotation
* Fourteen annotators
* Two-annotators per document (randomly paired)
* Three annotation phases
* Checked for corpus-wide consistency of annotations
- **Homepage:** https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/
- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
- **Paper:** [NCBI disease corpus: a resource for disease name recognition and concept normalization](https://pubmed.ncbi.nlm.nih.gov/24393765/)
#### JNLPBA
The BioNLP / JNLPBA Shared Task 2004 involves the identification
and classification of technical terms referring to concepts of interest to
biologists in the domain of molecular biology. The task was organized by GENIA
Project based on the annotations of the GENIA Term corpus (version 3.02).
Corpus format: The JNLPBA corpus is distributed in IOB format, with each line
containing a single token and its tag, separated by a tab character.
Sentences are separated by blank lines.
- **Homepage:** http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004
- **Repository:** [NER GitHub repo by @GamalC](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/raw/master/data/)
- **Paper:** [Introduction to the Bio-entity Recognition Task at JNLPBA](https://aclanthology.org/W04-1213)
#### EBM PICO
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
#### ChemProt
- **Homepage:**
- **Repository:**
- **Paper:**
#### DDI
- **Homepage:**
- **Repository:**
- **Paper:**
#### GAD
- **Homepage:**
- **Repository:**
- **Paper:**
#### BIOSSES
BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/) containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.
The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:
- very strong: 0.80–1.00
- strong: 0.60–0.79
- moderate: 0.40–0.59
- weak: 0.20–0.39
- very weak: 0.00–0.19
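As a minimal illustration of the evaluation protocol (not part of the original card), the Pearson correlation between gold and predicted scores can be computed with SciPy; the score values below are made up:
```python
from scipy.stats import pearsonr

gold = [2.2, 4.0, 0.5, 3.1]  # hypothetical human-annotated similarity scores (0-4)
pred = [2.5, 3.7, 0.9, 2.8]  # hypothetical model estimates

r, _ = pearsonr(gold, pred)
print(f"Pearson r = {r:.2f}")  # interpret with the Evans (1996) guideline above
```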
- **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html
- **Repository:** https://github.com/gizemsogancioglu/biosses
- **Paper:** [BIOSSES: a semantic sentence similarity estimation system for the biomedical domain](https://academic.oup.com/bioinformatics/article/33/14/i49/3953954)
- **Point of Contact:** [Gizem Soğancıoğlu](gizemsogancioglu@gmail.com) and [Arzucan Özgür](gizemsogancioglu@gmail.com)
#### HoC
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
#### PubMedQA
We introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. PubMedQA is publicly available at https://pubmedqa.github.io/.
- **Homepage:** https://pubmedqa.github.io/
- **Repository:** https://github.com/pubmedqa/pubmedqa
- **Paper:** [PubMedQA: A Dataset for Biomedical Research Question Answering](https://arxiv.org/pdf/1909.06146.pdf)
- **Leaderboard:** [Question answering](https://pubmedqa.github.io/)
- **Point of Contact:**
#### BioASQ
Task 7b will use benchmark datasets containing training and test biomedical questions, in English, along with gold standard (reference) answers. The participants will have to respond to each test question with relevant concepts (from designated terminologies and ontologies), relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), relevant RDF triples (from designated ontologies), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). 2747 training questions (that were used as dry-run or test questions in previous years) are already available, along with their gold standard answers (relevant concepts, articles, snippets, exact answers, summaries).
- **Homepage:** http://bioasq.org/
- **Repository:** http://participants-area.bioasq.org/datasets/
- **Paper:** [Automatic semantic classification of scientific literature according to the hallmarks of cancer](https://academic.oup.com/bioinformatics/article/32/3/432/1743783?login=false)
### Supported Tasks and Leaderboards
| **Dataset** | **Task** | **Train** | **Dev** | **Test** | **Evaluation Metrics** | **Added** |
|:------------:|:-----------------------:|:---------:|:-------:|:--------:|:----------------------:|-----------|
| BC5-chem | NER | 5203 | 5347 | 5385 | F1 entity-level | **Yes** |
| BC5-disease | NER | 4182 | 4244 | 4424 | F1 entity-level | **Yes** |
| NCBI-disease | NER | 5134 | 787 | 960 | F1 entity-level | **Yes** |
| BC2GM | NER | 15197 | 3061 | 6325 | F1 entity-level | **Yes** |
| JNLPBA | NER | 46750 | 4551 | 8662 | F1 entity-level | **Yes** |
| EBM PICO | PICO | 339167 | 85321 | 16364 | Macro F1 word-level | No |
| ChemProt | Relation Extraction | 18035 | 11268 | 15745 | Micro F1 | No |
| DDI | Relation Extraction | 25296 | 2496 | 5716 | Micro F1 | No |
| GAD | Relation Extraction | 4261 | 535 | 534 | Micro F1 | No |
| BIOSSES | Sentence Similarity | 64 | 16 | 20 | Pearson | **Yes** |
| HoC | Document Classification | 1295 | 186 | 371 | Average Micro F1 | No |
| PubMedQA | Question Answering | 450 | 50 | 500 | Accuracy | **Yes** |
| BioASQ | Question Answering | 670 | 75 | 140 | Accuracy | No |
Datasets used in the BLURB biomedical NLP benchmark. The train, dev, and test splits might not be exactly identical to those proposed in BLURB; this remains to be verified.
### Languages
English from biomedical texts
## Dataset Structure
### Data Instances
* **NER**
```json
{
'id': 0,
'tokens': [ "DPP6", "as", "a", "candidate", "gene", "for", "neuroleptic", "-", "induced", "tardive", "dyskinesia", "." ],
'ner_tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
}
```
* **PICO**
```json
{
'TBD'
}
```
* **Relation Extraction**
```json
{
'TBD'
}
```
* **Sentence Similarity**
```json
{'sentence 1': 'Here, looking for agents that could specifically kill KRAS mutant cells, they found that knockdown of GATA2 was synthetically lethal with KRAS mutation',
'sentence 2': 'Not surprisingly, GATA2 knockdown in KRAS mutant cells resulted in a striking reduction of active GTP-bound RHO proteins, including the downstream ROCK kinase',
'score': 2.2}
```
* **Document Classification**
```json
{
'TBD'
}
```
* **Question Answering**
* PubMedQA
```json
{'context': {'contexts': ['Programmed cell death (PCD) is the regulated death of cells within an organism. The lace plant (Aponogeton madagascariensis) produces perforations in its leaves through PCD. The leaves of the plant consist of a latticework of longitudinal and transverse veins enclosing areoles. PCD occurs in the cells at the center of these areoles and progresses outwards, stopping approximately five cells from the vasculature. The role of mitochondria during PCD has been recognized in animals; however, it has been less studied during PCD in plants.',
'The following paper elucidates the role of mitochondrial dynamics during developmentally regulated PCD in vivo in A. madagascariensis. A single areole within a window stage leaf (PCD is occurring) was divided into three areas based on the progression of PCD; cells that will not undergo PCD (NPCD), cells in early stages of PCD (EPCD), and cells in late stages of PCD (LPCD). Window stage leaves were stained with the mitochondrial dye MitoTracker Red CMXRos and examined. Mitochondrial dynamics were delineated into four categories (M1-M4) based on characteristics including distribution, motility, and membrane potential (ΔΨm). A TUNEL assay showed fragmented nDNA in a gradient over these mitochondrial stages. Chloroplasts and transvacuolar strands were also examined using live cell imaging. The possible importance of mitochondrial permeability transition pore (PTP) formation during PCD was indirectly examined via in vivo cyclosporine A (CsA) treatment. This treatment resulted in lace plant leaves with a significantly lower number of perforations compared to controls, and that displayed mitochondrial dynamics similar to that of non-PCD cells.'],
'labels': ['BACKGROUND', 'RESULTS'],
'meshes': ['Alismataceae',
'Apoptosis',
'Cell Differentiation',
'Mitochondria',
'Plant Leaves'],
'reasoning_free_pred': ['y', 'e', 's'],
'reasoning_required_pred': ['y', 'e', 's']},
'final_decision': 'yes',
'long_answer': 'Results depicted mitochondrial dynamics in vivo as PCD progresses within the lace plant, and highlight the correlation of this organelle with other organelles during developmental PCD. To the best of our knowledge, this is the first report of mitochondria and chloroplasts moving on transvacuolar strands to form a ring structure surrounding the nucleus during developmental PCD. Also, for the first time, we have shown the feasibility for the use of CsA in a whole plant system. Overall, our findings implicate the mitochondria as playing a critical and early role in developmentally regulated PCD in the lace plant.',
'pubid': 21645374,
'question': 'Do mitochondria play a role in remodelling lace plant leaves during programmed cell death?'}
```
### Data Fields
* **NER**
* `id`: string
* `ner_tags`: Sequence[ClassLabel]
* `tokens`: Sequence[String]
* **PICO**
* To be added
* **Relation Extraction**
* To be added
* **Sentence Similarity**
* `sentence 1`: string
* `sentence 2`: string
* `score`: float ranging from 0 (no relation) to 4 (equivalent)
* **Document Classification**
* To be added
* **Question Answering**
* PubMedQA
* `pubid`: integer
* `question`: string
* `context`: sequence of strings [`contexts`, `labels`, `meshes`, `reasoning_required_pred`, `reasoning_free_pred`]
* `long_answer`: string
* `final_decision`: string
### Data Splits
Shown in the table of supported tasks.
## Dataset Creation
### Curation Rationale
* BC5-chem
* BC5-disease
* BC2GM
* JNLPBA
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES
* HoC
* PubMedQA
* BioASQ
### Source Data
[More Information Needed]
### Annotations
All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.
#### Annotation process
* BC5-chem
* BC5-disease
* BC2GM
* JNLPBA
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES - The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.
* HoC
* PubMedQA
* BioASQ
### Dataset Curators
All the datasets have been obtained and annotated by experts in the biomedical domain. Check the different citations for further details.
### Licensing Information
* BC5-chem
* BC5-disease
* BC2GM
* JNLPBA
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES - BIOSSES is made available under the terms of [The GNU Common Public License v.3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).
* HoC
* PubMedQA - MIT License Copyright (c) 2019 pubmedqa
* BioASQ
### Citation Information
* BC5-chem & BC5-disease
```latex
@article{article,
author = {Li, Jiao and Sun, Yueping and Johnson, Robin and Sciaky, Daniela and Wei, Chih-Hsuan and Leaman, Robert and Davis, Allan Peter and Mattingly, Carolyn and Wiegers, Thomas and lu, Zhiyong},
year = {2016},
month = {05},
pages = {baw068},
title = {BioCreative V CDR task corpus: a resource for chemical disease relation extraction},
volume = {2016},
journal = {Database},
doi = {10.1093/database/baw068}
}
```
* BC2GM
```latex
@article{article,
author = {Smith, Larry and Tanabe, Lorraine and Ando, Rie and Kuo, Cheng-Ju and Chung, I-Fang and Hsu, Chun-Nan and Lin, Yu-Shi and Klinger, Roman and Friedrich, Christoph and Ganchev, Kuzman and Torii, Manabu and Liu, Hongfang and Haddow, Barry and Struble, Craig and Povinelli, Richard and Vlachos, Andreas and Baumgartner Jr, William and Hunter, Lawrence and Carpenter, Bob and Wilbur, W.},
year = {2008},
month = {09},
pages = {S2},
title = {Overview of BioCreative II gene mention recognition},
volume = {9 Suppl 2},
journal = {Genome biology},
doi = {10.1186/gb-2008-9-s2-s2}
}
```
* JNLPBA
```latex
@inproceedings{collier-kim-2004-introduction,
title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
author = "Collier, Nigel and
Kim, Jin-Dong",
booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})",
month = aug # " 28th and 29th",
year = "2004",
address = "Geneva, Switzerland",
publisher = "COLING",
url = "https://aclanthology.org/W04-1213",
pages = "73--78",
}
```
* NCBI Disease
```latex
@article{10.5555/2772763.2772800,
author = {Dogan, Rezarta Islamaj and Leaman, Robert and Lu, Zhiyong},
title = {NCBI Disease Corpus},
year = {2014},
issue_date = {February 2014},
publisher = {Elsevier Science},
address = {San Diego, CA, USA},
volume = {47},
number = {C},
issn = {1532-0464},
abstract = {Graphical abstractDisplay Omitted NCBI disease corpus is built as a gold-standard resource for disease recognition.793 PubMed abstracts are annotated with disease mentions and concepts (MeSH/OMIM).14 Annotators produced high consistency level and inter-annotator agreement.Normalization benchmark results demonstrate the utility of the corpus.The corpus is publicly available to the community. Information encoded in natural language in biomedical literature publications is only useful if efficient and reliable ways of accessing and analyzing that information are available. Natural language processing and text mining tools are therefore essential for extracting valuable information, however, the development of powerful, highly effective tools to automatically detect central biomedical concepts such as diseases is conditional on the availability of annotated corpora.This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Each PubMed abstract was manually annotated by two annotators with disease mentions and their corresponding concepts in Medical Subject Headings (MeSH ) or Online Mendelian Inheritance in Man (OMIM ). Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations. Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two annotation phases. In this setting, a high inter-annotator agreement was observed. Finally, all results were checked against annotations of the rest of the corpus to assure corpus-wide consistency.The public release of the NCBI disease corpus contains 6892 disease mentions, which are mapped to 790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the rest contain an OMIM identifier. We were able to link 91% of the mentions to a single disease concept, while the rest are described as a combination of concepts. In order to help researchers use the corpus to design and test disease identification methods, we have prepared the corpus as training, testing and development sets. To demonstrate its utility, we conducted a benchmarking experiment where we compared three different knowledge-based disease normalization methods with a best performance in F-measure of 63.7%. These results show that the NCBI disease corpus has the potential to significantly improve the state-of-the-art in disease name recognition and normalization research, by providing a high-quality gold standard thus enabling the development of machine-learning based approaches for such tasks.The NCBI disease corpus, guidelines and other associated resources are available at: http://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/.},
journal = {J. of Biomedical Informatics},
month = {feb},
pages = {1–10},
numpages = {10}}
```
* EBM PICO
* ChemProt
* DDI
* GAD
* BIOSSES
```latex
@article{souganciouglu2017biosses,
title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain},
author={So{\u{g}}anc{\i}o{\u{g}}lu, Gizem and {\"O}zt{\"u}rk, Hakime and {\"O}zg{\"u}r, Arzucan},
journal={Bioinformatics},
volume={33},
number={14},
pages={i49--i58},
year={2017},
publisher={Oxford University Press}
}
```
* HoC
* PubMedQA
```latex
@inproceedings{jin2019pubmedqa,
title={PubMedQA: A Dataset for Biomedical Research Question Answering},
author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={2567--2577},
year={2019}
}
```
* BioASQ
```latex
@article{10.1093/bioinformatics/btv585,
author = {Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and Högberg, Johan and Stenius, Ulla and Korhonen, Anna},
title = "{Automatic semantic classification of scientific literature according to the hallmarks of cancer}",
journal = {Bioinformatics},
volume = {32},
number = {3},
pages = {432-440},
year = {2015},
month = {10},
abstract = "{Motivation: The hallmarks of cancer have become highly influential in cancer research. They reduce the complexity of cancer into 10 principles (e.g. resisting cell death and sustaining proliferative signaling) that explain the biological capabilities acquired during the development of human tumors. Since new research depends crucially on existing knowledge, technology for semantic classification of scientific literature according to the hallmarks of cancer could greatly support literature review, knowledge discovery and applications in cancer research.Results: We present the first step toward the development of such technology. We introduce a corpus of 1499 PubMed abstracts annotated according to the scientific evidence they provide for the 10 currently known hallmarks of cancer. We use this corpus to train a system that classifies PubMed literature according to the hallmarks. The system uses supervised machine learning and rich features largely based on biomedical text mining. We report good performance in both intrinsic and extrinsic evaluations, demonstrating both the accuracy of the methodology and its potential in supporting practical cancer research. We discuss how this approach could be developed and applied further in the future.Availability and implementation: The corpus of hallmark-annotated PubMed abstracts and the software for classification are available at: http://www.cl.cam.ac.uk/∼sb895/HoC.html .Contact:simon.baker@cl.cam.ac.uk}",
issn = {1367-4803},
doi = {10.1093/bioinformatics/btv585},
url = {https://doi.org/10.1093/bioinformatics/btv585},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/32/3/432/19568147/btv585.pdf},
}
```
### Contributions
* This dataset has been uploaded and generated by Dr. Jorge Abreu Vicente.
* Thanks to [@GamalC](https://github.com/GamalC) for uploading the NER datasets to GitHub, from where I got them.
* I am not part of the team that generated BLURB. This dataset is intended to help researchers use the BLURB benchmark for biomedical NLP.
* Thanks to [@bwang482](https://github.com/bwang482) for uploading the [BIOSSES dataset](https://github.com/bwang482/datasets/tree/master/datasets/biosses). We forked the [BIOSSES 🤗 dataset](https://huggingface.co/datasets/biosses) to add it to this BLURB benchmark.
* Thank you to [@tuner007](https://github.com/tuner007) for adding this dataset to the 🤗 hub | EMBO/BLURB | [
"task_categories:question-answering",
"task_categories:token-classification",
"task_categories:sentence-similarity",
"task_categories:text-classification",
"task_ids:closed-domain-qa",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2007.15779",
"arxiv:1909.06146",
"region:us"
] | 2022-03-14T10:29:16+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering", "token-classification", "sentence-similarity", "text-classification"], "task_ids": ["closed-domain-qa", "named-entity-recognition", "parsing", "semantic-similarity-scoring", "text-scoring", "topic-classification"], "pretty_name": "BLURB (Biomedical Language Understanding and Reasoning Benchmark.)"} | 2022-12-09T07:57:37+00:00 |
2e7a18495a4a6b869d49c68c6def0bffc7e1135e | # GEM Submission
Submission name: This is a test
| GEM-submissions/lewtun__this-is-a-test__1647256250 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-14T11:10:54+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]} | 2022-03-14T11:10:55+00:00 |
fac45b3184e0ce9b79eecac454acf17e0a51f94e |
# Dataset Card for WikiTableQuestions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [WikiTableQuestions homepage](https://nlp.stanford.edu/software/sempre/wikitable)
- **Repository:** [WikiTableQuestions repository](https://github.com/ppasupat/WikiTableQuestions)
- **Paper:** [Compositional Semantic Parsing on Semi-Structured Tables](https://arxiv.org/abs/1508.00305)
- **Leaderboard:** [WikiTableQuestions leaderboard on PaperWithCode](https://paperswithcode.com/dataset/wikitablequestions)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.
### Supported Tasks and Leaderboards
question-answering, table-question-answering
### Languages
en
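The dataset can be loaded via 🤗 Datasets; a minimal sketch (the `random-split-1` config name comes from this repo's metadata, which lists five random splits):
```python
from datasets import load_dataset

wtq = load_dataset("wikitablequestions", "random-split-1")

example = wtq["validation"][0]
print(example["question"])
print(example["table"]["header"])
print(example["answers"])
```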
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 29.27 MB
- **Size of the generated dataset:** 47.90 MB
- **Total amount of disk used:** 77.18 MB
An example of 'validation' looks as follows:
```
{
"id": "nt-0",
"question": "what was the last year where this team was a part of the usl a-league?",
"answers": ["2004"],
"table": {
"header": ["Year", "Division", "League", ...],
"name": "csv/204-csv/590.csv",
"rows": [
["2001", "2", "USL A-League", ...],
["2002", "2", "USL A-League", ...],
...
]
}
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a `list` of `string` features.
- `table`: a dictionary feature containing:
  - `header`: a `list` of `string` features.
  - `rows`: a `list` of `list` of `string` features.
  - `name`: a `string` feature.
### Data Splits
| name |train|validation|test |
|-------|----:|---------:|----:|
|default|11321| 2831|4344|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Panupong Pasupat and Percy Liang
### Licensing Information
Creative Commons Attribution Share Alike 4.0 International
### Citation Information
```
@inproceedings{pasupat-liang-2015-compositional,
title = "Compositional Semantic Parsing on Semi-Structured Tables",
author = "Pasupat, Panupong and Liang, Percy",
booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = jul,
year = "2015",
address = "Beijing, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P15-1142",
doi = "10.3115/v1/P15-1142",
pages = "1470--1480",
}
```
### Contributions
Thanks to [@SivilTaram](https://github.com/SivilTaram) for adding this dataset. | wikitablequestions | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"table-question-answering",
"arxiv:1508.00305",
"region:us"
] | 2022-03-14T11:16:52+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": [], "pretty_name": "WikiTableQuestions", "tags": ["table-question-answering"], "dataset_info": [{"config_name": "random-split-1", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}, {"name": "name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30364389, "num_examples": 11321}, {"name": "test", "num_bytes": 11423506, "num_examples": 4344}, {"name": "validation", "num_bytes": 7145768, "num_examples": 2831}], "download_size": 29267445, "dataset_size": 48933663}, {"config_name": "random-split-2", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}, {"name": "name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30098954, "num_examples": 11314}, {"name": "test", "num_bytes": 11423506, "num_examples": 4344}, {"name": "validation", "num_bytes": 7411203, "num_examples": 2838}], "download_size": 29267445, "dataset_size": 48933663}, {"config_name": "random-split-3", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}, {"name": "name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 28778697, "num_examples": 11314}, {"name": "test", "num_bytes": 11423506, "num_examples": 4344}, {"name": "validation", "num_bytes": 8731460, "num_examples": 2838}], "download_size": 29267445, "dataset_size": 48933663}, {"config_name": "random-split-4", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}, {"name": "name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30166421, "num_examples": 11321}, {"name": "test", "num_bytes": 11423506, "num_examples": 4344}, {"name": "validation", "num_bytes": 7343736, "num_examples": 2831}], "download_size": 29267445, "dataset_size": 48933663}, {"config_name": "random-split-5", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "table", "struct": [{"name": "header", "sequence": "string"}, {"name": "rows", "sequence": {"sequence": "string"}}, {"name": "name", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30333964, "num_examples": 11316}, {"name": "test", "num_bytes": 11423506, "num_examples": 4344}, {"name": "validation", "num_bytes": 7176193, "num_examples": 2836}], "download_size": 29267445, "dataset_size": 48933663}]} | 2024-01-18T11:19:00+00:00 |
b6ef2478821cfd61a28b32b10598cf2d23608d33 |
# UK PV dataset
PV solar generation data from the UK.
This dataset contains data from 1311 PV systems from 2018 to 2021.
Time granularity varies from 2 minutes to 30 minutes.
This data is collected from live PV systems in the UK. We have obfuscated the location of the PV systems for privacy.
If you are the owner of a PV system in the dataset, and do not want this data to be shared,
please do get in contact with info@openclimatefix.org.
## Files
- metadata.csv: Data about the PV systems, e.g location
- 2min.parquet: Power output for PV systems every 2 minutes.
- 5min.parquet: Power output for PV systems every 5 minutes.
- 30min.parquet: Power output for PV systems every 30 minutes.
- pv.netcdf: (legacy) Time series of PV solar generation every 5 minutes
### metadata.csv
Metadata of the different PV systems.
Note that there are extra PV systems in this metadata that do not appear in the PV time-series data.
The csv columns are:
- ss_id: the id of the system
- latitude_rounded: latitude of the PV system, but rounded to approximately the nearest km
- longitude_rounded: longitude of the PV system, but rounded to approximately the nearest km
- llsoacd: TODO
- orientation: The orientation of the PV system
- tilt: The tilt of the PV system
- kwp: The capacity of the PV system
- operational_at: the datetime the PV system started working
### {2,5,30}min.parquet
Time series of solar generation for a number of systems.
Each file includes the systems for which there is enough granularity.
In particular the systems in 2min.parquet and 5min.parquet are also in 30min.parquet.
The files contain 3 columns:
- ss_id: the id of the system
- timestamp: the timestamp
- generation_wh: the generated power (in kW) at the given timestamp for the given system
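A minimal sketch of working with one of these files, assuming it has been downloaded locally and a parquet engine such as pyarrow is installed:
```python
import pandas as pd

# Assumes 30min.parquet and metadata.csv were downloaded from this repo.
df = pd.read_parquet("30min.parquet")
meta = pd.read_csv("metadata.csv")

# Attach the rounded location of each system to its time series.
df = df.merge(meta[["ss_id", "latitude_rounded", "longitude_rounded"]], on="ss_id")
print(df.groupby("ss_id")["generation_wh"].mean().head())
```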
### pv.netcdf (legacy)
Time series data of PV solar generation data is in an [xarray](https://docs.xarray.dev/en/stable/) format.
The data variables are the same as 'ss_id' in the metadata.
Each data variable contains the solar generation (in kW) for that PV system.
The ss_id's here are a subset of all the ss_id's in the metadata.
The coordinates of the data are tagged as 'datetime', which is the datetime of the solar generation reading.
This is a subset of the more recent `5min.parquet` file.
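A minimal sketch for the legacy file, assuming it has been downloaded locally and a NetCDF backend (e.g. h5netcdf or netCDF4) is installed:
```python
import xarray as xr

# Each data variable is one PV system (named by its ss_id); values are generation in kW.
ds = xr.open_dataset("pv.netcdf")

first_system = sorted(ds.data_vars)[0]
print(ds[first_system].dropna("datetime").head())
```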
## Example
Using Hugging Face Datasets:
```python
from datasets import load_dataset
dataset = load_dataset("openclimatefix/uk_pv")
```
## Useful links
https://huggingface.co/docs/datasets/share - this repo was made by following this tutorial | openclimatefix/uk_pv | [
"task_categories:time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"language:en",
"license:mit",
"pv",
"photovoltaic",
"environment",
"climate",
"energy",
"electricity",
"doi:10.57967/hf/0878",
"region:us"
] | 2022-03-14T12:20:19+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1B<n<10B"], "source_datasets": ["original"], "task_categories": ["time-series-forecasting"], "task_ids": ["multivariate-time-series-forecasting"], "pretty_name": "United Kingdom PV Solar generation", "tags": ["pv", "photovoltaic", "environment", "climate", "energy", "electricity"]} | 2022-11-30T17:02:42+00:00 |
090cbc0841fe628b18037e73de742959bffaec77 | # GEM Submission
Submission name: This is a test
| GEM-submissions/lewtun__this-is-a-test__1647263213 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-14T13:06:57+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]} | 2022-03-14T13:06:58+00:00 |
d2146561ecc7df707d9e6b8318885fe6a39668a2 |
# Dataset Card for GTZAN
## Table of Contents
- [Dataset Card for GTZAN](#dataset-card-for-gtzan)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://marsyas.info/downloads/datasets.html](http://marsyas.info/downloads/datasets.html)
- **Paper:** [http://ismir2001.ismir.net/pdf/tzanetakis.pdf](http://ismir2001.ismir.net/pdf/tzanetakis.pdf)
- **Point of Contact:**
### Dataset Summary
GTZAN is a dataset for musical genre classification of audio signals. The dataset consists of 1,000 audio tracks, each 30 seconds long. It contains 10 genres, each represented by 100 tracks. The tracks are all 22,050 Hz mono 16-bit audio files in WAV format. The genres are: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, and rock.
### Languages
English
## Dataset Structure
GTZAN is distributed as a single dataset without a predefined training and test split. The information below refers to the single `train` split that is assigned by default.
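A minimal loading sketch (this assumes the `marsyas/gtzan` Hub id shown in this card resolves with its default config):
```python
from datasets import load_dataset

gtzan = load_dataset("marsyas/gtzan", split="train")

sample = gtzan[0]
print(sample["genre"])                   # integer ClassLabel
print(sample["audio"]["sampling_rate"])  # 22050
```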
### Data Instances
An example of GTZAN looks as follows:
```python
{
"file": "/path/to/cache/genres/blues/blues.00000.wav",
"audio": {
"path": "/path/to/cache/genres/blues/blues.00000.wav",
"array": array(
[
0.00732422,
0.01660156,
0.00762939,
...,
-0.05560303,
-0.06106567,
-0.06417847,
],
dtype=float32,
),
"sampling_rate": 22050,
},
"genre": 0,
}
```
### Data Fields
The types associated with each of the data fields are as follows:
* `file`: a `string` feature.
* `audio`: an `Audio` feature containing the `path` of the sound file, the decoded waveform in the `array` field, and the `sampling_rate`.
* `genre`: a `ClassLabel` feature.
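To make the fields above concrete, here is a minimal loading sketch that maps the integer `genre` label back to its name. The repo id `marsyas/gtzan` is an assumption based on this card's location, and some versions of the loader may additionally require a configuration name; adjust as needed.

```python
from datasets import load_dataset

# A minimal sketch, assuming the Hub repo id "marsyas/gtzan".
gtzan = load_dataset("marsyas/gtzan", split="train")

# `genre` is a ClassLabel, so the integer can be mapped back to a name.
label_names = gtzan.features["genre"].names
sample = gtzan[0]
print(sample["audio"]["sampling_rate"])  # 22050
print(label_names[sample["genre"]])      # e.g. "blues"
```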
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{tzanetakis_essl_cook_2001,
author = "Tzanetakis, George and Essl, Georg and Cook, Perry",
title = "Automatic Musical Genre Classification Of Audio Signals",
url = "http://ismir2001.ismir.net/pdf/tzanetakis.pdf",
publisher = "The International Society for Music Information Retrieval",
year = "2001"
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset. | marsyas/gtzan | [
"region:us"
] | 2022-03-14T14:54:59+00:00 | {"pretty_name": "GTZAN"} | 2023-11-26T18:57:29+00:00 |
73a091b01dfbf7865ee2d1ebef45f2e0cc7c6f73 |
# Dataset Card for GEM/xwikis
## Dataset Description
- **Homepage:** https://github.com/lauhaide/clads
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/2202.09583
- **Leaderboard:** N/A
- **Point of Contact:** Laura Perez-Beltrachini
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/xwikis).
### Dataset Summary
The XWikis Corpus provides datasets with different language pairs and directions for cross-lingual and multi-lingual abstractive document summarisation.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/xwikis')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/xwikis).
#### website
[Github](https://github.com/lauhaide/clads)
#### paper
https://arxiv.org/abs/2202.09583
#### authors
Laura Perez-Beltrachini (University of Edinburgh)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/lauhaide/clads)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://arxiv.org/abs/2202.09583
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@InProceedings{clads-emnlp,
author = "Laura Perez-Beltrachini and Mirella Lapata",
title = "Models and Datasets for Cross-Lingual Summarisation",
booktitle = "Proceedings of The 2021 Conference on Empirical Methods in Natural Language Processing ",
year = "2021",
address = "Punta Cana, Dominican Republic",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Laura Perez-Beltrachini
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
lperez@ed.ac.uk
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`German`, `English`, `French`, `Czech`, `Chinese`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Cross-lingual and Multi-lingual single long input document abstractive summarisation.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Entity descriptive summarisation, that is, generate a summary that conveys the most salient facts of a document related to a given entity.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Laura Perez-Beltrachini (University of Edinburgh) and Ronald Cardenas (University of Edinburgh)
### Dataset Structure
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
For each language pair and direction there exists a train/valid/test split.
The test split is a sample of size 7k from the intersection of titles existing in all four languages (cs, fr, en, de).
Train/valid are randomly split.
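As a minimal sketch, the splits can be inspected per direction. The config name `"de-en"` below is an assumption based on the language pairs described in this card; check the data loader for the exact config names.

```python
import datasets

# A minimal sketch; "de-en" is an assumed (source-target) config name.
data = datasets.load_dataset("GEM/xwikis", "de-en")
for split in ("train", "validation", "test"):
    print(split, len(data[split]))
```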
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
- identification of entity salient information
- translation
- multi-linguality
- cross-lingual transfer, zero-shot, few-shot
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
ROUGE-1/2/L
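For reference, here is a minimal ROUGE evaluation sketch using Google's `rouge-score` package (`pip install rouge-score`); the reference and prediction strings are toy placeholders.

```python
from rouge_score import rouge_scorer

# Scores a single (reference, prediction) pair on ROUGE-1/2/L.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "the model generates an entity-centric summary"
prediction = "the model produces a summary about the entity"
scores = scorer.score(reference, prediction)
for name, score in scores.items():
    print(name, round(score.fmeasure, 3))
```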
## Dataset Curation
### Original Curation
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
other
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
found
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The input documents have section structure information.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by another rater
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Bilingual annotators assessed the content overlap of source document and target summaries.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`public domain`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
| GEM/xwikis | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:de",
"language:en",
"language:fr",
"language:cs",
"license:cc-by-sa-4.0",
"arxiv:2202.09583",
"region:us"
] | 2022-03-14T15:31:48+00:00 | {"annotations_creators": ["found"], "language_creators": ["unknown"], "language": ["de", "en", "fr", "cs"], "license": ["cc-by-sa-4.0"], "multilinguality": ["unknown"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "pretty_name": "xwikis"} | 2023-02-22T13:05:19+00:00 |
4e5006435c3e73467b513619809df955ee157c3b | EALeon16/poems | [
"license:wtfpl",
"region:us"
] | 2022-03-14T17:50:30+00:00 | {"license": "wtfpl"} | 2022-03-14T17:50:45+00:00 |
|
648664f0f63aa5901cc1bcdc2922558433c07dc7 |
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [github.com/oscar-corpus/corpus](https://github.com/oscar-corpus/corpus)
- **Paper:** [Towards a Cleaner Document-Oriented Multilingual Crawled Corpus](https://oscar-corpus.com/publication/2022/arxiv/towards/)
- **Point of Contact:** [Contact](https://oscar-corpus.com/#contact)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled **A**ggregated co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [ungoliant](https://github.com/oscar-corpus/ungoliant) architecture. Data is distributed by language in both original and deduplicated form.
**We are aware of the virus warnings issue. See discussion [here](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201/discussions/12) for more info!**
### Usage
```py
from datasets import load_dataset
dataset = load_dataset("oscar-corpus/OSCAR-2201",
use_auth_token=True, # required
language="ar",
streaming=True, # optional
split="train") # optional, but the dataset only has a train split
for d in dataset:
print(d) # prints documents
```
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pretrain language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 151 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
### Issues
OSCAR 22.01 may have quality issues on low-size subcorpora, as has been the case in previous versions.
Note that since documents are identified as a whole, a given language subcorpus is expected to contain lines in other languages.
As an example, it is known and expected that the German subcorpus contains documents holding lines identified as Swiss German / Alemannic.
**If you encounter something that is unexpected, please file an issue here: https://github.com/oscar-corpus/corpus/issues.**
|Language code|Language|Issues|
|-------------|--------|------|
| | | |
## Dataset Structure
We show detailed information for all the configurations of the dataset.
### Data Instances
TODO
### Data Fields
* `id`: a `int64` feature.
* `content`: `string` Newline-separated content
* `warc_headers`: WARC Headers
* `warc_headers.content-length`: `int64` Content length (in bytes) **before** cleaning
* `warc_headers.content-type`: `string` MIME type
* `warc_headers.warc-block-digest`:`string` Algorithm name and calculated value of a digest applied to the full block of the record
* `warc_headers.warc-date`: `string` Crawl date (YYYY-MM-DDThh:mm:ssZ)
* `warc_headers.warc-identified-content-language`: `string` Comma-separated list of language identifications done by CommonCrawl (uses CLD3)
* `warc_headers.warc-record-id`: `string` Record ID
* `warc_headers.warc-refers-to`: `string` Record-ID of a single record for which the present record holds additional content
* `warc_headers.warc-target-uri`: `string` URI from where the content has been fetched
* `warc_headers.warc-type`: `string` Type of the WARC Record
* `metadata`: Metadata
* `metadata.identification.label`: `string` Language identification of the document
* `metadata.identification.prob`: `float` Confidence of the identification
* `metadata.annotation`: `[string]` Annotations of the document. `null` if none present. (Is `None` if using `datasets`)
* `metadata.sentence_identifications`: `[string]` List of line identifications. `null`/`None` can be present for lines that failed the identification step.
* `meta.offset`: `int64` line offset where the related text begins. Should be used with `meta.nb_sentences` when reading the source files rather than using iterators to get related data.
* `text`: `string` content
See the [WARC Format standard](https://iipc.github.io/warc-specifications/specifications/warc-format/warc-1.1/#warc-type-mandatory) for more details on the `warc_headers` fields, and our [website](https://oscar-corpus.com/post/oscar-v22-01/) for more details about the format in general.
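As a minimal sketch of working with these nested fields, one can filter a streamed subcorpus by the language-identification confidence. The key layout below follows the field list above; since the card also mentions a `meta` prefix for one field, print a document first to confirm the exact keys.

```python
from datasets import load_dataset

dataset = load_dataset("oscar-corpus/OSCAR-2201",
                       use_auth_token=True,  # required
                       language="br",        # a small subcorpus, for quick iteration
                       streaming=True,
                       split="train")

for doc in dataset:
    # Assumed key layout, following the field list above.
    prob = doc["metadata"]["identification"]["prob"]
    if prob >= 0.9:
        print(doc["text"][:80])
        break
```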
### Data Splits
<details>
<summary>Click to expand the number of samples per configuration</summary>
</details>
## Table
| lang | size | docs | words |
|:----------------------------|:----------|:------------|:----------------|
| _Multilingual_ | 12.1 GB | 1,210,685 | 936,187,711 |
| Afrikaans | 47.0 MB | 12,393 | 6,227,310 |
| Albanian | 3.0 GB | 437,287 | 326,325,149 |
| Alemannic / Swiss German | 363.6 kB | 139 | 37,381 |
| Amharic | 461.0 MB | 37,513 | 30,481,153 |
| Arabic | 84.2 GB | 8,718,929 | 6,103,711,887 |
| Aragonese | 10.6 kB | 12 | 51 |
| Armenian | 4.7 GB | 379,267 | 268,031,270 |
| Assamese | 221.2 MB | 17,084 | 11,109,557 |
| Asturian | 73.6 kB | 77 | 3,919 |
| Avaric | 18.6 kB | 14 | 582 |
| Azerbaijani | 3.5 GB | 491,847 | 291,927,692 |
| Bangla | 15.1 GB | 1,171,501 | 751,877,226 |
| Bashkir | 95.5 MB | 11,198 | 5,418,474 |
| Basque | 1.1 GB | 233,658 | 97,092,942 |
| Belarusian | 1.8 GB | 180,046 | 107,227,860 |
| Bihari languages | 24.2 kB | 27 | 569 |
| Bishnupriya | 2.0 MB | 271 | 98,419 |
| Bosnian | 10.3 kB | 10 | 422 |
| Breton | 33.7 MB | 16,119 | 3,111,619 |
| Bulgarian | 35.1 GB | 2,887,115 | 2,405,981,285 |
| Burmese | 1.9 GB | 158,733 | 44,835,970 |
| Catalan | 13.9 GB | 2,627,307 | 1,508,919,864 |
| Cebuano | 44.6 MB | 5,742 | 5,253,785 |
| Central Kurdish | 716.4 MB | 84,950 | 43,913,025 |
| Chechen | 14.0 MB | 4,086 | 798,766 |
| Chinese | 900.9 GB | 56,524,518 | 23,149,203,886 |
| Chuvash | 41.8 MB | 4,750 | 2,465,782 |
| Cornish | 1.4 kB | 2 | 55 |
| Croatian | 11.2 MB | 11,462 | 505,369 |
| Czech | 58.6 GB | 10,381,916 | 5,452,724,456 |
| Danish | 12.6 GB | 2,265,479 | 1,454,439,292 |
| Dimli (individual language) | 706 Bytes | 1 | 19 |
| Divehi | 217.2 MB | 24,067 | 10,112,205 |
| Dutch | 114.0 GB | 20,206,532 | 12,329,127,151 |
| Eastern Mari | 11.3 MB | 1,612 | 641,525 |
| Egyptian Arabic | 2.8 MB | 1,256 | 176,096 |
| English | 3.2 TB | 431,992,659 | 377,376,402,775 |
| Esperanto | 558.3 MB | 111,932 | 58,416,628 |
| Estonian | 9.2 GB | 1,362,524 | 820,975,443 |
| Filipino | 646.5 MB | 70,394 | 81,881,278 |
| Finnish | 37.8 GB | 4,948,961 | 2,900,615,928 |
| French | 382.2 GB | 52,037,098 | 41,713,990,658 |
| Galician | 255.2 MB | 88,803 | 27,051,212 |
| Georgian | 7.1 GB | 488,588 | 281,430,479 |
| German | 496.7 GB | 70,075,424 | 46,826,676,844 |
| Goan Konkani | 787.2 kB | 46 | 38,831 |
| Greek | 78.3 GB | 6,738,546 | 5,031,242,803 |
| Guarani | 9.0 kB | 10 | 374 |
| Gujarati | 4.8 GB | 136,467 | 301,170,777 |
| Hebrew | 30.3 GB | 3,132,396 | 2,249,377,984 |
| Hindi | 23.3 GB | 1,529,907 | 1,534,799,198 |
| Hungarian | 53.9 GB | 6,866,062 | 4,598,787,907 |
| Icelandic | 2.0 GB | 396,183 | 210,365,124 |
| Ido | 77.3 kB | 105 | 2,690 |
| Iloko | 97.9 kB | 75 | 8,592 |
| Indonesian | 17.4 GB | 2,244,622 | 1,984,195,207 |
| Interlingua | 40.2 kB | 6 | 10,125 |
| Irish | 45.6 MB | 12,233 | 4,877,850 |
| Italian | 229.3 GB | 28,502,092 | 24,294,684,830 |
| Japanese | 258.7 GB | 36,328,931 | 5,592,948,356 |
| Javanese | 152.7 kB | 70 | 10,441 |
| Kalmyk | 9.3 kB | 9 | 250 |
| Kannada | 2.6 GB | 150,850 | 108,450,571 |
| Karachay-Balkar | 119.6 kB | 91 | 4,089 |
| Kazakh | 2.9 GB | 261,085 | 157,267,307 |
| Khmer | 1.9 GB | 121,910 | 30,564,131 |
| Komi | 119.9 kB | 127 | 3,335 |
| Korean | 51.8 GB | 5,881,481 | 3,854,968,649 |
| Kurdish | 150.3 MB | 29,906 | 17,390,759 |
| Kyrgyz | 518.6 MB | 62,244 | 28,028,986 |
| Lao | 337.1 MB | 28,914 | 6,682,982 |
| Latin | 4.1 MB | 4,397 | 187,446 |
| Latvian | 8.2 GB | 1,032,987 | 707,361,898 |
| Lezghian | 375.5 kB | 124 | 19,250 |
| Limburgish | 1.4 kB | 2 | 41 |
| Lithuanian | 20.0 GB | 2,303,070 | 1,712,802,056 |
| Lojban | 1.9 MB | 570 | 260,542 |
| Lombard | 2.6 kB | 2 | 225 |
| Low German | 9.0 MB | 1,938 | 1,012,561 |
| Lower Sorbian | 707 Bytes | 1 | 17 |
| Luxembourgish | 15.8 MB | 5,108 | 1,545,946 |
| Macedonian | 3.6 GB | 341,775 | 244,058,579 |
| Maithili | 21.6 kB | 23 | 483 |
| Malagasy | 57.3 MB | 3,028 | 7,279,056 |
| Malay | 5.3 MB | 5,228 | 217,818 |
| Malayalam | 4.1 GB | 250,972 | 137,831,247 |
| Maltese | 2.5 MB | 2,208 | 118,190 |
| Marathi | 3.3 GB | 250,376 | 160,179,233 |
| Mazanderani | 128.2 kB | 76 | 7,337 |
| Minangkabau | 6.0 MB | 585 | 614,613 |
| Mingrelian | 7.6 MB | 2,550 | 253,333 |
| Mongolian | 2.8 GB | 237,719 | 176,405,432 |
| Nahuatl languages | 8.7 kB | 12 | 179 |
| Nepali | 3.7 GB | 391,947 | 177,885,116 |
| Newari | 5.7 MB | 1,134 | 273,837 |
| Norwegian | 2.8 GB | 973,188 | 279,182,902 |
| Norwegian Nynorsk | 6.8 MB | 5,835 | 459,183 |
| Occitan | 2.1 MB | 373 | 31,061 |
| Odia | 487.9 MB | 52,942 | 23,755,902 |
| Ossetic | 13.9 MB | 3,560 | 800,430 |
| Pashto | 490.3 MB | 50,312 | 46,293,249 |
| Persian | 77.4 GB | 7,665,871 | 6,430,164,396 |
| Piedmontese | 1.7 MB | 698 | 188,270 |
| Polish | 139.0 GB | 19,301,137 | 12,584,498,906 |
| Portuguese | 170.3 GB | 23,735,707 | 18,441,864,893 |
| Punjabi | 1.1 GB | 68,094 | 70,068,604 |
| Quechua | 744 Bytes | 1 | 14 |
| Romanian | 49.2 GB | 4,624,764 | 5,261,803,995 |
| Russia Buriat | 32.9 kB | 39 | 785 |
| Russian | 1.1 TB | 76,060,844 | 62,811,122,663 |
| Sakha | 65.6 MB | 6,284 | 3,473,813 |
| Sanskrit | 136.0 MB | 4,472 | 5,671,369 |
| Scottish Gaelic | 137.7 kB | 136 | 7,769 |
| Serbian | 6.9 GB | 577,472 | 482,932,670 |
| Serbian (Latin) | 931.8 kB | 738 | 92,875 |
| Sicilian | 1.5 kB | 2 | 50 |
| Sindhi | 117.1 MB | 15,516 | 10,685,611 |
| Sinhala | 2.0 GB | 108,593 | 113,179,741 |
| Slovak | 16.5 GB | 2,409,555 | 1,619,121,944 |
| Slovenian | 1.2 GB | 351,894 | 118,400,246 |
| Somali | 2.1 kB | 3 | 109 |
| South Azerbaijani | 14.1 MB | 5,381 | 693,746 |
| Spanish | 381.9 GB | 51,386,247 | 42,829,835,316 |
| Sundanese | 5.0 MB | 263 | 547,145 |
| Swahili | 1.3 MB | 462 | 123,050 |
| Swedish | 48.0 GB | 7,541,278 | 5,078,331,128 |
| Tajik | 870.9 MB | 46,366 | 56,627,727 |
| Tamil | 11.4 GB | 556,772 | 452,343,748 |
| Tatar | 915.3 MB | 76,398 | 51,875,265 |
| Telugu | 3.4 GB | 249,756 | 137,752,065 |
| Thai | 66.1 GB | 5,030,254 | 1,626,779,846 |
| Tibetan | 234.5 MB | 18,683 | 2,286,269 |
| Turkish | 75.1 GB | 10,826,031 | 6,421,221,358 |
| Turkmen | 4.4 MB | 2,485 | 276,632 |
| Ukrainian | 48.8 GB | 4,558,214 | 2,879,585,992 |
| Emiliano-Romagnolo[eml] | 901 Bytes | 1 | 53 |
| Upper Sorbian | 132.8 kB | 110 | 8,825 |
| Urdu | 3.4 GB | 336,994 | 332,816,354 |
| Uyghur | 201.9 MB | 18,556 | 11,240,889 |
| Uzbek | 19.9 MB | 9,526 | 1,370,842 |
| Vietnamese | 98.9 GB | 9,587,233 | 12,283,185,482 |
| Volapük | 825.9 kB | 661 | 57,039 |
| Walloon | 105.7 kB | 138 | 4,386 |
| Waray | 7.6 MB | 933 | 830,872 |
| Welsh | 409.3 MB | 90,378 | 49,488,495 |
| Western Frisian | 75.3 MB | 21,946 | 6,357,929 |
| Western Mari | 743.5 kB | 155 | 43,916 |
| Western Panjabi | 46.7 MB | 6,790 | 4,060,419 |
| Wu Chinese | 137.2 kB | 88 | 3,056 |
| Yiddish | 232.5 MB | 23,418 | 15,809,780 |
| Yoruba | 24.7 kB | 26 | 1,042 |
## Dataset Creation
### Curation Rationale
OSCAR was constructed using [`Ungoliant`](https://github.com/oscar-corpus/ungoliant), a new pipeline derived from [goclassy](https://github.com/oscar-corpus/goclassy), itself derived from [fastText's pipeline](https://github.com/facebookresearch/fastText).
The pipeline works on documents rather than lines.
`Ungoliant` is implemented in the [Rust programming language](https://rust-lang.org), and uses [rayon](https://github.com/rayon-rs/rayon) as its data parallelism strategy.
Threading is done at shard, record and sentence level, making the whole generation process much more efficient.
Filtering will be explained in a future blog post at our [website](https://oscar-corpus.com)
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR 22.01, the **November/December 2021** snapshot was used. It is composed of 64,000 compressed text files containing documents and their headers.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR has not been properly filtered yet, and this can be reflected in the models trained on it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Julien Abadji](https://ujj.space), [Pedro Ortiz Suarez](https://portizs.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@ARTICLE{2022arXiv220106642A,
author = {{Abadji}, Julien and {Ortiz Suarez}, Pedro and {Romary}, Laurent and {Sagot}, Beno{\^\i}t},
title = "{Towards a Cleaner Document-Oriented Multilingual Crawled Corpus}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2022,
month = jan,
eid = {arXiv:2201.06642},
pages = {arXiv:2201.06642},
archivePrefix = {arXiv},
eprint = {2201.06642},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220106642A},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
@ARTICLE{caswell-etal-2021-quality,
author = {{Caswell}, Isaac and {Kreutzer}, Julia and {Wang}, Lisa and {Wahab}, Ahsan and {van Esch}, Daan and {Ulzii-Orshikh}, Nasanbayar and {Tapo}, Allahsera and {Subramani}, Nishant and {Sokolov}, Artem and {Sikasote}, Claytone and {Setyawan}, Monang and {Sarin}, Supheakmungkol and {Samb}, Sokhar and {Sagot}, Beno{\^\i}t and {Rivera}, Clara and {Rios}, Annette and {Papadimitriou}, Isabel and {Osei}, Salomey and {Ortiz Su{\'a}rez}, Pedro Javier and {Orife}, Iroro and {Ogueji}, Kelechi and {Niyongabo}, Rubungo Andre and {Nguyen}, Toan Q. and {M{\"u}ller}, Mathias and {M{\"u}ller}, Andr{\'e} and {Hassan Muhammad}, Shamsuddeen and {Muhammad}, Nanda and {Mnyakeni}, Ayanda and {Mirzakhalov}, Jamshidbek and {Matangira}, Tapiwanashe and {Leong}, Colin and {Lawson}, Nze and {Kudugunta}, Sneha and {Jernite}, Yacine and {Jenny}, Mathias and {Firat}, Orhan and {Dossou}, Bonaventure F.~P. and {Dlamini}, Sakhile and {de Silva}, Nisansa and {{\c{C}}abuk Ball{\i}}, Sakine and {Biderman}, Stella and {Battisti}, Alessia and {Baruwa}, Ahmed and {Bapna}, Ankur and {Baljekar}, Pallavi and {Abebe Azime}, Israel and {Awokoya}, Ayodele and {Ataman}, Duygu and {Ahia}, Orevaoghene and {Ahia}, Oghenefego and {Agrawal}, Sweta and {Adeyemi}, Mofetoluwa},
title = "{Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language, Computer Science - Artificial Intelligence},
year = 2021,
month = mar,
eid = {arXiv:2103.12028},
pages = {arXiv:2103.12028},
archivePrefix = {arXiv},
eprint = {2103.12028},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210312028C},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
    author = "Ortiz Su{\'a}rez, Pedro Javier  and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
    author = {Pedro Javier {Ortiz Su{\'a}rez} and Beno{\^i}t Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
    editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
    publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox), [@Uinelj](https://github.com/Uinelj) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
| oscar-corpus/OSCAR-2201 | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:sq",
"language:am",
"language:ar",
"language:an",
"language:hy",
"language:as",
"language:ast",
"language:av",
"language:az",
"language:bn",
"language:ba",
"language:eu",
"language:be",
"language:bh",
"language:bpy",
"language:bs",
"language:br",
"language:bg",
"language:my",
"language:ca",
"language:ceb",
"language:ckb",
"language:ce",
"language:zh",
"language:cv",
"language:kw",
"language:hr",
"language:cs",
"language:da",
"language:diq",
"language:dv",
"language:nl",
"language:mhr",
"language:arz",
"language:en",
"language:eo",
"language:et",
"language:tl",
"language:fi",
"language:fr",
"language:gl",
"language:ka",
"language:de",
"language:gom",
"language:el",
"language:gn",
"language:gu",
"language:he",
"language:hi",
"language:hu",
"language:is",
"language:io",
"language:ilo",
"language:id",
"language:ia",
"language:ga",
"language:it",
"language:ja",
"language:jv",
"language:xal",
"language:kn",
"language:krc",
"language:kk",
"language:km",
"language:kv",
"language:ko",
"language:ku",
"language:ky",
"language:lo",
"language:la",
"language:lv",
"language:lez",
"language:li",
"language:lt",
"language:jbo",
"language:lmo",
"language:nds",
"language:dsb",
"language:lb",
"language:mk",
"language:mai",
"language:mg",
"language:ms",
"language:ml",
"language:mt",
"language:mr",
"language:mzn",
"language:min",
"language:xmf",
"language:mn",
"language:nah",
"language:ne",
"language:new",
"language:no",
"language:nn",
"language:oc",
"language:or",
"language:os",
"language:ps",
"language:fa",
"language:pms",
"language:pl",
"language:pt",
"language:pa",
"language:qu",
"language:ro",
"language:bxr",
"language:ru",
"language:sah",
"language:sa",
"language:gd",
"language:sr",
"language:sh",
"language:scn",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:azb",
"language:es",
"language:su",
"language:sw",
"language:sv",
"language:tg",
"language:ta",
"language:tt",
"language:te",
"language:th",
"language:bo",
"language:als",
"language:tr",
"language:tk",
"language:uk",
"language:eml",
"language:hsb",
"language:ur",
"language:ug",
"language:uz",
"language:vi",
"language:vo",
"language:wa",
"language:war",
"language:cy",
"language:fy",
"language:mrj",
"language:pnb",
"language:wuu",
"language:yi",
"language:yo",
"language:mul",
"license:cc0-1.0",
"arxiv:2010.14571",
"arxiv:2201.06642",
"arxiv:2103.12028",
"region:us"
] | 2022-03-14T23:09:14+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["af", "sq", "am", "ar", "an", "hy", "as", "ast", "av", "az", "bn", "ba", "eu", "be", "bh", "bpy", "bs", "br", "bg", "my", "ca", "ceb", "ckb", "ce", "zh", "cv", "kw", "hr", "cs", "da", "diq", "dv", "nl", "mhr", "arz", "en", "eo", "et", "tl", "fi", "fr", "gl", "ka", "de", "gom", "el", "gn", "gu", "he", "hi", "hu", "is", "io", "ilo", "id", "ia", "ga", "it", "ja", "jv", "xal", "kn", "krc", "kk", "km", "kv", "ko", "ku", "ky", "lo", "la", "lv", "lez", "li", "lt", "jbo", "lmo", "nds", "dsb", "lb", "mk", "mai", "mg", "ms", "ml", "mt", "mr", "mzn", "min", "xmf", "mn", "nah", "ne", "new", false, "nn", "oc", "or", "os", "ps", "fa", "pms", "pl", "pt", "pa", "qu", "ro", "bxr", "ru", "sah", "sa", "gd", "sr", "sh", "scn", "sd", "si", "sk", "sl", "so", "azb", "es", "su", "sw", "sv", "tg", "ta", "tt", "te", "th", "bo", "als", "tr", "tk", "uk", "eml", "hsb", "ur", "ug", "uz", "vi", "vo", "wa", "war", "cy", "fy", "mrj", "pnb", "wuu", "yi", "yo", "mul"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "source_datasets": ["original"], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["language-modeling"], "paperswithcode_id": "oscar", "pretty_name": "OSCAR"} | 2023-05-30T06:48:15+00:00 |
6e8665ced0dc6c8f274e1e496a2187b11fe0832d | # Dataset Card for Cartoon Set
## Table of Contents
- [Dataset Card for Cartoon Set](#dataset-card-for-cartoon-set)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://google.github.io/cartoonset/
- **Repository:** https://github.com/google/cartoonset/
- **Paper:** XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary

[Cartoon Set](https://google.github.io/cartoonset/) is a collection of random, 2D cartoon avatar images. The cartoons vary in 10 artwork categories, 4 color categories, and 4 proportion categories, with a total of ~10^13 possible combinations. We provide sets of 10k and 100k randomly chosen cartoons and labeled attributes.
#### Usage
`cartoonset` provides the images as PNG byte strings, which gives you a bit more flexibility in how to load the data. Here we show two ways:
**Using PIL:**
```python
import datasets
from io import BytesIO
from PIL import Image
ds = datasets.load_dataset("cgarciae/cartoonset", "10k") # or "100k"
def process_fn(sample):
img = Image.open(BytesIO(sample["img_bytes"]))
...
return {"img": img}
ds = ds.map(process_fn, remove_columns=["img_bytes"])
```
**Using TensorFlow:**
```python
import datasets
import tensorflow as tf
hfds = datasets.load_dataset("cgarciae/cartoonset", "10k") # or "100k"
ds = tf.data.Dataset.from_generator(
lambda: hfds,
output_signature={
"img_bytes": tf.TensorSpec(shape=(), dtype=tf.string),
},
)
def process_fn(sample):
img = tf.image.decode_png(sample["img_bytes"], channels=3)
...
return {"img": img}
ds = ds.map(process_fn)
```
**Additional features:**
You can also access the features that generated each sample e.g:
```python
ds = datasets.load_dataset("cgarciae/cartoonset", "10k+features") # or "100k+features"
```
Apart from `img_bytes` these configurations add a total of 18 * 2 additional `int` features, these come in `{feature}`, `{feature}_num_categories` pairs where `num_categories` indicates the number of categories for that feature. See [Data Fields](#data-fields) for the complete list of features.
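As a minimal sketch of how these pairs can be used, each categorical feature can be one-hot encoded with its companion `*_num_categories` field; `hair_color` below is picked arbitrarily from the feature list in [Data Fields](#data-fields).

```python
import numpy as np
import datasets

ds = datasets.load_dataset("cgarciae/cartoonset", "10k+features", split="train")

sample = ds[0]
# One-hot encode a categorical feature using its companion count field.
one_hot = np.zeros(sample["hair_color_num_categories"], dtype=np.float32)
one_hot[sample["hair_color"]] = 1.0
```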
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'img_bytes': b'0x...',
}
```
If `+features` is added to the dataset name, the following additional fields are provided:
```python
{
'img_bytes': b'0x...',
'eye_angle': 0,
'eye_angle_num_categories': 3,
'eye_lashes': 0,
'eye_lashes_num_categories': 2,
'eye_lid': 0,
'eye_lid_num_categories': 2,
'chin_length': 2,
'chin_length_num_categories': 3,
...
}
```
### Data Fields
- `img_bytes`: A byte string containing the raw data of a 500x500 PNG image.
If `+features` is appended to the dataset name, the following additional `int32` fields are provided:
- `eye_angle`
- `eye_angle_num_categories`
- `eye_lashes`
- `eye_lashes_num_categories`
- `eye_lid`
- `eye_lid_num_categories`
- `chin_length`
- `chin_length_num_categories`
- `eyebrow_weight`
- `eyebrow_weight_num_categories`
- `eyebrow_shape`
- `eyebrow_shape_num_categories`
- `eyebrow_thickness`
- `eyebrow_thickness_num_categories`
- `face_shape`
- `face_shape_num_categories`
- `facial_hair`
- `facial_hair_num_categories`
- `hair`
- `hair_num_categories`
- `eye_color`
- `eye_color_num_categories`
- `face_color`
- `face_color_num_categories`
- `hair_color`
- `hair_color_num_categories`
- `glasses`
- `glasses_num_categories`
- `glasses_color`
- `glasses_color_num_categories`
- `eyes_slant`
- `eye_slant_num_categories`
- `eyebrow_width`
- `eyebrow_width_num_categories`
- `eye_eyebrow_distance`
- `eye_eyebrow_distance_num_categories`
### Data Splits
Train
## Dataset Creation
### Licensing Information
This data is licensed by Google LLC under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@article{DBLP:journals/corr/abs-1711-05139,
author = {Amelie Royer and
Konstantinos Bousmalis and
Stephan Gouws and
Fred Bertsch and
Inbar Mosseri and
Forrester Cole and
Kevin Murphy},
title = {{XGAN:} Unsupervised Image-to-Image Translation for many-to-many Mappings},
journal = {CoRR},
volume = {abs/1711.05139},
year = {2017},
url = {http://arxiv.org/abs/1711.05139},
eprinttype = {arXiv},
eprint = {1711.05139},
timestamp = {Mon, 13 Aug 2018 16:47:38 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1711-05139.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
| cgarciae/cartoonset | [
"size_categories:10K<n<100K",
"license:cc-by-4.0",
"arxiv:1711.05139",
"region:us"
] | 2022-03-14T23:35:29+00:00 | {"license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["image", "computer-vision", "generative-modelling"], "pretty_name": "Cartoon Set"} | 2022-03-23T19:12:10+00:00 |
f887b0aa23f386116e46690f4630b2f2c204a880 |
# Dataset Card for "Hebrew_Squad_v1"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/TechnionTDK/hebwiki-qa/](https://github.com/TechnionTDK/hebwiki-qa/)
- **Size of train dataset files:** 62.3 MB
- **Size of validation dataset files:** 9.48 MB
- **Total amount of disk used:** 71.78 MB
### Dataset Summary
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. This Hebrew dataset is an automatic translation of the English SQuAD dataset https://huggingface.co/datasets/squad.
### Supported Tasks and Leaderboards
Extractive Question-Answering
### Languages
Hebrew
## Dataset Structure
Follows the standard SQuAD format.
### Data Instances
#### plain_text
- **Size of train dataset files:** 62.3 MB
- **Size of validation dataset files:** 9.48 MB
- **Total amount of disk used:** 71.78 MB
An example of 'train' looks as follows.
```
{
"id": "56be4db0acb8001400a502ee",
"title": "Super_Bowl_50",
"context": "סופרבול 50 היה משחק כדורגל אמריקאי כדי לקבוע את אלופת ליגת הפוטבול הלאומית (NFL) לעונת 2015. אלופת ועידת הכדורגל האמריקאית (AFC) דנבר ברונקוס ניצחה את אלופת ועידת הכדורגל הלאומית (NFC) קרולינה פנתרס 24–10 כדי לזכות בתואר הסופרבול השלישי שלה. המשחק נערך ב-7 בפברואר 2016 באצטדיון ליווי'ס באזור מפרץ סן פרנסיסקו בסנטה קלרה, קליפורניה. מכיוון שזה היה הסופרבול ה-50, הליגה הדגישה את יום השנה הזהב עם יוזמות שונות בנושא זהב, כמו גם השעיה זמנית את המסורת של שם כל משחק סופרבול עם ספרות רומיות (שתחתן המשחק היה ידוע בתור סופרבול L ), כך שהלוגו יוכל להציג באופן בולט את הספרות הערביות 50.",
"question": "היכן התקיים סופרבול 50?",
"answers": {
"text": ["סנטה קלרה, קליפורניה", "אצטדיון ליווי"],
"answer_start": [311, 271]
}
}
```
### Data Fields
The data fields are the same among all splits.
#### Hebrew_Squad_v1
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name |train|validation|
|----------|----|---------|
|Hebrew_Squad_v1|52405| 7455|
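A minimal loading sketch, assuming the repo id `tdklab/Hebrew_Squad_v1` suggested by this card; it also sanity-checks that a gold answer span matches its context slice:

```python
from datasets import load_dataset

squad_he = load_dataset("tdklab/Hebrew_Squad_v1")

example = squad_he["train"][0]
start = example["answers"]["answer_start"][0]
text = example["answers"]["text"][0]
# The SQuAD convention: the answer text is a span of the context.
assert example["context"][start:start + len(text)] == text
```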
### Contributions
Created by Matan Ben-chorin and May Flaster, guided by Dr. Oren Mishali.
This is our final project as part of our computer engineering B.Sc. studies in the Faculty of Electrical Engineering, combined with Computer Science, at the Technion, Israel Institute of Technology.
For questions or cooperation, please contact us by email:
Matan Ben-chorin: matan.bh1@gmail.com
May Flaster: mayflaster96@gmail.com
| tdklab/Hebrew_Squad_v1 | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:auto_translation",
"language_creators:auto_translation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:squad",
"region:us"
] | 2022-03-15T00:43:59+00:00 | {"annotations_creators": ["auto_translation"], "language_creators": ["auto_translation"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["squad"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Hebrew_Squad_v1", "languages": ["Hebrew", "he"], "licenses": ["cc-by-4-0"]} | 2022-08-04T03:59:05+00:00 |
9be08cd250913eb5d15f945d18aa485e01087d20 | PradeepReddyThathireddy/Inspiring_Content_Detection_Dataset | [
"region:us"
] | 2022-03-15T05:21:26+00:00 | {} | 2022-03-23T07:35:15+00:00 |
|
1161216f7e7185a4b2f4d0a4e0734dc7919dfa15 |
# Dataset Card for CoNLL2012 shared task data based on OntoNotes 5.0
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html)
- **Repository:** [Mendeley](https://data.mendeley.com/datasets/zmycy7t9h9)
- **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
OntoNotes v5.0 is the final version of OntoNotes corpus, and is a large-scale, multi-genre,
multilingual corpus manually annotated with syntactic, semantic and discourse information.
This dataset is the version of OntoNotes v5.0 extended and is used in the CoNLL-2012 shared task.
It includes v4 train/dev and v9 test data for English/Chinese/Arabic and corrected version v12 train/dev/test data (English only).
The source of the data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which seems to be the same as the official data, but users should use this dataset at their own risk.
See also summaries from paperwithcode, [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1)
For more detailed information about the dataset, such as annotation guidelines and tag sets, refer to the documents in the Mendeley repo mentioned above.
### Supported Tasks and Leaderboards
- [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes)
- [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes)
- ...
### Languages
V4 data for Arabic, Chinese, English, and V12 data for English
## Dataset Structure
### Data Instances
```
{'document_id': 'nw/wsj/23/wsj_2311',
 'sentences': [{'part_id': 0,
    'words': ['CONCORDE', 'trans-Atlantic', 'flights', 'are', '$', '2,400', 'to', 'Paris', 'and', '$', '3,200', 'to', 'London', '.'],
    'pos_tags': [25, 18, 27, 43, 2, 12, 17, 25, 11, 2, 12, 17, 25, 7],
    'parse_tree': '(TOP(S(NP (NNP CONCORDE) (JJ trans-Atlantic) (NNS flights) )(VP (VBP are) (NP(NP(NP ($ $) (CD 2,400) )(PP (IN to) (NP (NNP Paris) ))) (CC and) (NP(NP ($ $) (CD 3,200) )(PP (IN to) (NP (NNP London) ))))) (. .) ))',
    'predicate_lemmas': [None, None, None, 'be', None, None, None, None, None, None, None, None, None, None],
    'predicate_framenet_ids': [None, None, None, '01', None, None, None, None, None, None, None, None, None, None],
    'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None, None],
    'speaker': None,
    'named_entities': [7, 6, 0, 0, 0, 15, 0, 5, 0, 0, 15, 0, 5, 0],
    'srl_frames': [{'frames': ['B-ARG1', 'I-ARG1', 'I-ARG1', 'B-V', 'B-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'O'],
                    'verb': 'are'}],
    'coref_spans': []},
   {'part_id': 0,
    'words': ['In', 'a', 'Centennial', 'Journal', 'article', 'Oct.', '5', ',', 'the', 'fares', 'were', 'reversed', '.'],
    'pos_tags': [17, 13, 25, 25, 24, 25, 12, 4, 13, 27, 40, 42, 7],
    'parse_tree': '(TOP(S(PP (IN In) (NP (DT a) (NML (NNP Centennial) (NNP Journal) ) (NN article) ))(NP (NNP Oct.) (CD 5) ) (, ,) (NP (DT the) (NNS fares) )(VP (VBD were) (VP (VBN reversed) )) (. .) ))',
    'predicate_lemmas': [None, None, None, None, None, None, None, None, None, None, None, 'reverse', None],
    'predicate_framenet_ids': [None, None, None, None, None, None, None, None, None, None, None, '01', None],
    'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None],
    'speaker': None,
    'named_entities': [0, 0, 4, 22, 0, 12, 30, 0, 0, 0, 0, 0, 0],
    'srl_frames': [{'frames': ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'B-ARGM-TMP', 'I-ARGM-TMP', 'O', 'B-ARG1', 'I-ARG1', 'O', 'B-V', 'O'],
                    'verb': 'reversed'}],
    'coref_spans': []}]}
```
### Data Fields
- **`document_id`** (*`str`*): This is a variation on the document filename
- **`sentences`** (*`List[Dict]`*): All sentences of the same document are in a single example for the convenience of concatenating sentences.
Every element in `sentences` is a *`Dict`* composed of the following data fields:
- **`part_id`** (*`int`*) : Some files are divided into multiple parts numbered as 000, 001, 002, ... etc.
- **`words`** (*`List[str]`*) : The tokens of the sentence.
- **`pos_tags`** (*`List[ClassLabel]` or `List[str]`*) : This is the Penn-Treebank-style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with an XX tag. The verb is marked with just a VERB tag.
- tag set : Note tag sets below are founded by scanning all the data, and I found it seems to be a little bit different from officially stated tag sets. See official documents in the [Mendeley repo](https://data.mendeley.com/datasets/zmycy7t9h9)
- arabic : str. Because pos tag in Arabic is compounded and complex, hard to represent it by `ClassLabel`
- chinese v4 : `datasets.ClassLabel(num_classes=36, names=["X", "AD", "AS", "BA", "CC", "CD", "CS", "DEC", "DEG", "DER", "DEV", "DT", "ETC", "FW", "IJ", "INF", "JJ", "LB", "LC", "M", "MSP", "NN", "NR", "NT", "OD", "ON", "P", "PN", "PU", "SB", "SP", "URL", "VA", "VC", "VE", "VV",])`, where `X` is for pos tag missing
- english v4 : `datasets.ClassLabel(num_classes=49, names=["XX", "``", "$", "''", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`".
- english v12 : `datasets.ClassLabel(num_classes=51, names=["XX", "``", "$", "''", "*", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "VERB", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`".
- **`parse_tree`** (*`Optional[str]`*) : A serialized NLTK Tree representing the parse. It includes POS tags as pre-terminal nodes. When the parse information is missing, the parse will be `None`.
- **`predicate_lemmas`** (*`List[Optional[str]]`*) : The predicate lemma of the words for which we have semantic role information or word sense information. All other indices are `None`.
- **`predicate_framenet_ids`** (*`List[Optional[int]]`*) : The PropBank frameset ID of the lemmas in predicate_lemmas, or `None`.
- **`word_senses`** (*`List[Optional[float]]`*) : The word senses for the words in the sentence, or `None`. These are floats because the word sense can have values after the decimal, like 1.1.
- **`speaker`** (*`Optional[str]`*) : This is the speaker or author name where available. Mostly in Broadcast Conversation and Web Log data. When it is not available, it will be `None`.
- **`named_entities`** (*`List[ClassLabel]`*) : The BIO tags for named entities in the sentence.
- tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])`
- **`srl_frames`** (*`List[{"verb": str, "frames": List[str]}]`*) : A list of dictionaries, one per verb in the sentence, each containing the verb and its PropBank frame labels in BIO format.
- **`coref_spans`** (*`List[List[int]]`*) : The spans for entity mentions involved in coreference resolution within the sentence. Each element is a tuple composed of (cluster_id, start_index, end_index). Indices are inclusive.
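A minimal usage sketch based on the schema above; the feature-access path used to decode the integer labels is inferred from that schema rather than taken from official documentation:
```python
from datasets import load_dataset

# Other configs: "arabic_v4", "chinese_v4", "english_v12"
dataset = load_dataset("conll2012_ontonotesv5", "english_v4", split="train")

sentence = dataset[0]["sentences"][0]

# "sentences" is a list-of-dicts feature, so its inner features sit at index 0
sentence_features = dataset.features["sentences"][0]
pos_labels = sentence_features["pos_tags"].feature        # ClassLabel
ner_labels = sentence_features["named_entities"].feature  # ClassLabel

for word, pos, ner in zip(sentence["words"], sentence["pos_tags"], sentence["named_entities"]):
    print(word, pos_labels.int2str(pos), ner_labels.int2str(ner))
```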
### Data Splits
Each dataset (arabic_v4, chinese_v4, english_v4, english_v12) has 3 splits: _train_, _validation_, and _test_
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{pradhan-etal-2013-towards,
title = "Towards Robust Linguistic Analysis using {O}nto{N}otes",
author = {Pradhan, Sameer and
Moschitti, Alessandro and
Xue, Nianwen and
Ng, Hwee Tou and
Bj{\"o}rkelund, Anders and
Uryupina, Olga and
Zhang, Yuchen and
Zhong, Zhi},
booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-3516",
pages = "143--152",
}
```
### Contributions
Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset. | conll2012_ontonotesv5 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"task_ids:coreference-resolution",
"task_ids:parsing",
"task_ids:lemmatization",
"task_ids:word-sense-disambiguation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ar",
"language:en",
"language:zh",
"license:cc-by-nc-nd-4.0",
"semantic-role-labeling",
"region:us"
] | 2022-03-15T10:48:28+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ar", "en", "zh"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition", "part-of-speech", "coreference-resolution", "parsing", "lemmatization", "word-sense-disambiguation"], "paperswithcode_id": "ontonotes-5-0", "pretty_name": "CoNLL2012 shared task data based on OntoNotes 5.0", "tags": ["semantic-role-labeling"], "dataset_info": [{"config_name": "english_v4", "features": [{"name": "document_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "part_id", "dtype": "int32"}, {"name": "words", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "XX", "1": "``", "2": "$", "3": "''", "4": ",", "5": "-LRB-", "6": "-RRB-", "7": ".", "8": ":", "9": "ADD", "10": "AFX", "11": "CC", "12": "CD", "13": "DT", "14": "EX", "15": "FW", "16": "HYPH", "17": "IN", "18": "JJ", "19": "JJR", "20": "JJS", "21": "LS", "22": "MD", "23": "NFP", "24": "NN", "25": "NNP", "26": "NNPS", "27": "NNS", "28": "PDT", "29": "POS", "30": "PRP", "31": "PRP$", "32": "RB", "33": "RBR", "34": "RBS", "35": "RP", "36": "SYM", "37": "TO", "38": "UH", "39": "VB", "40": "VBD", "41": "VBG", "42": "VBN", "43": "VBP", "44": "VBZ", "45": "WDT", "46": "WP", "47": "WP$", "48": "WRB"}}}}, {"name": "parse_tree", "dtype": "string"}, {"name": "predicate_lemmas", "sequence": "string"}, {"name": "predicate_framenet_ids", "sequence": "string"}, {"name": "word_senses", "sequence": "float32"}, {"name": "speaker", "dtype": "string"}, {"name": "named_entities", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PERSON", "2": "I-PERSON", "3": "B-NORP", "4": "I-NORP", "5": "B-FAC", "6": "I-FAC", "7": "B-ORG", "8": "I-ORG", "9": "B-GPE", "10": "I-GPE", "11": "B-LOC", "12": "I-LOC", "13": "B-PRODUCT", "14": "I-PRODUCT", "15": "B-DATE", "16": "I-DATE", "17": "B-TIME", "18": "I-TIME", "19": "B-PERCENT", "20": "I-PERCENT", "21": "B-MONEY", "22": "I-MONEY", "23": "B-QUANTITY", "24": "I-QUANTITY", "25": "B-ORDINAL", "26": "I-ORDINAL", "27": "B-CARDINAL", "28": "I-CARDINAL", "29": "B-EVENT", "30": "I-EVENT", "31": "B-WORK_OF_ART", "32": "I-WORK_OF_ART", "33": "B-LAW", "34": "I-LAW", "35": "B-LANGUAGE", "36": "I-LANGUAGE"}}}}, {"name": "srl_frames", "list": [{"name": "verb", "dtype": "string"}, {"name": "frames", "sequence": "string"}]}, {"name": "coref_spans", "sequence": {"sequence": "int32", "length": 3}}]}], "splits": [{"name": "train", "num_bytes": 112246121, "num_examples": 1940}, {"name": "validation", "num_bytes": 14116925, "num_examples": 222}, {"name": "test", "num_bytes": 14709044, "num_examples": 222}], "download_size": 193644139, "dataset_size": 141072090}, {"config_name": "chinese_v4", "features": [{"name": "document_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "part_id", "dtype": "int32"}, {"name": "words", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "X", "1": "AD", "2": "AS", "3": "BA", "4": "CC", "5": "CD", "6": "CS", "7": "DEC", "8": "DEG", "9": "DER", "10": "DEV", "11": "DT", "12": "ETC", "13": "FW", "14": "IJ", "15": "INF", "16": "JJ", "17": "LB", "18": "LC", "19": "M", "20": "MSP", "21": "NN", "22": "NR", "23": "NT", "24": "OD", "25": "ON", "26": "P", "27": "PN", "28": "PU", "29": "SB", "30": "SP", "31": "URL", "32": "VA", "33": "VC", 
"34": "VE", "35": "VV"}}}}, {"name": "parse_tree", "dtype": "string"}, {"name": "predicate_lemmas", "sequence": "string"}, {"name": "predicate_framenet_ids", "sequence": "string"}, {"name": "word_senses", "sequence": "float32"}, {"name": "speaker", "dtype": "string"}, {"name": "named_entities", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PERSON", "2": "I-PERSON", "3": "B-NORP", "4": "I-NORP", "5": "B-FAC", "6": "I-FAC", "7": "B-ORG", "8": "I-ORG", "9": "B-GPE", "10": "I-GPE", "11": "B-LOC", "12": "I-LOC", "13": "B-PRODUCT", "14": "I-PRODUCT", "15": "B-DATE", "16": "I-DATE", "17": "B-TIME", "18": "I-TIME", "19": "B-PERCENT", "20": "I-PERCENT", "21": "B-MONEY", "22": "I-MONEY", "23": "B-QUANTITY", "24": "I-QUANTITY", "25": "B-ORDINAL", "26": "I-ORDINAL", "27": "B-CARDINAL", "28": "I-CARDINAL", "29": "B-EVENT", "30": "I-EVENT", "31": "B-WORK_OF_ART", "32": "I-WORK_OF_ART", "33": "B-LAW", "34": "I-LAW", "35": "B-LANGUAGE", "36": "I-LANGUAGE"}}}}, {"name": "srl_frames", "list": [{"name": "verb", "dtype": "string"}, {"name": "frames", "sequence": "string"}]}, {"name": "coref_spans", "sequence": {"sequence": "int32", "length": 3}}]}], "splits": [{"name": "train", "num_bytes": 77195698, "num_examples": 1391}, {"name": "validation", "num_bytes": 10828169, "num_examples": 172}, {"name": "test", "num_bytes": 9585138, "num_examples": 166}], "download_size": 193644139, "dataset_size": 97609005}, {"config_name": "arabic_v4", "features": [{"name": "document_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "part_id", "dtype": "int32"}, {"name": "words", "sequence": "string"}, {"name": "pos_tags", "sequence": "string"}, {"name": "parse_tree", "dtype": "string"}, {"name": "predicate_lemmas", "sequence": "string"}, {"name": "predicate_framenet_ids", "sequence": "string"}, {"name": "word_senses", "sequence": "float32"}, {"name": "speaker", "dtype": "string"}, {"name": "named_entities", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PERSON", "2": "I-PERSON", "3": "B-NORP", "4": "I-NORP", "5": "B-FAC", "6": "I-FAC", "7": "B-ORG", "8": "I-ORG", "9": "B-GPE", "10": "I-GPE", "11": "B-LOC", "12": "I-LOC", "13": "B-PRODUCT", "14": "I-PRODUCT", "15": "B-DATE", "16": "I-DATE", "17": "B-TIME", "18": "I-TIME", "19": "B-PERCENT", "20": "I-PERCENT", "21": "B-MONEY", "22": "I-MONEY", "23": "B-QUANTITY", "24": "I-QUANTITY", "25": "B-ORDINAL", "26": "I-ORDINAL", "27": "B-CARDINAL", "28": "I-CARDINAL", "29": "B-EVENT", "30": "I-EVENT", "31": "B-WORK_OF_ART", "32": "I-WORK_OF_ART", "33": "B-LAW", "34": "I-LAW", "35": "B-LANGUAGE", "36": "I-LANGUAGE"}}}}, {"name": "srl_frames", "list": [{"name": "verb", "dtype": "string"}, {"name": "frames", "sequence": "string"}]}, {"name": "coref_spans", "sequence": {"sequence": "int32", "length": 3}}]}], "splits": [{"name": "train", "num_bytes": 42017761, "num_examples": 359}, {"name": "validation", "num_bytes": 4859292, "num_examples": 44}, {"name": "test", "num_bytes": 4900664, "num_examples": 44}], "download_size": 193644139, "dataset_size": 51777717}, {"config_name": "english_v12", "features": [{"name": "document_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "part_id", "dtype": "int32"}, {"name": "words", "sequence": "string"}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "XX", "1": "``", "2": "$", "3": "''", "4": "*", "5": ",", "6": "-LRB-", "7": "-RRB-", "8": ".", "9": ":", "10": "ADD", "11": "AFX", "12": "CC", "13": "CD", "14": "DT", "15": "EX", "16": "FW", "17": "HYPH", "18": "IN", "19": "JJ", "20": "JJR", 
"21": "JJS", "22": "LS", "23": "MD", "24": "NFP", "25": "NN", "26": "NNP", "27": "NNPS", "28": "NNS", "29": "PDT", "30": "POS", "31": "PRP", "32": "PRP$", "33": "RB", "34": "RBR", "35": "RBS", "36": "RP", "37": "SYM", "38": "TO", "39": "UH", "40": "VB", "41": "VBD", "42": "VBG", "43": "VBN", "44": "VBP", "45": "VBZ", "46": "VERB", "47": "WDT", "48": "WP", "49": "WP$", "50": "WRB"}}}}, {"name": "parse_tree", "dtype": "string"}, {"name": "predicate_lemmas", "sequence": "string"}, {"name": "predicate_framenet_ids", "sequence": "string"}, {"name": "word_senses", "sequence": "float32"}, {"name": "speaker", "dtype": "string"}, {"name": "named_entities", "sequence": {"class_label": {"names": {"0": "O", "1": "B-PERSON", "2": "I-PERSON", "3": "B-NORP", "4": "I-NORP", "5": "B-FAC", "6": "I-FAC", "7": "B-ORG", "8": "I-ORG", "9": "B-GPE", "10": "I-GPE", "11": "B-LOC", "12": "I-LOC", "13": "B-PRODUCT", "14": "I-PRODUCT", "15": "B-DATE", "16": "I-DATE", "17": "B-TIME", "18": "I-TIME", "19": "B-PERCENT", "20": "I-PERCENT", "21": "B-MONEY", "22": "I-MONEY", "23": "B-QUANTITY", "24": "I-QUANTITY", "25": "B-ORDINAL", "26": "I-ORDINAL", "27": "B-CARDINAL", "28": "I-CARDINAL", "29": "B-EVENT", "30": "I-EVENT", "31": "B-WORK_OF_ART", "32": "I-WORK_OF_ART", "33": "B-LAW", "34": "I-LAW", "35": "B-LANGUAGE", "36": "I-LANGUAGE"}}}}, {"name": "srl_frames", "list": [{"name": "verb", "dtype": "string"}, {"name": "frames", "sequence": "string"}]}, {"name": "coref_spans", "sequence": {"sequence": "int32", "length": 3}}]}], "splits": [{"name": "train", "num_bytes": 174173192, "num_examples": 10539}, {"name": "validation", "num_bytes": 24264804, "num_examples": 1370}, {"name": "test", "num_bytes": 18254144, "num_examples": 1200}], "download_size": 193644139, "dataset_size": 216692140}]} | 2024-01-18T09:34:57+00:00 |
6de5f4fa6a044e79302def646a39bf2be621dac4 | anjandash/java-8m-methods-v2 | [
"multilinguality:monolingual",
"license:mit",
"region:us"
] | 2022-03-15T11:01:14+00:00 | {"language": ["java"], "license": ["mit"], "multilinguality": ["monolingual"], "pretty_name": ["java-8m-methods-v2"]} | 2022-07-01T19:31:57+00:00 |
|
80ce985b32bd618df18f86436893249c60add630 | # AutoNLP Dataset for project: tweet-sentiment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project tweet-sentiment.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "I am going to see how long I can do this for.",
"target": 8
},
{
"text": "@anitabora yeah, right. What if our politicians start using uploading their pics, lots of inside sto[...]",
"target": 8
}
]
```
### Data Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=13, names=['anger', 'boredom', 'empty', 'enthusiasm', 'fun', 'happiness', 'hate', 'love', 'neutral', 'relief', 'sadness', 'surprise', 'worry'], id=None)"
}
```
### Data Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 31995 |
| valid | 8005 |
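A minimal loading sketch (this assumes the repo is accessible to your account, since AutoNLP data repos are often private):
```python
from datasets import load_dataset

ds = load_dataset("victor/autonlp-data-tweet-sentiment", split="train")

# Decode the integer "target" into its emotion name via the ClassLabel feature
label = ds.features["target"]
print(ds[0]["text"], "->", label.int2str(ds[0]["target"]))
```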
| victor/autonlp-data-tweet-sentiment | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-03-15T11:10:29+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-10-25T09:03:17+00:00 |
8990a6df925bf53cd9c864275703193cbfe85715 | hazal/Turkish-Biomedical-corpus-trM | [
"language:tr",
"region:us"
] | 2022-03-15T12:01:31+00:00 | {"language": ["tr"]} | 2022-08-10T10:13:22+00:00 |
|
a6f9aa7bda62c328bd642d32316c63e3387210ec | # BWNS: The Baha'i World News Service dataset.
BWNS articles from 2000 to 2022.
| Dayyan/bwns | [
"region:us"
] | 2022-03-15T19:45:05+00:00 | {} | 2022-03-17T14:41:53+00:00 |
44356ea2ed95383472092db0382ebdab85917fa3 | jeffboudier/testing | [
"license:afl-3.0",
"region:us"
] | 2022-03-15T21:31:32+00:00 | {"license": "afl-3.0"} | 2022-03-15T21:31:32+00:00 |
|
689f949a36ec83a2a6f14e1fc4a52cf22a704d56 | # DISCO: Diachronic Spanish Sonnet Corpus
[](https://zenodo.org/badge/latestdoi/103841064)
The Diachronic Spanish Sonnet Corpus (DISCO) contains sonnets in Spanish from the 15th to the 20th centuries (4,303 sonnets by 1,215 authors from 22 different countries), distributed here in CSV format. It includes well-known authors, but also less canonized ones.
This is a CSV compilation taken from the plain-text corpus v4 published on GitHub at https://github.com/pruizf/disco/tree/v4. It includes the title, author, age, and text metadata. A loading sketch follows below.
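A hedged sketch of loading the compilation with pandas; the file name and exact column names are assumptions, so check the repository files first:
```python
import pandas as pd

# Hypothetical file name; the columns follow the metadata described above
df = pd.read_csv("disco_poetry_spanish.csv")
print(df[["title", "author", "age"]].head())
```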
| jorge-henao/disco_poetry_spanish | [
"region:us"
] | 2022-03-16T03:42:59+00:00 | {} | 2022-03-17T03:19:06+00:00 |
fbeac939f336b47d75f06167cf339f6706fbafdc |
# Dataset Card for enwiki_el
## Dataset Description
- Repository: [enwiki_el](https://github.com/GaaH/enwiki_el)
- Point of Contact: [Gaëtan Caillaut](mailto://g.caillaut@brgm.fr)
### Dataset Summary
It is intended to be used to train Entity Linking (EL) systems. Links in Wikipedia articles are used to detect named entities.
### Languages
- English
## Dataset Structure
```
{
"title": "Title of the page",
"qid": "QID of the corresponding Wikidata entity",
"words": ["tokens"],
"wikipedia": ["Wikipedia description of each entity"],
"labels": ["NER labels"],
"titles": ["Wikipedia title of each entity"],
"qids": ["QID of each entity"],
}
```
The `words` field contains the article’s text split on whitespace. The other fields are lists with the same length as `words` and contain data only when the respective token in `words` is the __start of an entity__. For instance, if the _i-th_ token in `words` is an entity, then the _i-th_ element of `wikipedia` contains a description, extracted from Wikipedia, of this entity. The same applies to the other fields. If the entity spans multiple words, then only the index of the first word contains data.
The only exception is the `labels` field, which is used to delimit entities. It uses the IOB encoding: if the token is not part of an entity, the label is `"O"`; if it is the first word of a multi-word entity, the label is `"B"`; otherwise the label is `"I"`. | gcaillaut/enwiki_el | [
"task_categories:other",
"annotations_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:wtfpl",
"region:us"
] | 2022-03-16T10:16:09+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": [], "language": ["en-EN"], "license": ["wtfpl"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "test"} | 2022-07-04T11:36:35+00:00 |
1b1f1f2a456fc59a8c9260f800d7098a34183419 |
Retrieving the 50th example from the train set:
```
> print(dataset['train']['sentence1'][50])
Muž hrá na gitare.
> print(dataset['train']['sentence2'][50])
Chlapec hrá na gitare.
> print(dataset['train']['similarity_score'][50])
3.200000047683716
```
For score explanation see [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt).
| crabz/stsb-sk | [
"task_ids:semantic-similarity-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|stsb_multi_mt",
"language:sk",
"license:unknown",
"region:us"
] | 2022-03-16T10:20:28+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["sk"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|stsb_multi_mt"], "task_categories": ["text-scoring"], "task_ids": ["semantic-similarity-scoring"], "pretty_name": "stsb-sk", "language_bcp47": ["sk-SK"]} | 2022-10-23T04:13:41+00:00 |
45c0c4a3404059175269c9dacfe00cb88b3a5a89 |
# Dataset Card for NMSQA(Natural Multi-speaker Spoken Question Answering)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage:
https://github.com/DanielLin94144/DUAL-textless-SQA
- Repository:
https://github.com/DanielLin94144/DUAL-textless-SQA
- Paper:
https://arxiv.org/abs/2203.04911
- Leaderboard:
- Point of Contact:
Download audio data: [https://huggingface.co/datasets/voidful/NMSQA/resolve/main/nmsqa_audio.tar.gz](https://huggingface.co/datasets/voidful/NMSQA/resolve/main/nmsqa_audio.tar.gz)
Unzip audio data: `tar -xf nmsqa_audio.tar.gz`
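A hedged loading sketch; the split name and the on-disk layout of the extracted audio are assumptions, so adjust to what you actually see after unpacking:
```python
from datasets import load_dataset
import soundfile as sf

ds = load_dataset("voidful/NMSQA", split="dev")  # splits: train / dev / test (see below)
sample = ds[0]

# Audio paths in the metadata are assumed to be relative to the extracted archive
audio, sr = sf.read("nmsqa_audio/" + sample["question_audio_path"])
print(sample["question"], audio.shape, sr)
```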
### Dataset Summary
The Natural Multi-speaker Spoken Question Answering (NMSQA) dataset is designed for the task of textless spoken question answering. It is based on the SQuAD dataset and contains spoken questions and passages. The dataset includes the original text, transcriptions, and audio files of the spoken content. This dataset is created to evaluate the performance of models on textless spoken question answering tasks.
### Supported Tasks and Leaderboards
The primary task supported by this dataset is textless spoken question answering, where the goal is to answer questions based on spoken passages without relying on textual information. The dataset can also be used for automatic speech recognition tasks.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
Each instance in the dataset contains the following fields:
- id: Unique identifier for the instance
- title: The title of the passage
- context: The passage text
- question: The question text
- answers: The answer annotations, containing:
  - answer_start: The start index of the answer in the text
  - audio_full_answer_end: The end position of the audio answer in seconds
  - audio_full_answer_start: The start position of the audio answer in seconds
  - audio_full_neg_answer_end: The end position of the audio answer in seconds for an incorrect answer with the same words
  - audio_full_neg_answer_start: The start position of the audio answer in seconds for an incorrect answer with the same words
  - audio_segment_answer_end: The end position of the audio answer in seconds for the segment
  - audio_segment_answer_start: The start position of the audio answer in seconds for the segment
  - text: The answer text
- content_segment_audio_path: The audio path for the content segment
- content_full_audio_path: The complete audio path for the content
- content_audio_sampling_rate: The audio sampling rate
- content_audio_speaker: The audio speaker
- content_segment_text: The segment text of the content
- content_segment_normalized_text: The normalized text for generating audio
- question_audio_path: The audio path for the question
- question_audio_sampling_rate: The audio sampling rate
- question_audio_speaker: The audio speaker
- question_normalized_text: The normalized text for generating audio
### Data Fields
The dataset includes the following data fields:
- id
- title
- context
- question
- answers
- content_segment_audio_path
- content_full_audio_path
- content_audio_sampling_rate
- content_audio_speaker
- content_segment_text
- content_segment_normalized_text
- question_audio_path
- question_audio_sampling_rate
- question_audio_speaker
- question_normalized_text
### Data Splits
The dataset is split into train, dev, and test sets.
## Dataset Creation
### Curation Rationale
The NMSQA dataset is created to address the challenge of textless spoken question answering, where the model must answer questions based on spoken passages without relying on textual information.
### Source Data
The NMSQA dataset is based on the SQuAD dataset, with spoken questions and passages created from the original text data.
#### Initial Data Collection and Normalization
The initial data collection involved converting the original SQuAD dataset's text-based questions and passages into spoken audio files. The text was first normalized, and then audio files were generated using text-to-speech methods.
#### Who are the source language producers?
The source language producers are the creators of the SQuAD dataset and the researchers who generated the spoken audio files for the NMSQA dataset.
### Annotations
#### Annotation process
The annotations for the NMSQA dataset are derived from the original SQuAD dataset. Additional annotations, such as audio start and end positions for correct and incorrect answers, as well as audio file paths and speaker information, are added by the dataset creators.
#### Who are the annotators?
The annotators for the NMSQA dataset are the creators of the SQuAD dataset and the researchers who generated the spoken audio files and additional annotations for the NMSQA dataset.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
The NMSQA dataset contributes to the development and evaluation of models for textless spoken question answering tasks, which can lead to advancements in natural language processing and automatic speech recognition. Applications of these technologies can improve accessibility and convenience in various domains, such as virtual assistants, customer service, and voice-controlled devices.
### Discussion of Biases
The dataset inherits potential biases from the original SQuAD dataset, which may include biases in the selection of passages, questions, and answers. Additionally, biases may be introduced in the text-to-speech process and the choice of speakers used to generate the spoken audio files.
### Other Known Limitations
As the dataset is based on the SQuAD dataset, it shares the same limitations, including the fact that it is limited to the English language and mainly focuses on factual questions. Furthermore, the dataset may not cover a wide range of accents, dialects, or speaking styles.
## Additional Information
### Dataset Curators
The NMSQA dataset is curated by Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-Wen Yang, Hsuan-Jui Chen, Shang-Wen Li, Abdelrahman Mohamed, Hung-Yi Lee, and Lin-Shan Lee.
### Licensing Information
The licensing information for the dataset is not explicitly mentioned.
### Citation Information
```bibtex
@article{lin2022dual,
title={DUAL: Textless Spoken Question Answering with Speech Discrete Unit Adaptive Learning},
author={Lin, Guan-Ting and Chuang, Yung-Sung and Chung, Ho-Lam and Yang, Shu-wen and Chen, Hsuan-Jui and Li, Shang-Wen and Mohamed, Abdelrahman and Lee, Hung-yi and Lee, Lin-shan},
journal={arXiv preprint arXiv:2203.04911},
year={2022}
}
```
### Contributions
Thanks to [@voidful](https://github.com/voidful) for adding this dataset. | voidful/NMSQA | [
"task_categories:question-answering",
"task_categories:automatic-speech-recognition",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"speech-recognition",
"arxiv:2203.04911",
"region:us"
] | 2022-03-16T16:03:42+00:00 | {"annotations_creators": ["crowdsourced", "machine-generated"], "language_creators": ["expert-generated", "machine-generated", "crowdsourced"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["question-answering", "automatic-speech-recognition"], "task_ids": ["abstractive-qa"], "pretty_name": "NMSQA", "tags": ["speech-recognition"]} | 2023-04-04T03:46:23+00:00 |
f2614cab4939062f7b9313470f297dbc7f26cf66 | LongNN/news_sum | [
"license:gpl-3.0",
"region:us"
] | 2022-03-16T17:07:07+00:00 | {"license": "gpl-3.0"} | 2022-03-16T17:14:08+00:00 |
|
1190b855bc90372a9571b3c59847f42d1675a2fe | # AutoNLP Dataset for project: devign_raw_test
## Dataset Description
This dataset has been automatically processed by AutoNLP for project devign_raw_test.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "void ff_avg_h264_qpel16_mc32_msa ( uint8_t * dst , const uint8_t * src , ptrdiff_t stride ) { avc_lu[...]",
"target": 0
},
{
"text": "static void sd_cardchange ( void * opaque , bool load ) { SDState * sd = opaque ; qemu_set_irq ( sd [...]",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['0', '1'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 21188 |
| valid | 5298 |
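As with other AutoNLP exports, the binary `target` can be decoded through its `ClassLabel` feature; a hedged sketch (repo access permitting):
```python
from datasets import load_dataset

ds = load_dataset("nimaster/autonlp-data-devign_raw_test", split="train")

# names are ['0', '1']; in Devign, 1 conventionally marks a vulnerable function (assumption)
target_feature = ds.features["target"]
print(target_feature.names, "->", ds[0]["target"])
```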
| nimaster/autonlp-data-devign_raw_test | [
"task_categories:text-classification",
"region:us"
] | 2022-03-17T13:06:22+00:00 | {"task_categories": ["text-classification"], "languages": ["en"]} | 2022-03-17T13:07:49+00:00 |
eed50a3535a938b051cb291cee7579376f7a7367 | anthonny/hate_speech | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:unknown",
"region:us"
] | 2022-03-17T13:50:00+00:00 | {"annotations_creators": ["found"], "language_creators": ["crowdsourced"], "language": ["es-EC"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["semantic-similarity-classification"], "pretty_name": "hate speech"} | 2022-10-25T09:03:21+00:00 |
|
43aa565bcc88b801013e7a3882eee40713e7c725 | **X-SCITLDR**: Cross-Lingual Extreme Summarization of Scholarly Documents
# X-SCITLDR
The number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Consequently, recent work on applying text mining technologies for scholarly publications has investigated the application of automatic text summarization technologies, including extreme summarization, for this domain. However, previous work has concentrated only on monolingual settings, primarily in English. In this paper, we fill this research gap and present an abstractive cross-lingual summarization dataset for four different languages in the scholarly domain, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage summarize and translate approach and a direct cross-lingual model. We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios.
# Languages
- German
- Italian
- Chinese
- Japanese
# Related
- [Paper](https://dl.acm.org/doi/abs/10.1145/3529372.3530938)
- [Code](https://github.com/sobamchan/xscitldr/)
- [Contact](mailto:sotaro.takeshita@uni-mannheim.de)
# Citation Information
```
@inproceedings{takeshita-etal-2022-xsci,
author = {Takeshita, Sotaro and Green, Tommaso and Friedrich, Niklas and Eckert, Kai and Ponzetto, Simone Paolo},
title = {X-SCITLDR: Cross-Lingual Extreme Summarization of Scholarly Documents},
year = {2022},
isbn = {9781450393454},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3529372.3530938},
doi = {10.1145/3529372.3530938},
abstract = {The number of scientific publications nowadays is rapidly increasing, causing information overload for researchers and making it hard for scholars to keep up to date with current trends and lines of work. Consequently, recent work on applying text mining technologies for scholarly publications has investigated the application of automatic text summarization technologies, including extreme summarization, for this domain. However, previous work has concentrated only on monolingual settings, primarily in English. In this paper, we fill this research gap and present an abstractive cross-lingual summarization dataset for four different languages in the scholarly domain, which enables us to train and evaluate models that process English papers and generate summaries in German, Italian, Chinese and Japanese. We present our new X-SCITLDR dataset for multilingual summarization and thoroughly benchmark different models based on a state-of-the-art multilingual pre-trained model, including a two-stage 'summarize and translate' approach and a direct cross-lingual model. We additionally explore the benefits of intermediate-stage training using English monolingual summarization and machine translation as intermediate tasks and analyze performance in zero- and few-shot scenarios.},
booktitle = {Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries},
articleno = {4},
numpages = {12},
keywords = {scholarly document processing, summarization, multilinguality},
location = {Cologne, Germany},
series = {JCDL '22}
}
``` | umanlp/xscitldr | [
"region:us"
] | 2022-03-17T14:30:16+00:00 | {} | 2022-07-04T12:49:25+00:00 |
f5b8eff44796cdd3a3c9ebb77383051adae4abc7 |
Kaggle datasets | ttxy/kaggle | [
"license:apache-2.0",
"region:us"
] | 2022-03-17T15:02:27+00:00 | {"license": "apache-2.0"} | 2022-03-17T16:00:50+00:00 |
4d1d66c78bfe1ad870fb21f7e7837103b43c42c7 | - `tweet_disaster`, 8562 | ttxy/nlp | [
"region:us"
] | 2022-03-17T15:59:17+00:00 | {} | 2022-07-24T04:58:39+00:00 |
682cc4c36e60a556576b92370f918ed4513f9648 | mrm8488/test2 | [
"license:wtfpl",
"region:us"
] | 2022-03-17T18:40:22+00:00 | {"license": "wtfpl"} | 2022-03-17T18:40:22+00:00 |
|
adb147bd12398f9d56a652005f4895c6b7100ebe | Texto perteneciente a todos los BOE (Boletin Oficial del Estado, España) desde 13 de enero del 2020 al 16 de febrero del 2022.
Separador '|'
Columnas: año | mes | dia | texto del BOE | tamaño | nombre pdf del BOE | Paulosdeanllons/ODS_BOE | [
"license:afl-3.0",
"region:us"
] | 2022-03-18T08:48:15+00:00 | {"license": "afl-3.0"} | 2022-03-23T13:52:31+00:00 |
2e1dc06ac448fac1fe3c032a8919735353d80f58 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| malteos/test-ds | [
"task_categories:text-retrieval",
"multilinguality:monolingual",
"size_categories:unknown",
"region:us"
] | 2022-03-18T10:02:26+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en-US"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": [], "pretty_name": "test ds"} | 2022-10-25T09:03:23+00:00 |
d62cc9c9bad06319b45ec81ba7d840fd1bc63894 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| malteos/test2 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-03-18T10:18:42+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["conditional-text-generation"], "task_ids": ["summarization"], "paperswithcode_id": "cnn-daily-mail-1", "pretty_name": "CNN / Daily Mail"} | 2022-10-23T04:14:36+00:00 |
f7a3fbcdaec21897a76a04cf78ecd94149444327 | This contains crawled ecommerce data from Common Crawl
| elena-soare/crawled-ecommerce | [
"region:us"
] | 2022-03-18T11:19:43+00:00 | {} | 2022-04-04T09:35:10+00:00 |
6dfbbdc8bf9da9500f8eaa2eeb13f150186941d0 |
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# IWN Wordlists
[](https://creativecommons.org/licenses/by-nc-sa/4.0/) [](https://twitter.com/cfiltnlp) [](https://twitter.com/PeopleCentredAI)
We provide the unique word list from the [IndoWordnet (IWN)](https://www.cfilt.iitb.ac.in/indowordnet/) knowledge base.
## Usage
```python
from datasets import load_dataset
language = "hindi"  # supported languages: assamese, bengali, bodo, gujarati, hindi, kannada, kashmiri, konkani, malayalam, manipuri, marathi, meitei, nepali, oriya, punjabi, sanskrit, tamil, telugu, urdu
words = load_dataset("cfilt/iwn_wordlists", language)
word_list = words["train"]["word"]
```
## Citation
```latex
@inproceedings{bhattacharyya2010indowordnet,
title={IndoWordNet},
author={Bhattacharyya, Pushpak},
booktitle={Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
year={2010}
}
``` | cfilt/iwn_wordlists | [
"task_categories:token-classification",
"annotations_creators:Shivam Mhaskar, Diptesh Kanojia",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:as",
"language:bn",
"language:mni",
"language:gu",
"language:hi",
"language:kn",
"language:ks",
"language:kok",
"language:ml",
"language:mr",
"language:or",
"language:ne",
"language:pa",
"language:sa",
"language:ta",
"language:te",
"language:ur",
"license:cc-by-nc-sa-4.0",
"abbreviation-detection",
"region:us"
] | 2022-03-18T11:56:41+00:00 | {"annotations_creators": ["Shivam Mhaskar, Diptesh Kanojia"], "language_creators": ["found"], "language": ["as", "bn", "mni", "gu", "hi", "kn", "ks", "kok", "ml", "mr", "or", "ne", "pa", "sa", "ta", "te", "ur"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": [], "paperswithcode_id": "plod-filtered", "pretty_name": "PLOD: An Abbreviation Detection Dataset", "tags": ["abbreviation-detection"]} | 2022-11-23T12:06:02+00:00 |
329f8440b131659c97299b2a4cdf38779082e14f | # Parallel Sentences for Spanish language
This repository contains parallel sentences (English sentences paired with their Spanish translations) in a simple tsv.gz format:
```
english_sentences\tsentence_in_spanish_language
```
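A hedged sketch of reading one of these files directly; the exact file name is an assumption, so check the repository listing first:
```python
import gzip

# Hypothetical file name, following the format described above
with gzip.open("parallel-sentences-en-es.tsv.gz", "rt", encoding="utf-8") as f:
    for line in f:
        english, spanish = line.rstrip("\n").split("\t", 1)
        print(english, "->", spanish)
        break  # just show the first pair
```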
## Usage
These sentences can be used to train multilingual sentence embedding models. For more details, you can check out [SBERT.net - Multilingual-Model](https://www.sbert.net/examples/training/multilingual/README.html) | hackathon-pln-es/parallel-sentences | [
"region:us"
] | 2022-03-18T18:08:37+00:00 | {} | 2022-04-02T17:38:29+00:00 |
f888d2a1df5a5f11cde2832710cc0d9e59b3b132 | ## Generation procedure
The dataset was constructed using documents from [the Pile](https://pile.eleuther.ai/) scored using the [LDNOOBW](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) wordlist (a score is the number of curse words per character).
The procedure was the following:
1. The first half of the data are 100k documents randomly sampled from the Pile and assigned scores
2. The second half are the most profane documents from the Pile, obtained by scoring the whole Pile and choosing the 100k documents with the highest scores
3. Then, the dataset was shuffled and a 9:1 train-test split was done
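A hedged sketch of the scoring described above; the tokenization and normalization details are assumptions, not the exact procedure used:
```python
import re

def load_wordlist(path: str) -> set:
    # One lowercase entry per line, as in the LDNOOBW list files
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def curse_score(document: str, wordlist: set) -> float:
    """Curse-word occurrences per character of the document."""
    if not document:
        return 0.0
    tokens = re.findall(r"\w+", document.lower())
    hits = sum(token in wordlist for token in tokens)
    return hits / len(document)
```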
## Basic stats
The average and median scores are 0.013 and 0.019, respectively. | tomekkorbak/pile-curse-full | [
"region:us"
] | 2022-03-18T23:17:53+00:00 | {} | 2022-03-23T20:05:15+00:00 |
9d4d238fbdad8ccfc9058cdcda552527f54bca2a |
# Dataset Card for CCMatrix v1
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://opus.nlpl.eu/CCMatrix.php
- **Repository:** None
- **Paper:** https://arxiv.org/abs/1911.04944
### Dataset Summary
This corpus has been extracted from web crawls using the margin-based bitext mining techniques described at https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix.
* 90 languages, 1,197 bitexts
* total number of files: 90
* total number of tokens: 112.14G
* total number of sentence fragments: 7.37G
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Configs are generated for all language pairs in both directions.
You can find the valid pairs in the Homepage section of the Dataset Description: https://opus.nlpl.eu/CCMatrix.php
E.g.
```
from datasets import load_dataset
dataset = load_dataset("yhavinga/ccmatrix", "en-nl", streaming=True)
```
This will open the `en-nl` dataset in streaming mode. Without streaming, download and prepare will take tens of minutes.
You can inspect elements with:
```
print(next(iter(dataset['train'])))
{'id': 0, 'score': 1.2499677, 'translation': {'en': 'They come from all parts of Egypt, just like they will at the day of His coming.', 'nl': 'Zij kwamen uit alle delen van Egypte, evenals zij op de dag van Zijn komst zullen doen.'}}
```
## Dataset Structure
### Data Instances
For example:
```json
{
"id": 1,
"score": 1.2498379,
"translation": {
"nl": "En we moeten elke waarheid vals noemen die niet minstens door een lach vergezeld ging.”",
"en": "And we should call every truth false which was not accompanied by at least one laugh.”"
}
}
```
### Data Fields
Each example contains an integer id starting with 0, a score, and a translation dictionary with the language 1 and
language 2 texts.
### Data Splits
Only a `train` split is provided.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
IMPORTANT: Please cite references [2] and [3] if you use this data.
1. **[CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data](https://arxiv.org/abs/1911.00359)**
by *Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Jouli
and Edouard Grave*.
2. **[CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB](https://arxiv.org/abs/1911.04944)** by *Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin*.
3. **[Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125)** by *Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines,
Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky,
Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin.*
This HuggingFace CCMatrix dataset is a wrapper around the service and files prepared and hosted by OPUS:
* **[Parallel Data, Tools and Interfaces in OPUS](https://www.aclweb.org/anthology/L12-1246/)** by *Jörg Tiedemann*.
### Contributions
| yhavinga/ccmatrix | [
"task_categories:text2text-generation",
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:am",
"language:ar",
"language:ast",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:ceb",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:ha",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:ig",
"language:ilo",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:ko",
"language:la",
"language:lb",
"language:lg",
"language:lt",
"language:lv",
"language:mg",
"language:mk",
"language:ml",
"language:mr",
"language:ms",
"language:my",
"language:ne",
"language:nl",
"language:no",
"language:oc",
"language:om",
"language:or",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:sq",
"language:sr",
"language:su",
"language:sv",
"language:sw",
"language:ta",
"language:tl",
"language:tr",
"language:tt",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:wo",
"language:xh",
"language:yi",
"language:yo",
"language:zh",
"language:zu",
"language:se",
"license:unknown",
"conditional-text-generation",
"arxiv:1911.04944",
"arxiv:1911.00359",
"arxiv:2010.11125",
"region:us"
] | 2022-03-19T08:54:43+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["af", "am", "ar", "ast", "az", "be", "bg", "bn", "br", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "ha", "he", "hi", "hr", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "ko", "la", "lb", "lg", "lt", "lv", "mg", "mk", "ml", "mr", "ms", "my", "ne", "nl", "no", "oc", "om", "or", "pl", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "tl", "tr", "tt", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "se"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": {"en-nl": ["n<110M"], "en-af": ["n<9M"], "en-lt": ["<24M"]}, "source_datasets": ["original"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "paperswithcode_id": "ccmatrix", "pretty_name": "CCMatrixV1", "tags": ["conditional-text-generation"]} | 2023-03-09T07:44:58+00:00 |
24d41c732de80b4b883f8e279d484a6d4b5eb017 |
# Dataset Card for MESD
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://data.mendeley.com/datasets/cy34mh68j9/5
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Contains the data from the MESD database, processed for fine-tuning a Wav2Vec model during the hackathon organized by Somos NLP.
Reference example:
https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/audio_classification.ipynb
We accessed the MESD database to obtain the examples.
Brief description from the authors of the MESD database:
"The Mexican Emotional Speech Database (MESD) provides single-word utterances for the affective prosodies of anger, disgust, fear, happiness, neutral, and sadness with Mexican cultural shaping. The MESD has been uttered by non-professional adult actors and children: 3 female, 2 male, and 6 child voices are available. The words in the emotional and neutral utterances come from two corpora: (corpus A) composed of nouns and adjectives that are repeated across emotional prosodies and voice types (female, male, child), and (corpus B) consisting of words controlled for age of acquisition, frequency of use, familiarity, concreteness, valence, arousal, and discrete-emotion dimensionality ratings.
The audio recordings were made in a professional studio with the following equipment: (1) a Sennheiser e835 microphone with a flat frequency response (100 Hz to 10 kHz), (2) a Focusrite Scarlett 2i4 audio interface connected to the microphone with an XLR cable and to the computer, and (3) the REAPER (Rapid Environment for Audio Production, Engineering, and Recording) digital audio workstation. The audio files were stored as 24-bit sequences with a sampling rate of 48000 Hz. The amplitude of the acoustic waveforms was rescaled between -1 and 1.
Two versions with reduced speaker naturalness were created from human emotional expressions for female voices of corpus B. Specifically, naturalness was progressively reduced from the human voices to level 1 and then to level 2. In particular, duration and mean pitch were edited on stressed syllables to reduce the difference between stressed and unstressed syllables. Over complete utterances, the F2/F1 and F3/F1 ratios were reduced by editing the F2 and F3 frequencies. The intensity of harmonics 1 and 4 was also reduced."
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Spanish
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
Origen: text indicating whether the instance comes from the original MESD dataset or from the 'Speaker-embedded naturalness-reduced female voices' cases, where the authors synthetically generated new data by transforming some of the original audio instances.
Palabra: text of the word that was read.
Emoción: text of the emotion it represents. Values: 'Enojo' (anger), 'Felicidad' (happiness), 'Miedo' (fear), 'Neutral', 'Disgusto' (disgust), 'Tristeza' (sadness).
InfoActor: text indicating whether the voice is 'Niño' (child), 'Hombre' (man) or 'Mujer' (woman).
AudioArray: audio array, resampled to 16 kHz.
### Data Splits
Train: 891 examples, a mix of MESD cases and 'Speaker-embedded naturalness-reduced female voices'.
Validation: 130 examples, all MESD cases.
Test: 129 examples, all MESD cases.
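A minimal sketch of loading the dataset and preparing it for a Wav2Vec2 feature extractor (the `facebook/wav2vec2-base` checkpoint is an assumption; the column name `AudioArray` follows the Data Fields section above):
```python
from datasets import load_dataset
from transformers import Wav2Vec2FeatureExtractor

ds = load_dataset("hackathon-pln-es/MESD")  # splits: train / validation / test

# Audio is already at 16 kHz, matching common Wav2Vec2 checkpoints.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")  # assumed checkpoint

def preprocess(batch):
    # "AudioArray" holds the raw waveform as a list of floats (see Data Fields).
    inputs = extractor(batch["AudioArray"], sampling_rate=16_000)
    batch["input_values"] = inputs.input_values[0]
    return batch

ds = ds.map(preprocess)
```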
## Dataset Creation
### Curation Rationale
Merge the three data subsets and process them for the fine-tuning task, according to the input expected by the Wav2Vec model.
### Source Data
#### Initial Data Collection and Normalization
Access to the raw data:
https://data.mendeley.com/datasets/cy34mh68j9/5
Conversion to audio array and resampling to 16 kHz.
#### Who are the source language producers?
Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Creative Commons, [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), “Mexican Emotional Speech Database (MESD)”, Mendeley Data, V5, doi: 10.17632/cy34mh68j9.5
```
| hackathon-pln-es/MESD | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-19T18:39:32+00:00 | {"license": "cc-by-4.0", "Duville, Mathilde Marie; Alonso-Valerdi, Luz Maria; Ibarra, David (2022), \u201cMexican Emotional Speech Database (MESD)\u201d, Mendeley Data, V5, doi": "10.17632/cy34mh68j9.5"} | 2022-03-25T18:15:07+00:00 |
bc865c50d83a257b7458e3c97ad16533fb491287 | ACLED Dataset for Summarization Task - CSE635 (University at Buffalo)
Actor Description
- 0: N/A
- 1: State Forces
- 2: Rebel Groups
- 3: Political Militias
- 4: Identity Militias
- 5: Rioters
- 6: Protesters
- 7: Civilians
- 8: External/Other Forces | vinaykudari/acled-token-summary | [
"region:us"
] | 2022-03-20T00:39:18+00:00 | {} | 2022-03-20T00:47:22+00:00 |
213f66c68c9410d717ec0b9ad13abd5c67100b7f |
# Dataset Card for PMC Open Access XML
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The XML Open Access includes more than 3.4 million journal articles and preprints that are made available under
license terms that allow reuse.
Not all articles in PMC are available for text mining and other reuse; many have copyright protection. However, articles
in the PMC Open Access Subset are made available under Creative Commons or similar licenses that generally allow more
liberal redistribution and reuse than a traditional copyrighted work.
The PMC Open Access Subset is one part of the PMC Article Datasets.
This version takes the XML version as source, benefiting from the structured text
to split the articles into parts (introduction, methods, results,
discussion and conclusion) and to link keywords in the text to external or internal
resources (articles, figures, tables, formulas, boxed-text, quotes, code, footnotes, chemicals, graphics, medias).
The dataset was initially created with relation-extraction tasks in mind, between the references in the text and the content of those
references (e.g. for a PMID, by joining the referred article's abstract from the pubmed dataset), but more broadly aims to provide
a corpus of pre-annotated text for other tasks (e.g. matching figure captions to graphics, glossary definition detection, summarization).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Fields
- "accession_id": The PMC ID of the article
- "pmid": The PubMed ID of the article
- "introduction": List of \<title\> and \<p\> elements in \<body\>, sharing their root with a \<title\> containing "introduction" or "background".
- "methods": Same as introduction with "method" keyword.
- "results": Same as introduction with "result" keyword.
- "discussion": Same as introduction with "discussion" keyword.
- "conclusion": Same as introduction with "conclusion" keyword.
- "front": List of \<title\> and \<p\> elements in \<front\> after everything else has been searched.
- "body": List of \<title\> and \<p\> elements in \<body\> after everything else has been searched.
- "back": List of \<title\> and \<p\> elements in \<back\> after everything else has been searched.
- "figure": List of \<fig\> elements of the article.
- "table": List of \<table-wrap\> and \<array\> elements of the article.
- "formula": List of \<disp-formula\> and \<inline-formula\> elements of the article.
- "box": List of \<boxed-text\> elements of the article.
- "code": List of \<code\> elements of the article.
- "quote": List of \<disp-quote\> and \<speech\> elements of the article.
- "chemical": List of \<chem-struct-wrap\> elements of the article.
- "supplementary": List of \<supplementary-material\> and \<inline-supplementary-material\> elements of the article.
- "footnote": List of \<fn-group\> and \<table-wrap-foot\> elements of the article.
- "graphic": List of \<graphic\> and \<inline-graphic\> elements of the article.
- "media": List of \<media\> and \<inline-media\> elements of the article.
- "glossary": Glossary if found in the XML
- "unknown_references": JSON of a dictionnary of each "tag":"text" for the reference that did not indicate a PMID
- "n_references": Total number of references and unknown references
- "license": The licence of the article
- "retracted": If the article was retracted or not
- "last_updated": Last update of the article
- "citation": Citation of the article
- "package_file": path to the folder containing the graphics and media files of the article (to append to the base URL: ftp.ncbi.nlm.nih.gov/pub/pmc/)
In text, the references are in the form ##KEYWORD##IDX_REF##OLD_TEXT##, with keywords (REF, UREF, FIG, TAB, FORMU, BOX, CODE, QUOTE, CHEM, SUPPL, FOOTN, GRAPH, MEDIA) referring respectively to "pubmed articles" (external), "unknown_references", "figure", "table", "formula", "box", "code", "quote", "chemical", "supplementary", "footnote", "graphic" and "media".
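For illustration, a minimal sketch of how these inline markers could be parsed (the regular expression is an assumption derived from the format described above; it assumes numeric indices and no "##" inside the replaced text):
```python
import re

# Matches the ##KEYWORD##IDX_REF##OLD_TEXT## markers described above.
MARKER = re.compile(
    r"##(REF|UREF|FIG|TAB|FORMU|BOX|CODE|QUOTE|CHEM|SUPPL|FOOTN|GRAPH|MEDIA)"
    r"##(\d+)##(.*?)##"
)

def extract_markers(text):
    """Return (keyword, index, original_text) triples found in a section."""
    return [(kw, int(idx), old) for kw, idx, old in MARKER.findall(text)]

def strip_markers(text):
    """Replace each marker with the text it originally displayed."""
    return MARKER.sub(lambda m: m.group(3), text)
```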
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
Internal references (figures, tables, ...) were found using specific tags. Deciding on those tags was done by testing and by looking in the documentation
for the different kinds of possible usage.
Then, to split the article into introduction, methods, results, discussion and conclusion, specific keywords in the titles were used. Because there are no rules
in this XML for tagging those sections, searching for keywords seemed like the most reliable approach. A drawback is that many sections do not have those
keywords in their titles but could be assimilated to those sections. However, the huge diversity of titles makes it harder to label such sections. This could be
addressed in future versions of this dataset.
### Source Data
#### Initial Data Collection and Normalization
Data was obtained from:
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_noncomm/xml/
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_comm/xml/
- ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_other/xml/
Additional content for individual articles (graphics, media) can be obtained from:
- ftp.ncbi.nlm.nih.gov/pub/pmc + "package_file"
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
The article XML files are similar across collections. This means that if a certain collection handles the structure in unusual ways, the whole collection might not be as
well annotated as the others. This concerns all the sections (intro, methods, ...), the external references (PMIDs) and the internal references (tables, figures, ...).
To illustrate that, references are sometimes given as a range (e.g. 10-15). In that case, only references 10 and 15 are linked. This could potentially be handled in a
future version.
### Other Known Limitations
[Needs More Information]
### Preprocessing recommendations
- Filter out empty contents.
- Remove unwanted references from the text, and replace either by the "references_text" or by the reference content itself.
- Unescape HTML special characters: `import html; html.unescape(my_text)`
- Remove superfluous line breaks in the text.
- Remove XML tags (\<italic\>, \<sup\>, \<sub\>, ...), replace by special tokens?
- Join the items of the contents' lists.
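A minimal sketch combining these recommendations (the tag-stripping and marker regexes are assumptions; adapt them to your needs):
```python
import html
import re

TAG = re.compile(r"</?[a-zA-Z][^>]*>")          # XML tags such as <italic>, <sup>, <sub>, ...
MARKER = re.compile(r"##[A-Z]+##\d+##(.*?)##")  # inline reference markers (see Data Fields)

def clean_section(items):
    """Join a section's list of <title>/<p> strings into clean plain text."""
    text = "\n".join(item for item in items if item)  # filter out empty contents
    text = MARKER.sub(lambda m: m.group(1), text)     # keep only the originally displayed text
    text = html.unescape(text)                        # unescape HTML special characters
    text = TAG.sub("", text)                          # drop remaining XML tags
    return re.sub(r"\n{2,}", "\n", text).strip()      # collapse superfluous line breaks
```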
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
https://www.ncbi.nlm.nih.gov/pmc/about/copyright/
Within the PMC Open Access Subset, there are three groupings:
- Commercial Use Allowed - CC0, CC BY, CC BY-SA, CC BY-ND licenses;
- Non-Commercial Use Only - CC BY-NC, CC BY-NC-SA, CC BY-NC-ND licenses; and
- Other - no machine-readable Creative Commons license, no license, or a custom license.
### Citation Information
[Needs More Information] | TomTBT/pmc_open_access_xml | [
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"license:cc-by-4.0",
"license:cc-by-sa-4.0",
"license:cc-by-nc-4.0",
"license:cc-by-nd-4.0",
"license:cc-by-nc-nd-4.0",
"license:cc-by-nc-sa-4.0",
"license:unknown",
"license:other",
"research papers",
"biology",
"medecine",
"region:us"
] | 2022-03-20T09:47:21+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc0-1.0", "cc-by-4.0", "cc-by-sa-4.0", "cc-by-nc-4.0", "cc-by-nd-4.0", "cc-by-nc-nd-4.0", "cc-by-nc-sa-4.0", "unknown", "other"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification", "summarization", "other"], "task_ids": [], "pretty_name": "XML-parsed PMC", "tags": ["research papers", "biology", "medecine"]} | 2024-01-17T16:52:45+00:00 |
c218f8fc022c761fa4515c4fd96e160426701dbf | enimai/MuST-C-fr | [
"task_categories:translation",
"language:en",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2022-03-20T14:27:39+00:00 | {"language": ["en", "fr"], "license": "apache-2.0", "task_categories": ["translation"]} | 2022-11-21T18:39:41+00:00 |
|
66cded2be5d5392e60f0d77f3d027413b84d1e4b | dannyvas23/textosuicidios | [
"license:afl-3.0",
"region:us"
] | 2022-03-20T17:50:26+00:00 | {"license": "afl-3.0"} | 2022-03-21T00:03:08+00:00 |
|
f024a61cb9987afe7063a0f35b90aa6a16385f3d | dannyvas23/notas_suicidios | [
"license:afl-3.0",
"region:us"
] | 2022-03-21T01:18:47+00:00 | {"license": "afl-3.0"} | 2022-03-21T01:37:37+00:00 |
|
f362220b39c6518285689d2616dccaeb318d6970 | hazal/electronic-radiology-phd-thesis-trR | [
"language:tr",
"region:us"
] | 2022-03-21T07:59:10+00:00 | {"language": ["tr"]} | 2022-08-10T10:13:34+00:00 |
|
030caa10ace4b5c8f5084b70fe6bc281c44cc579 |
Hyperion Cloud imagery from the hyperspectral imager | jacobbieker/hyperion-clouds | [
"license:mit",
"region:us"
] | 2022-03-21T08:29:52+00:00 | {"license": "mit"} | 2023-12-23T11:28:31+00:00 |
58aafbe2712ff481c014f562e42723f2820fd5d4 |
# Dataset Card for Monash Time Series Forecasting Repository
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Monash Time Series Forecasting Repository](https://forecastingdata.org/)
- **Repository:** [Monash Time Series Forecasting Repository code repository](https://github.com/rakshitha123/TSForecasting)
- **Paper:** [Monash Time Series Forecasting Archive](https://openreview.net/pdf?id=wEc1mgAjU-)
- **Leaderboard:** [Baseline Results](https://forecastingdata.org/#results)
- **Point of Contact:** [Rakshitha Godahewa](mailto:rakshitha.godahewa@monash.edu)
### Dataset Summary
The first comprehensive time series forecasting repository containing datasets of related time series to facilitate the evaluation of global forecasting models. All datasets are intended for research purposes only. Our repository contains 30 datasets including both publicly available time series datasets (in different formats) and datasets curated by us. Many datasets have different versions based on the frequency and the inclusion of missing values, bringing the total number of dataset variations to 58. Furthermore, it includes both real-world and competition time series datasets covering varied domains.
The following table shows a list of datasets available:
| Name | Domain | No. of series | Freq. | Pred. Len. | Source |
|-------------------------------|-----------|---------------|--------|------------|-------------------------------------------------------------------------------------------------------------------------------------|
| weather | Nature | 3010 | 1D | 30 | [Sparks et al., 2020](https://cran.r-project.org/web/packages/bomrang) |
| tourism_yearly | Tourism | 1311 | 1Y | 4 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) |
| tourism_quarterly | Tourism | 1311 | 1Q-JAN | 8 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) |
| tourism_monthly | Tourism | 1311 | 1M | 24 | [Athanasopoulos et al., 2011](https://doi.org/10.1016/j.ijforecast.2010.04.009) |
| cif_2016 | Banking | 72 | 1M | 12 | [Stepnicka and Burda, 2017](https://doi.org/10.1109/FUZZ-IEEE.2017.8015455) |
| london_smart_meters | Energy | 5560 | 30T | 60 | [Jean-Michel, 2019](https://www.kaggle.com/jeanmidev/smart-meters-in-london) |
| australian_electricity_demand | Energy | 5 | 30T | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) |
| wind_farms_minutely | Energy | 339 | 1T | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) |
| bitcoin | Economic | 18 | 1D | 30 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) |
| pedestrian_counts | Transport | 66 | 1H | 48 | [City of Melbourne, 2020](https://data.melbourne.vic.gov.au/Transport/Pedestrian-Counting-System-Monthly-counts-per-hour/b2ak-trbp) |
| vehicle_trips | Transport | 329 | 1D | 30 | [fivethirtyeight, 2015](https://github.com/fivethirtyeight/uber-tlc-foil-response) |
| kdd_cup_2018 | Nature | 270 | 1H | 48 | [KDD Cup, 2018](https://www.kdd.org/kdd2018/kdd-cup) |
| nn5_daily | Banking | 111 | 1D | 56 | [Ben Taieb et al., 2012](https://doi.org/10.1016/j.eswa.2012.01.039) |
| nn5_weekly | Banking | 111 | 1W-MON | 8 | [Ben Taieb et al., 2012](https://doi.org/10.1016/j.eswa.2012.01.039) |
| kaggle_web_traffic | Web | 145063 | 1D | 59 | [Google, 2017](https://www.kaggle.com/c/web-traffic-time-series-forecasting) |
| kaggle_web_traffic_weekly | Web | 145063 | 1W-WED | 8 | [Google, 2017](https://www.kaggle.com/c/web-traffic-time-series-forecasting) |
| solar_10_minutes | Energy | 137 | 10T | 60 | [Solar, 2020](https://www.nrel.gov/grid/solar-power-data.html) |
| solar_weekly | Energy | 137 | 1W-SUN | 5 | [Solar, 2020](https://www.nrel.gov/grid/solar-power-data.html) |
| car_parts | Sales | 2674 | 1M | 12 | [Hyndman, 2015](https://cran.r-project.org/web/packages/expsmooth/) |
| fred_md | Economic | 107 | 1M | 12 | [McCracken and Ng, 2016](https://doi.org/10.1080/07350015.2015.1086655) |
| traffic_hourly | Transport | 862 | 1H | 48 | [Caltrans, 2020](http://pems.dot.ca.gov/) |
| traffic_weekly | Transport | 862 | 1W-WED | 8 | [Caltrans, 2020](http://pems.dot.ca.gov/) |
| hospital | Health | 767 | 1M | 12 | [Hyndman, 2015](https://cran.r-project.org/web/packages/expsmooth/) |
| covid_deaths | Health | 266 | 1D | 30 | [Johns Hopkins University, 2020](https://github.com/CSSEGISandData/COVID-19) |
| sunspot | Nature | 1 | 1D | 30 | [Sunspot, 2015](http://www.sidc.be/silso/newdataset) |
| saugeenday | Nature | 1 | 1D | 30 | [McLeod and Gweon, 2013](http://www.jenvstat.org/v04/i11) |
| us_births | Health | 1 | 1D | 30 | [Pruim et al., 2020](https://cran.r-project.org/web/packages/mosaicData) |
| solar_4_seconds | Energy | 1 | 4S | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) |
| wind_4_seconds | Energy | 1 | 4S | 60 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) |
| rideshare | Transport | 2304 | 1H | 48 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) |
| oikolab_weather | Nature | 8 | 1H | 48 | [Oikolab](https://oikolab.com/) |
| temperature_rain | Nature | 32072 | 1D | 30 | [Godahewa et al. 2021](https://openreview.net/pdf?id=wEc1mgAjU-) |
### Dataset Usage
To load a particular dataset just specify its name from the table above e.g.:
```python
load_dataset("monash_tsf", "nn5_daily")
```
> Notes:
> - Data might contain missing values as in the original datasets.
> - The prediction length is either specified in the dataset or a default value depending on the frequency is used as in the original repository benchmark.
### Supported Tasks and Leaderboards
#### `time-series-forecasting`
##### `univariate-time-series-forecasting`
The univariate time series forecasting task involves learning the future one-dimensional `target` values of a time series in a dataset for some `prediction_length` time steps. The performance of the forecast models can then be validated via the ground truth in the `validation` split and tested via the `test` split.
##### `multivariate-time-series-forecasting`
The multivariate time series forecasting task involves learning the future vector of `target` values of a time series in a dataset for some `prediction_length` time steps. Similar to the univariate setting, the performance of a multivariate model can be validated via the ground truth in the `validation` split and tested via the `test` split.
### Languages
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```python
{
'start': datetime.datetime(2012, 1, 1, 0, 0),
'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, ...],
'feat_static_cat': [0],
'feat_dynamic_real': [[0.3, 0.4], [0.1, 0.6], ...],
'item_id': '0'
}
```
### Data Fields
For the univariate regular time series each series has the following keys:
* `start`: a datetime of the first entry of each time series in the dataset
* `target`: an array[float32] of the actual target values
* `feat_static_cat`: an array[uint64] which contains a categorical identifier of each time series in the dataset
* `feat_dynamic_real`: optional array of covariate features
* `item_id`: a string identifier of each time series in a dataset for reference
For the multivariate time series the `target` is a vector of the multivariate dimension for each time point.
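For example, a single univariate record can be turned into a `pandas` series as follows (a minimal sketch; the frequency string must match the dataset's `Freq.` column in the table above):

```python
import pandas as pd
from datasets import load_dataset

ds = load_dataset("monash_tsf", "nn5_daily")
entry = ds["train"][0]

# Build a time index from the start timestamp; "D" matches nn5_daily's 1D frequency.
series = pd.Series(
    entry["target"],
    index=pd.date_range(start=entry["start"], periods=len(entry["target"]), freq="D"),
)
print(series.head())
```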
### Data Splits
The datasets are split in time depending on the prediction length specified for each dataset. In particular, for each time series in a dataset, the validation split extends the training series by one prediction-length window of the future, and the test split by another prediction length on top of that.
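As a quick sanity check of this layout (a minimal sketch using the `nn5_daily` config, whose prediction length is 56 per the table above):

```python
from datasets import load_dataset

ds = load_dataset("monash_tsf", "nn5_daily")
pred_len = 56  # prediction length of nn5_daily (see the table above)

n_train = len(ds["train"][0]["target"])
assert len(ds["validation"][0]["target"]) == n_train + pred_len
assert len(ds["test"][0]["target"]) == n_train + 2 * pred_len
```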
## Dataset Creation
### Curation Rationale
To facilitate the evaluation of global forecasting models. All datasets in our repository are intended for research purposes and to evaluate the performance of new forecasting algorithms.
### Source Data
#### Initial Data Collection and Normalization
Out of the 30 datasets, 23 were already publicly available on different platforms in different data formats. The original sources of all datasets are mentioned in the datasets table above.
After extracting and curating these datasets, we analysed them individually to identify the datasets containing series with different frequencies and missing observations. Nine datasets contain time series belonging to different frequencies, and the archive contains a separate dataset per frequency.
#### Who are the source language producers?
The data comes from the datasets listed in the table above.
### Annotations
#### Annotation process
The annotations come from the datasets listed in the table above.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
* [Rakshitha Godahewa](mailto:rakshitha.godahewa@monash.edu)
* [Christoph Bergmeir](mailto:christoph.bergmeir@monash.edu)
* [Geoff Webb](mailto:geoff.webb@monash.edu)
* [Rob Hyndman](mailto:rob.hyndman@monash.edu)
* [Pablo Montero-Manso](mailto:pablo.monteromanso@sydney.edu.au)
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```tex
@InProceedings{godahewa2021monash,
author = "Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
title = "Monash Time Series Forecasting Archive",
booktitle = "Neural Information Processing Systems Track on Datasets and Benchmarks",
year = "2021",
note = "forthcoming"
}
```
### Contributions
Thanks to [@kashif](https://github.com/kashif) for adding this dataset. | monash_tsf | [
"task_categories:time-series-forecasting",
"task_ids:univariate-time-series-forecasting",
"task_ids:multivariate-time-series-forecasting",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:cc-by-4.0",
"region:us"
] | 2022-03-21T09:50:46+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["time-series-forecasting"], "task_ids": ["univariate-time-series-forecasting", "multivariate-time-series-forecasting"], "pretty_name": "Monash Time Series Forecasting Repository", "dataset_info": [{"config_name": "weather", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 176893738, "num_examples": 3010}, {"name": "test", "num_bytes": 177638713, "num_examples": 3010}, {"name": "validation", "num_bytes": 177266226, "num_examples": 3010}], "download_size": 38820451, "dataset_size": 531798677}, {"config_name": "tourism_yearly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 54264, "num_examples": 518}, {"name": "test", "num_bytes": 71358, "num_examples": 518}, {"name": "validation", "num_bytes": 62811, "num_examples": 518}], "download_size": 36749, "dataset_size": 188433}, {"config_name": "tourism_quarterly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 162738, "num_examples": 427}, {"name": "test", "num_bytes": 190920, "num_examples": 427}, {"name": "validation", "num_bytes": 176829, "num_examples": 427}], "download_size": 93833, "dataset_size": 530487}, {"config_name": "tourism_monthly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 391518, "num_examples": 366}, {"name": "test", "num_bytes": 463986, "num_examples": 366}, {"name": "validation", "num_bytes": 427752, "num_examples": 366}], "download_size": 199791, "dataset_size": 1283256}, {"config_name": "cif_2016", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24731, "num_examples": 72}, {"name": "test", "num_bytes": 31859, "num_examples": 72}, {"name": "validation", "num_bytes": 28295, "num_examples": 72}], "download_size": 53344, "dataset_size": 84885}, {"config_name": "london_smart_meters", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 684386194, "num_examples": 5560}, {"name": 
"test", "num_bytes": 687138394, "num_examples": 5560}, {"name": "validation", "num_bytes": 685762294, "num_examples": 5560}], "download_size": 219673439, "dataset_size": 2057286882}, {"config_name": "australian_electricity_demand", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4763162, "num_examples": 5}, {"name": "test", "num_bytes": 4765637, "num_examples": 5}, {"name": "validation", "num_bytes": 4764400, "num_examples": 5}], "download_size": 5770526, "dataset_size": 14293199}, {"config_name": "wind_farms_minutely", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 710078918, "num_examples": 339}, {"name": "test", "num_bytes": 710246723, "num_examples": 339}, {"name": "validation", "num_bytes": 710162820, "num_examples": 339}], "download_size": 71383130, "dataset_size": 2130488461}, {"config_name": "bitcoin", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 336511, "num_examples": 18}, {"name": "test", "num_bytes": 340966, "num_examples": 18}, {"name": "validation", "num_bytes": 338738, "num_examples": 18}], "download_size": 220403, "dataset_size": 1016215}, {"config_name": "pedestrian_counts", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12897120, "num_examples": 66}, {"name": "test", "num_bytes": 12923256, "num_examples": 66}, {"name": "validation", "num_bytes": 12910188, "num_examples": 66}], "download_size": 4587054, "dataset_size": 38730564}, {"config_name": "vehicle_trips", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 105261, "num_examples": 329}, {"name": "test", "num_bytes": 186688, "num_examples": 329}, {"name": "validation", "num_bytes": 145974, "num_examples": 329}], "download_size": 44914, "dataset_size": 437923}, {"config_name": "kdd_cup_2018", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12040046, "num_examples": 270}, {"name": "test", "num_bytes": 12146966, "num_examples": 270}, {"name": "validation", "num_bytes": 12093506, "num_examples": 270}], "download_size": 2456948, "dataset_size": 36280518}, {"config_name": "nn5_daily", "features": [{"name": "start", "dtype": 
"timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 314828, "num_examples": 111}, {"name": "test", "num_bytes": 366110, "num_examples": 111}, {"name": "validation", "num_bytes": 340469, "num_examples": 111}], "download_size": 287708, "dataset_size": 1021407}, {"config_name": "nn5_weekly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48344, "num_examples": 111}, {"name": "test", "num_bytes": 55670, "num_examples": 111}, {"name": "validation", "num_bytes": 52007, "num_examples": 111}], "download_size": 62043, "dataset_size": 156021}, {"config_name": "kaggle_web_traffic", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 415494391, "num_examples": 145063}, {"name": "test", "num_bytes": 486103806, "num_examples": 145063}, {"name": "validation", "num_bytes": 450799098, "num_examples": 145063}], "download_size": 145485324, "dataset_size": 1352397295}, {"config_name": "kaggle_web_traffic_weekly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 64242469, "num_examples": 145063}, {"name": "test", "num_bytes": 73816627, "num_examples": 145063}, {"name": "validation", "num_bytes": 69029548, "num_examples": 145063}], "download_size": 28930900, "dataset_size": 207088644}, {"config_name": "solar_10_minutes", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29640033, "num_examples": 137}, {"name": "test", "num_bytes": 29707848, "num_examples": 137}, {"name": "validation", "num_bytes": 29673941, "num_examples": 137}], "download_size": 4559353, "dataset_size": 89021822}, {"config_name": "solar_weekly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28614, "num_examples": 137}, {"name": "test", "num_bytes": 34265, "num_examples": 137}, {"name": "validation", "num_bytes": 31439, "num_examples": 137}], "download_size": 24375, "dataset_size": 94318}, {"config_name": "car_parts", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": 
"train", "num_bytes": 396653, "num_examples": 2674}, {"name": "test", "num_bytes": 661379, "num_examples": 2674}, {"name": "validation", "num_bytes": 529016, "num_examples": 2674}], "download_size": 39656, "dataset_size": 1587048}, {"config_name": "fred_md", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 314514, "num_examples": 107}, {"name": "test", "num_bytes": 325107, "num_examples": 107}, {"name": "validation", "num_bytes": 319811, "num_examples": 107}], "download_size": 169107, "dataset_size": 959432}, {"config_name": "traffic_hourly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 62071974, "num_examples": 862}, {"name": "test", "num_bytes": 62413326, "num_examples": 862}, {"name": "validation", "num_bytes": 62242650, "num_examples": 862}], "download_size": 22868806, "dataset_size": 186727950}, {"config_name": "traffic_weekly", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 344154, "num_examples": 862}, {"name": "test", "num_bytes": 401046, "num_examples": 862}, {"name": "validation", "num_bytes": 372600, "num_examples": 862}], "download_size": 245126, "dataset_size": 1117800}, {"config_name": "hospital", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 217625, "num_examples": 767}, {"name": "test", "num_bytes": 293558, "num_examples": 767}, {"name": "validation", "num_bytes": 255591, "num_examples": 767}], "download_size": 78110, "dataset_size": 766774}, {"config_name": "covid_deaths", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 176352, "num_examples": 266}, {"name": "test", "num_bytes": 242187, "num_examples": 266}, {"name": "validation", "num_bytes": 209270, "num_examples": 266}], "download_size": 27335, "dataset_size": 627809}, {"config_name": "sunspot", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 304726, "num_examples": 1}, {"name": "test", "num_bytes": 304974, "num_examples": 1}, {"name": "validation", "num_bytes": 304850, "num_examples": 1}], "download_size": 68865, "dataset_size": 914550}, {"config_name": "saugeenday", "features": [{"name": "start", "dtype": 
"timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 97722, "num_examples": 1}, {"name": "test", "num_bytes": 97969, "num_examples": 1}, {"name": "validation", "num_bytes": 97845, "num_examples": 1}], "download_size": 28721, "dataset_size": 293536}, {"config_name": "us_births", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29923, "num_examples": 1}, {"name": "test", "num_bytes": 30171, "num_examples": 1}, {"name": "validation", "num_bytes": 30047, "num_examples": 1}], "download_size": 16332, "dataset_size": 90141}, {"config_name": "solar_4_seconds", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30513083, "num_examples": 1}, {"name": "test", "num_bytes": 30513578, "num_examples": 1}, {"name": "validation", "num_bytes": 30513331, "num_examples": 1}], "download_size": 794502, "dataset_size": 91539992}, {"config_name": "wind_4_seconds", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30512774, "num_examples": 1}, {"name": "test", "num_bytes": 30513269, "num_examples": 1}, {"name": "validation", "num_bytes": 30513021, "num_examples": 1}], "download_size": 2226184, "dataset_size": 91539064}, {"config_name": "rideshare", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": {"sequence": "float32"}}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4249051, "num_examples": 156}, {"name": "test", "num_bytes": 5161435, "num_examples": 156}, {"name": "validation", "num_bytes": 4705243, "num_examples": 156}], "download_size": 1031826, "dataset_size": 14115729}, {"config_name": "oikolab_weather", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": "float32"}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3299142, "num_examples": 8}, {"name": "test", "num_bytes": 3302310, "num_examples": 8}, {"name": "validation", "num_bytes": 3300726, "num_examples": 8}], "download_size": 1326101, "dataset_size": 9902178}, {"config_name": "temperature_rain", "features": [{"name": "start", "dtype": "timestamp[s]"}, {"name": "target", "sequence": {"sequence": "float32"}}, {"name": "feat_static_cat", "sequence": "uint64"}, {"name": "feat_dynamic_real", "sequence": {"sequence": "float32"}}, {"name": "item_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 88121466, 
"num_examples": 422}, {"name": "test", "num_bytes": 96059286, "num_examples": 422}, {"name": "validation", "num_bytes": 92090376, "num_examples": 422}], "download_size": 25747139, "dataset_size": 276271128}]} | 2023-06-13T12:26:34+00:00 |
cdf7be8e4e84152a48415aa0f86e11f222365f48 | xiongshunjie/ProDataset | [
"license:apache-2.0",
"region:us"
] | 2022-03-21T11:30:20+00:00 | {"license": "apache-2.0"} | 2022-03-21T11:30:20+00:00 |
|
ccd927007e794cf0a8794aee8482c6dec66ff6fb | Twitter 3.21 | NoCaptain/MyTwitter | [
"region:us"
] | 2022-03-21T14:14:44+00:00 | {} | 2022-03-21T14:51:28+00:00 |
b6a5fc413080ac48e2ad89fb86a0e4f624ec02e3 | Cleaned wikipedia dataset | blo05/cleaned_wiki_en | [
"region:us"
] | 2022-03-21T15:55:39+00:00 | {} | 2022-03-30T09:12:38+00:00 |
f55977a7a91fb5d406494b6b98ff236b36dfb8a0 |
# Dataset Card for LFQA Discourse
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Repo](https://github.com/utcsnlp/lfqa_discourse)
- **Paper:** [How Do We Answer Complex Questions: Discourse Structure of Long-form Answers](https://arxiv.org/abs/2203.11048)
- **Point of Contact:** fangyuan[at]utexas.edu
### Dataset Summary
This dataset contains discourse annotation of long-form answers. There are two types of annotations:
* **Validity:** whether a <question, answer> pair is valid based on a set of invalid reasons defined.
* **Role:** sentence-level role annotation of functional roles for long-form answers.
### Languages
The dataset contains data in English.
## Dataset Structure
### Data Instances
Each instance is a (question, long-form answer) pair from one of the four data sources -- ELI5, WebGPT, NQ, and model-generated answers (denoted as ELI5-model) -- together with our discourse annotation, which consists of a QA-pair-level validity label and sentence-level functional role labels.
We provide all validity and role annotations here. For further train/val/test split, please refer to our [github repository](https://github.com/utcsnlp/lfqa_discourse).
### Data Fields
For validity annotations, each instance contains the following fields:
* `dataset`: The dataset this QA pair belongs to, one of [`NQ`, `ELI5`, `Web-GPT`]. Note that `ELI5` contains both human-written answers and model-generated answers, with model-generated answers distinguished by the `a_id` field mentioned below.
* `q_id`: The question id, same as the original NQ or ELI5 dataset.
* `a_id`: The answer id, same as the original ELI5 dataset. For NQ, we populate a dummy `a_id` (1). For machine generated answers, this field corresponds to the name of the model.
* `question`: The question.
* `answer_paragraph`: The answer paragraph.
* `answer_sentences`: The list of answer sentences, tokenized from the answer paragraph.
* `is_valid`: A boolean value indicating whether the QA pair is valid, values: [`True`, `False`].
* `invalid_reason`: A list of lists; each inner list contains the invalid reasons the annotator selected. The invalid reason is one of [`no_valid_answer`, `nonsensical_question`, `assumptions_rejected`, `multiple_questions`].
For role annotations, each instance contains the following fields:
* `dataset`: The dataset this QA pair belongs to, one of [`NQ`, `ELI5`, `Web-GPT`]. Note that `ELI5` contains both human-written answers and model-generated answers, with model-generated answers distinguished by the `a_id` field mentioned below.
* `q_id`: The question id, same as the original NQ or ELI5 dataset.
* `a_id`: The answer id, same as the original ELI5 dataset. For NQ, we populate a dummy `a_id` (1). For machine generated answers, this field corresponds to the name of the model.
* `question`: The question.
* `answer_paragraph`: The answer paragraph.
* `answer_sentences`: The list of answer sentences, tokenized from the answer paragraph.
* `role_annotation`: The list of majority (or adjudicated, if it exists) roles for the sentences in `answer_sentences`. Each role is one of [`Answer`, `Answer - Example`, `Answer (Summary)`, `Auxiliary Information`, `Answer - Organizational sentence`, `Miscellaneous`]
* `raw_role_annotation`: A list of lists; each inner list contains the raw role annotations for the sentences in `answer_sentences`.
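A minimal sketch of pairing each answer sentence with its role label (the configuration and split names used here are assumptions; check the project repository for the exact ones):
```python
from datasets import load_dataset

# The configuration name "role_annotation" is an assumption; see the
# project repository for the actual configuration and split names.
ds = load_dataset("fangyuan/lfqa_discourse", "role_annotation", split="train")

example = ds[0]
for sentence, role in zip(example["answer_sentences"], example["role_annotation"]):
    print(f"{role:40s} | {sentence}")
```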
### Data Splits
For train/validation/test splits, please refer to our [repository](https://github.com/utcsnlp/lfqa_discourse).
## Dataset Creation
Please refer to our [paper](https://arxiv.org/abs/2203.11048) and datasheet for details on dataset creation, annotation process and discussion on limitations.
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-sa/4.0/legalcode
### Citation Information
```
@inproceedings{xu2022lfqadiscourse,
title = {How Do We Answer Complex Questions: Discourse Structure of Long-form Answers},
author = {Xu, Fangyuan and Li, Junyi Jessy and Choi, Eunsol},
year = 2022,
booktitle = {Proceedings of the Annual Meeting of the Association for Computational Linguistics},
note = {Long paper}
}
```
### Contributions
Thanks to [@carriex](https://github.com/carriex) for adding this dataset. | fangyuan/lfqa_discourse | [
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|natural_questions",
"source_datasets:extended|eli5",
"license:cc-by-sa-4.0",
"arxiv:2203.11048",
"region:us"
] | 2022-03-21T16:37:57+00:00 | {"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["machine-generated", "found"], "language": ["en-US"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|natural_questions", "extended|eli5"], "task_categories": [], "task_ids": [], "pretty_name": "lfqa_discourse"} | 2023-06-08T03:55:00+00:00 |
b1cb0eb42393e09d5b9090c60a1f55d59273dbfb | # AutoNLP Dataset for project: pruebapoems
## Dataset Description
This dataset has been automatically processed by AutoNLP for project pruebapoems.
### Languages
The BCP-47 code for the dataset's language is es.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "When I was fair and young, then favor graced me.\r\nOf many was I sought their mistress for to be.\r\nBu[...]",
"target": 1
},
{
"text": "Sigh no more, ladies, sigh no more.\r\n Men were deceivers ever,\r\nOne foot in sea, and one on shore[...]",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=3, names=['Love', 'Mythology & Folklore', 'Nature'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 457 |
| valid | 116 |
| EALeon16/autonlp-data-pruebapoems | [
"task_categories:text-classification",
"language:es",
"region:us"
] | 2022-03-21T17:51:36+00:00 | {"language": ["es"], "task_categories": ["text-classification"]} | 2022-10-25T09:03:29+00:00 |
dc52efd76d818fcd4d0a3b4cc1d6579486b92a0a |
The database consists of 192,347 rows of data for training, 33,944 for testing and 22,630 for validation. Its content is composed of suicidal comments and normal comments from the social network Reddit, translated into Spanish and obtained from the Suicide and Depression Detection database by Nikhileswar Komati, which can be viewed at the following address: https://www.kaggle.com/datasets/nikhileswarkomati/suicide-watch
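A minimal loading sketch (the split names `train`/`test`/`validation` are an assumption based on the sizes above):
```python
from datasets import load_dataset

ds = load_dataset("hackathon-pln-es/comentarios_depresivos")

# Expected sizes per the description above:
# train 192,347 / test 33,944 / validation 22,630.
print({split: ds[split].num_rows for split in ds})
```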
Authors
- Danny Vásquez
- César Salazar
- Alexis Cañar
- Yannela Castro
- Daniel Patiño
| hackathon-pln-es/comentarios_depresivos | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-03-21T18:16:53+00:00 | {"license": "cc-by-sa-4.0"} | 2022-04-01T00:40:06+00:00 |
1b7f73b6c66efd03e28c7f409895c878684675b5 | Dataset downloaded from kaggle.com.
The original file contained information in English and was later translated for use.
The dataset contains the columns:
- Autor: the author of the poem.
- Contenido: the full text of the poem.
- Nombre del poema: the title of the poem.
- Años: the period in which the poem was written.
- Tipo: the type the poem belongs to. | hackathon-pln-es/poems-es | [
"license:wtfpl",
"region:us"
] | 2022-03-21T18:36:23+00:00 | {"license": "wtfpl"} | 2022-03-27T17:39:08+00:00 |
a231f6a9b437ed1527687e6ddf180c78978b9d78 | nedroden/nlcity | [
"license:cc",
"region:us"
] | 2022-03-22T10:06:37+00:00 | {"license": "cc"} | 2022-03-22T10:06:37+00:00 |
|
45c0b11a67f833a92e8e04fbaa2577e1c9f75a63 | # HourAI-data
Conversational data used to finetune HourAI
Parsed from: [omoito](https://dynasty-scans.com/series/omoito)
Added some testing conversations that looked ok as well. | archmagos/HourAI-data | [
"region:us"
] | 2022-03-22T11:40:11+00:00 | {} | 2022-03-22T20:26:21+00:00 |
c35d0c6af81729ca8a1049b4c23674b276cb14ee | # NLI-TR for Supervised SimCSE
This dataset is a modified version of the [NLI-TR](https://huggingface.co/datasets/nli_tr) dataset. Its intended use is to train Supervised [SimCSE](https://github.com/princeton-nlp/SimCSE) models for sentence-embeddings. Steps followed to produce this dataset are listed below (a code sketch implementing them follows the list):
1. Merge train split of snli_tr and multinli_tr subsets.
2. Find every premise that has an entailment hypothesis **and** a contradiction hypothesis.
3. Write found triplets into sent0 (premise), sent1 (entailment hypothesis), hard_neg (contradiction hypothesis) format.
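A minimal sketch of these steps (column names `premise`/`hypothesis`/`label` and the label ids 0 = entailment, 2 = contradiction follow the standard NLI-TR schema and should be treated as assumptions; for brevity it keeps one hypothesis per label per premise):
```python
from collections import defaultdict

from datasets import concatenate_datasets, load_dataset

# Step 1: merge the train splits of the two subsets.
snli = load_dataset("nli_tr", "snli_tr", split="train")
mnli = load_dataset("nli_tr", "multinli_tr", split="train")
merged = concatenate_datasets([snli, mnli])

# Step 2: group hypotheses by premise; labels 0/2 are entailment/contradiction.
by_premise = defaultdict(dict)
for ex in merged:
    if ex["label"] in (0, 2):
        by_premise[ex["premise"]][ex["label"]] = ex["hypothesis"]

# Step 3: keep premises that have both, in (sent0, sent1, hard_neg) order.
triplets = [
    {"sent0": premise, "sent1": hyps[0], "hard_neg": hyps[2]}
    for premise, hyps in by_premise.items()
    if 0 in hyps and 2 in hyps
]
```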
See this [Colab Notebook](https://colab.research.google.com/drive/1Ysq1SpFOa7n1X79x2HxyWjfKzuR_gDQV?usp=sharing) for training and evaluation on Turkish sentences. | emrecan/nli_tr_for_simcse | [
"task_categories:text-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"size_categories:100K<n<1M",
"source_datasets:nli_tr",
"language:tr",
"region:us"
] | 2022-03-22T12:01:59+00:00 | {"language": ["tr"], "size_categories": ["100K<n<1M"], "source_datasets": ["nli_tr"], "task_categories": ["text-classification"], "task_ids": ["semantic-similarity-scoring", "text-scoring"]} | 2023-01-25T16:56:04+00:00 |
3ced201c9bbc5d73918f0b66ec8e22f1a82a8eed | d0r1h/Real_vs_Fake | [
"license:afl-3.0",
"region:us"
] | 2022-03-22T13:24:29+00:00 | {"license": "afl-3.0"} | 2022-03-22T13:24:29+00:00 |
|
a29926a4f351dd86b0df1e556a2fd28547ef596d | Carlos89apc/TraductorES_Kichwa | [
"license:gpl",
"region:us"
] | 2022-03-22T14:03:19+00:00 | {"license": "gpl"} | 2022-03-22T14:04:09+00:00 |
|
de49bc6dc80030d41ac50d2f3e981bbe78f51e47 | # :newspaper: The Spanish Fake News Corpus




## The Spanish Fake News Corpus Version 2.0 [[ FakeDeS Task @ Iberlef 2021 ]] :metal:
### Corpus Description
The Spanish Fake News Corpus Version 2.0 contains pairs of fake and true publications about different events (all of them written in Spanish) that were collected from **November 2020 to March 2021**. Different sources from the web were used to gather the information, mainly of two types: 1) newspapers and media companies' websites, and 2) fact-checking websites. Most of the reviewed fact-checking sites follow the recommendations of the International [Fact-Checking Network (IFCN)](https://ifcncodeofprinciples.poynter.org/), which seeks to promote good practice in fact-checking.
The assembled corpus has **572 instances**, labeled using two classes, true or fake. The test corpus is balanced with respect to these two classes. To compile the true-fake news pairs of the test corpus, the following guidelines were followed:
- A fake news item is added to the corpus if any of the selected fact-checking sites determines it to be fake.
- Given a fake news item, its true counterpart is added if there is evidence that it has been published on a reliable site (an established newspaper or media site).
The topics covered in the corpus are: **Science, Sport, Politics, Society, COVID-19, Environment, and International**. The corpus includes mostly news articles; however, on this occasion social media posts were also included in the category of fake news. Exactly 90 posts were included as fake news (15.73% of the total). These posts were recovered mainly from Facebook and WhatsApp. Using the various fact-checking sites involved consulting pages from countries other than Mexico that offer content in Spanish, so different variants of Spanish are included in the test corpus. These sites included countries like Argentina, Bolivia, Chile, Colombia, Costa Rica, Ecuador, Spain, United States, France, Peru, Uruguay, England and Venezuela.
The corpus is concentrated in the file test.xlsx. The meaning of the columns is described next:
<ul>
<li><b>Id</b>: assign an identifier to each instance.</li>
<li><b>Category</b>: indicates the category of the news (true or fake).</li>
<li><b>Topic</b>: indicates the topic related to the news.</li>
<li><b>Source</b>: indicates the name of the source.</li>
<li><b>Headline</b>: contains the headline of the news.</li>
<li><b>Text</b>: contains the raw text of the news.</li>
<li><b>Link</b>: contains the URL of the source.</li>
</ul>
Note that some instances have an empty header intentionally because the source omitted it.
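A minimal sketch of reading the test corpus with pandas and inspecting the class balance (the file and column names follow the description above; reading `.xlsx` files requires the `openpyxl` package):
```python
import pandas as pd

# Read the test corpus described above.
df = pd.read_excel("test.xlsx")

# The test corpus is balanced between the "true" and "fake" classes.
print(df["Category"].value_counts())

# Topics covered: Science, Sport, Politics, Society, COVID-19, Environment, International.
print(df["Topic"].value_counts())
```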
### :pencil: How to cite
If you use the corpus please cite the following articles:
1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.
2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.
3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.
### FakeDeS @ IberLef 2021
>> The corpus was used for the **Fake News Detection in Spanish (FakeDeS)** shared task at the IberLEF 2021 congress. The details of the competition can be viewed on the main page of the [competition](https://sites.google.com/view/fakedes).
### Organizers
- Helena Montserrat Gómez Adorno (IIMAS - UNAM)
- Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN)
- Gemma Bel Enguix (IINGEN - UNAM)
- Claudia Porto Capetillo (IIMAS - UNAM)
## :books: The Spanish Fake News Corpus Version 1.0 (@ MEXLEF 20)
### :page_facing_up: Corpus Description
<p style='text-align: justify;'>
The Spanish Fake News Corpus contains a collection of news compiled from several resources on the Web: established newspaper websites, media companies' websites, special websites dedicated to validating fake news, and websites designated by different journalists as sites that regularly publish fake news. The news was collected from **January to July 2018** and all of it was written in Spanish. The corpus was tagged manually, following the method described in the paper.
The following aspects were considered: 1) news was tagged as true if there was evidence that it had been published on reliable sites, i.e., established newspaper websites or renowned journalists' websites; 2) news was tagged as fake if news from reliable sites or from websites specialized in detecting deceptive content, for example VerificadoMX (https://verificado.mx), contradicted it, or if no evidence about the news was found beyond the source itself; 3) the correlation between the news was kept by collecting the true-fake news pair of an event; 4) we tried to trace the source of the news.
</p>
The corpus contains 971 news items divided into 491 real and 480 fake news. The corpus covers news from 9 different topics: **Science, Sport, Economy, Education, Entertainment, Politics, Health, Security, and Society**. The corpus was split into train and test sets, using around 70% of the corpus for training and the rest for testing. We performed a hierarchical distribution of the corpus, i.e., all the categories keep the 70%-30% ratio.
The corpus is contained in the files train.xlsx and development.xlsx. The meaning of the columns is described next:
<ul>
<li><b>Id</b>: assigns an identifier to each instance.</li>
<li><b>Category</b>: indicates the category of the news (true or fake).</li>
<li><b>Topic</b>: indicates the topic related to the news.</li>
<li><b>Source</b>: indicates the name of the source.</li>
<li><b>Headline</b>: contains the headline of the news.</li>
<li><b>Text</b>: contains the raw text of the news.</li>
<li><b>Link</b>: contains the URL of the source.</li>
</ul>
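A quick way to sanity-check the hierarchical 70%-30% split described above is to count instances per topic in both files. This is a minimal sketch, assuming both .xlsx files are in the working directory and that the Topic column is spelled as documented.
```python
import pandas as pd

train_df = pd.read_excel("train.xlsx", engine="openpyxl")
dev_df = pd.read_excel("development.xlsx", engine="openpyxl")

# Overall split: should be roughly 70% train / 30% development.
total = len(train_df) + len(dev_df)
print(f"train share: {len(train_df) / total:.2%}")

# Per-topic counts: each topic should keep the same 70%-30% ratio.
per_topic = (
    pd.concat([train_df.assign(split="train"), dev_df.assign(split="dev")])
    .groupby(["Topic", "split"])
    .size()
    .unstack(fill_value=0)
)
print(per_topic)
```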
### :pencil: How to cite
If you use the corpus please cite the following articles:
1) Gómez-Adorno, H., Posadas-Durán, J. P., Enguix, G. B., & Capetillo, C. P. (2021). Overview of FakeDeS at IberLEF 2021: Fake News Detection in Spanish Shared Task. Procesamiento del Lenguaje Natural, 67, 223-231.
2) Aragón, M. E., Jarquín, H., Gómez, M. M. Y., Escalante, H. J., Villaseñor-Pineda, L., Gómez-Adorno, H., ... & Posadas-Durán, J. P. (2020, September). Overview of mex-a3t at iberlef 2020: Fake news and aggressiveness analysis in mexican spanish. In Notebook Papers of 2nd SEPLN Workshop on Iberian Languages Evaluation Forum (IberLEF), Malaga, Spain.
3) Posadas-Durán, J. P., Gómez-Adorno, H., Sidorov, G., & Escobar, J. J. M. (2019). Detection of fake news in a new corpus for the Spanish language. Journal of Intelligent & Fuzzy Systems, 36(5), 4869-4876.
### Fake News Detection Task at MEX-A3T
>> The Fake News Corpus in Spanish was used for the **Fake News Detection Task** in the **MEX-A3T** competition at the IberLEF 2020 congress. The details of the competition can be viewed on the main page of the [competition](https://sites.google.com/view/mex-a3t/).
### Authors of the corpus
Juan Manuel Ramírez Cruz (ESIME Zacatenco - IPN), Silvia Úrsula Palacios Alvarado (ESIME Zacatenco - IPN), Karime Elena Franca Tapia (ESIME Zacatenco - IPN), Juan Pablo Francisco Posadas Durán (ESIME Zacatenco - IPN), Helena Montserrat Gómez Adorno (IIMAS - UNAM), Grigori Sidorov (CIC - IPN)
### Acknowledgments
The work was done with the partial support of Red Temática de Tecnologías del Lenguaje, CONACYT project 240844, and SIP-IPN projects 20181849 and 20171813.
## License
[CC-BY-4.0](https://choosealicense.com/licenses/cc-by-4.0/).
| sayalaruano/FakeNewsCorpusSpanish | [
"region:us"
] | 2022-03-22T14:20:00+00:00 | {} | 2022-03-22T14:37:06+00:00 |
8a03d6240ada811ba3d603f813b91d3be4553764 |
This dataset was obtained from: https://www.kaggle.com/datasets/arseniitretiakov/noticias-falsas-en-espaol
| sayalaruano/FakeNewsSpanish_Kaggle1 | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-22T14:53:20+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-03-22T14:59:40+00:00 |
74fb40b34737bd14e40f2638ac00938243ec9ee3 |
This dataset was obtained from: https://www.kaggle.com/datasets/zulanac/fake-and-real-news | sayalaruano/FakeNewsSpanish_Kaggle2 | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-03-22T15:01:36+00:00 | {"license": "cc-by-nc-sa-4.0"} | 2022-03-22T15:02:43+00:00 |
421b0fbcfc6b450bea2364de6ace4e965cd98c8b | annotations_creators:
- machine-generated
language_creators:
- machine-generated
languages: []
licenses:
- mit
multilinguality: []
pretty_name: Multi-Radar/Multi-System Precipitation Radar
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- time-series-forecasting
- image-classification
- image-segmentation
- other
task_ids:
- univariate-time-series-forecasting
- multi-label-image-classification
- semantic-segmentation
# Dataset Card for MRMS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://mrms.nssl.noaa.gov/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Jacob Bieker](mailto:jacob@openclimatefix.org)
### Dataset Summary
Multi-Radar/Multi-System Precipitation Rate Radar data for 2016-2022. This data contains precipitation rate values for the continental United States.
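The card does not document the on-disk layout, but the repository can be fetched (in part) with huggingface_hub. This is a hedged sketch; the file pattern below is a placeholder to adjust after inspecting the repository's file listing.
```python
from huggingface_hub import snapshot_download

# Pull a slice of the dataset repository; "*2021*" is a hypothetical pattern
# meant to restrict the download to a single year of radar data.
local_dir = snapshot_download(
    repo_id="openclimatefix/mrms",
    repo_type="dataset",
    allow_patterns=["*2021*"],
)
print(local_dir)
```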
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
This dataset was constructed to help recreate the original datasets used for MetNet/MetNet-2 as well as the Deep Generative Model of Radar papers. Those datasets were not publicly released, but this dataset should cover the time period used in those papers, and more.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
US Government License, no restrictions
### Citation Information
@article{ocf:mrms,
  author = {Jacob Bieker},
  title = {MRMS Precipitation Rate Dataset},
  year = {2022}
} | openclimatefix/mrms | [
"doi:10.57967/hf/0885",
"region:us"
] | 2022-03-22T15:39:47+00:00 | {} | 2022-06-22T12:39:35+00:00 |
4f92928b7f48c7f12925055498f2ed92ac042e06 | Please cite: E. Cardenas, et al. “A Comparison of House Price Classification with Structured and Unstructured Text Data.” Published in AAAI FLAIRS-35, 2022. | erikacardenas300/Zillow-Text-Listings | [
"region:us"
] | 2022-03-22T19:24:10+00:00 | {} | 2022-03-23T01:47:24+00:00 |
a9cee35c7531ae57045e920c657dfced4bbc93e6 | jullarson/sdd | [
"license:apache-2.0",
"region:us"
] | 2022-03-22T20:40:54+00:00 | {"license": "apache-2.0"} | 2022-03-22T20:40:54+00:00 |
|
549d7035f8df8bcd19d41ea355a4a775273b08e5 | This is the corpus file from the [BEIR benchmark](https://github.com/beir-cellar/beir) for the [TREC-COVID 19 dataset](https://ir.nist.gov/trec-covid/).
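BEIR corpora are conventionally distributed as JSONL files whose records carry `_id`, `title` and `text` fields. A hedged reading sketch follows; the exact file name in this repository is an assumption to verify against the file listing.
```python
import json

from huggingface_hub import hf_hub_download

# "corpus.jsonl" is the usual BEIR file name, assumed here rather than confirmed.
path = hf_hub_download(
    repo_id="nreimers/trec-covid",
    repo_type="dataset",
    filename="corpus.jsonl",
)
with open(path) as f:
    doc = json.loads(next(f))
print(doc["_id"], doc["title"][:60])
```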
| nreimers/trec-covid | [
"region:us"
] | 2022-03-22T22:14:03+00:00 | {} | 2022-03-23T12:55:44+00:00 |
e3e19a9a95b3464d2aa336ccf473b4d1cc7de76b | # Dataset Card for Fake-news-latam-omdena
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[latam-chapters-news-detector](https://github.com/OmdenaAI/latam-chapters-news-detector)
- **Repository:**[latam-chapters-news-detector](https://github.com/OmdenaAI/latam-chapters-news-detector)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Since the Cambridge Analytica scandal, a Pandora's box has been opened around the world, bringing to light campaigns, some involving current Latin American leaders, that manipulate public opinion through social media to win elections. There is a common and simple pattern involving platforms such as Facebook and fake news, through which candidates are able to build a nefarious narrative for their own benefit. This is a growing concern for our democracies, as many of these practices have spread widely across the region and more people are gaining access to the internet. Thus, it is necessary to be able to warn the population, and for that we have to be able to quickly spot these plots on the net before the damage is irreversible.
Therefore, an initial effort was made to collect this dataset, which gathers news from different sources in Mexico, Colombia and El Salvador, with the objective of training a classification model and deploying it as part of the Politics Fake News Detector in LATAM (Latin America) project [https://github.com/OmdenaAI/latam-chapters-news-detector].
Website articles and tweets were considered.
### Supported Tasks and Leaderboards
Binary fake news classification [with classes "True" and "Fake"]
### Languages
Spanish only
## Dataset Structure
### Data Instances
* Train: 2782
* Test: 310
### Data Fields
[More Information Needed]
### Data Splits
Train and test. Each split was generated with a stratified procedure in order to have the same proportion of fake news in both train and test.
Around 1/3 of the observations in each split have the label 'Fake', while 2/3 have the label 'True'.
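The card does not publish the exact splitting code; the following is a minimal sketch of a stratified split with scikit-learn that reproduces the described proportions. The input file and its `label` column are hypothetical placeholders.
```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("news.csv")  # hypothetical file with "text" and "label" columns

# Stratifying on the label keeps the ~1/3 Fake / 2/3 True mix in both splits.
train_df, test_df = train_test_split(
    df,
    test_size=310 / (2782 + 310),  # matches the 2782 / 310 instance counts above
    stratify=df["label"],
    random_state=0,
)
print(train_df["label"].value_counts(normalize=True))
print(test_df["label"].value_counts(normalize=True))
```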
## Dataset Creation
### Curation Rationale
For a more specific flow of how the labeling was done, follow this link: https://github.com/OmdenaAI/latam-chapters-news-detector/blob/main/Fake-news_Flowchart.pdf
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Once the capacity to detect irregularities in news activity on the internet is developed, we might be able to counter disinformation with the help of additional research. As we reduce the time spent looking for those occurrences, more time can be devoted to validating the results and uncovering the truth, enabling researchers, journalists and organizations to help people make an informed decision about whether a claim circulating in public opinion is true, so that they can identify on their own when someone is trying to manipulate them for political benefit.
If this matter isn't tackled with enough urgency, we might see the rise of a new dark era in Latin American politics, where many unscrupulous parties and people will manage to gain power and control the lives of many people.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to the Omdena local chapter members from Mexico, Colombia and El Salvador for their amazing effort to collect and curate this dataset. | IsaacRodgz/Fake-news-latam-omdena | [
"region:us"
] | 2022-03-22T23:58:35+00:00 | {} | 2022-03-23T00:20:36+00:00 |
abdbddf991d8dbc29adc013f26970ba3232fd712 | - Problem type: Summarization
languages:
- en
multilinguality:
- monolingual
task_ids:
- summarization
# MeQSum
Dataset for medical question summarization introduced in the ACL 2019 paper "On the Summarization of Consumer Health Questions": https://www.aclweb.org/anthology/P19-1215
### Citation Information
```bibtex
@Inproceedings{MeQSum,
author = {Asma {Ben Abacha} and Dina Demner-Fushman},
title = {On the Summarization of Consumer Health Questions},
booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28th - August 2},
year = {2019},
abstract = {Question understanding is one of the main challenges in question answering. In real world applications, users often submit natural language questions that are longer than needed and include peripheral information that increases the complexity of the question, leading to substantially more false positives in answer retrieval. In this paper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000 summarized consumer health questions. We explore data augmentation methods and evaluate state-of-the-art neural abstractive models on this new task. In particular, we show that semantic augmentation from question datasets improves the overall performance, and that pointer-generator networks outperform sequence-to-sequence attentional models on this task, with a ROUGE-1 score of 44.16%. We also present a detailed error analysis and discuss directions for improvement that are specific to question summarization. }}
``` | sumedh/MeQSum | [
"license:apache-2.0",
"region:us"
] | 2022-03-23T04:21:51+00:00 | {"license": "apache-2.0"} | 2022-03-24T20:20:43+00:00 |
686600ca31931048cbb9f0acc98b83c29d0036b9 |
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
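For the low-resource simulation this extract is meant for, streaming a single language subset avoids downloading everything up front. A hedged sketch follows; the configuration name is assumed to mirror the original OSCAR naming scheme and should be checked against this repository's configs.
```python
from datasets import load_dataset

# Stream one language subset; the config name is an assumption borrowed from
# the original OSCAR dataset ("unshuffled_deduplicated_<lang>").
ds = load_dataset(
    "nthngdy/oscar-small",
    "unshuffled_deduplicated_en",
    split="train",
    streaming=True,
)
for example in ds.take(3):
    print(example["text"][:100])
```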
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, with the number of possible parallel operations at a given time bounded by the number of available threads instead of the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus goclassy's pipeline does not have to wait for a whole WET file to download, decompress and classify before starting to download and process the next one; a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
Filtering and cleaning at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
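The real pipeline is written in Go; the following is a minimal Python sketch of the line-level filtering and classification just described, using the same pre-trained fastText language-ID model (distributed by fastText as lid.176.bin).
```python
import fasttext

# Pre-trained fastText language-identification model (download lid.176.bin
# from the fastText website before running).
model = fasttext.load_model("lid.176.bin")

def classify_line(raw: bytes):
    """Apply the filters described above, then classify the line's language."""
    try:
        line = raw.decode("utf-8")  # lines with invalid UTF-8 are discarded
    except UnicodeDecodeError:
        return None
    line = line.strip()
    if len(line) < 100:             # lines under 100 UTF-8 characters are discarded
        return None
    labels, scores = model.predict(line.replace("\n", " "))
    return labels[0], float(scores[0])  # e.g. ("__label__en", 0.98)
```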
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the plain texts extracted from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply to legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier  and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
| nthngdy/oscar-small | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:oscar",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb",
"language:ba",
"language:be",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"language:ca",
"language:ce",
"language:ceb",
"language:ckb",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:no",
"language:or",
"language:os",
"language:pa",
"language:pl",
"language:pnb",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:sa",
"language:sah",
"language:sd",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:yi",
"language:zh",
"license:cc0-1.0",
"arxiv:2010.14571",
"region:us"
] | 2022-03-23T09:26:03+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["af", "am", "ar", "arz", "as", "az", "azb", "ba", "be", "bg", "bn", "bo", "br", "ca", "ce", "ceb", "ckb", "cs", "cv", "cy", "da", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gl", "gu", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mhr", "mk", "ml", "mn", "mr", "ms", "mt", "my", "nds", "ne", "nl", "nn", "no", "or", "os", "pa", "pl", "pnb", "ps", "pt", "ro", "ru", "sa", "sah", "sd", "sh", "si", "sk", "sl", "sq", "sr", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "ug", "uk", "ur", "uz", "vi", "yi", "zh"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "source_datasets": ["oscar"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "paperswithcode_id": "oscar", "pretty_name": "OSCAR"} | 2023-03-08T09:57:45+00:00 |
f2d4e2258fe51ace7062bbeb2a55ad1e890d1c72 |
# Licensing information
Apple MIT License (AML). | 10zinten/op_classical_corpus_bo | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|other",
"language:bo",
"license:other",
"region:us"
] | 2022-03-23T10:53:48+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["bo"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|other"], "task_categories": ["sequence-modeling"], "task_ids": ["language-modeling"], "pretty_name": "Tibetan Classical Buddhist Text Corpus"} | 2022-10-23T04:21:37+00:00 |
50da653240bbb86afedf9d408eae3c2f80aa646f | # GEM Submission
Submission name: This is a test name
| GEM-submissions/lewtun__this-is-a-test-name__1648048960 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-23T15:22:40+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test name", "tags": ["evaluation", "benchmark"]} | 2022-03-23T15:22:42+00:00 |
38dfa1ca1c9df28c02152e1ef34a5866014f7853 | # Citation
```
@article{pix2pix2017,
title={Image-to-Image Translation with Conditional Adversarial Networks},
author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
journal={CVPR},
year={2017}
}
``` | huggan/edges2shoes | [
"region:us"
] | 2022-03-23T16:12:59+00:00 | {} | 2022-04-12T13:18:05+00:00 |
f5cb9a55f7c4c9e07fa812b2dc21846fc3ffeb78 | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
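A hedged loading sketch for this repository; the split name and column layout are assumptions, so the code inspects the schema before touching any column.
```python
from datasets import load_dataset

ds = load_dataset("huggan/facades", split="train")  # split name assumed

example = ds[0]
print(example.keys())  # check the actual column names first

# If the dataset exposes a PIL-decodable image column (an assumption),
# it can be displayed with, e.g.:
# example["image"].show()
```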
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/facades | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-23T16:23:02+00:00 | {} | 2022-04-12T12:57:03+00:00 |
9f145cf7a60b15416a426add7fc62fbed8f94326 | # Citation
```
@article{pix2pix2017,
title={Image-to-Image Translation with Conditional Adversarial Networks},
author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
journal={CVPR},
year={2017}
}
``` | huggan/night2day | [
"region:us"
] | 2022-03-23T16:43:09+00:00 | {} | 2022-04-12T13:18:51+00:00 |
0d523b5d1a1ab77e4a3a4d86b9cbb3b432dc804d | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/. | huggan/maps | [
"region:us"
] | 2022-03-23T17:05:03+00:00 | {} | 2022-04-12T12:54:14+00:00 |
0eae7d40a244b73068f68bb8f0fd7e456fceb66b | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/cityscapes | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-23T20:09:01+00:00 | {} | 2022-04-12T12:56:44+00:00 |
199e90c44cdbc4f8323367513796e07f75df272f | This dataset is part of the CycleGAN datasets, originally hosted here: https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
# Citation
```
@article{DBLP:journals/corr/ZhuPIE17,
author = {Jun{-}Yan Zhu and
Taesung Park and
Phillip Isola and
Alexei A. Efros},
title = {Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial
Networks},
journal = {CoRR},
volume = {abs/1703.10593},
year = {2017},
url = {http://arxiv.org/abs/1703.10593},
eprinttype = {arXiv},
eprint = {1703.10593},
timestamp = {Mon, 13 Aug 2018 16:48:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZhuPIE17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/ae_photos | [
"arxiv:1703.10593",
"region:us"
] | 2022-03-23T21:00:46+00:00 | {} | 2022-04-12T12:56:12+00:00 |
33b406a36ec2927e14c7afd65c598cc57ba77701 | ### dataset-list
The datasets in this repository come from the public datasets DeepMatcher, Magellan and WDC, which cover a variety of domains, such as product, citation and restaurant. Each dataset contains entities from two relational tables with multiple attributes, and a set of labeled matching/non-matching entity pairs.
| dataset_name | domain |
| -------------- | ----------- |
| abt_buy | Product |
| amazon_google | Product |
| anime | Anime |
| beer | Product |
| books2 | Book |
| books4 | Book |
| cameras | WDC-Product |
| computers | WDC-Product |
| cosmetics | Cosmetics |
| dblp_acm | Citation |
| dblp_scholar | Citation |
| ebooks1 | eBook |
| fodors_zagat | Restaurant |
| itunes_amazon | Music |
| movies1 | Movie |
| restaurants1 | Restaurant |
| restaurants3 | Restaurant |
| restaurants4 | Restaurant |
| shoes | WDC-Product |
| walmart_amazon | Product |
| watches | WDC-Product |
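A hedged sketch for reading one of these datasets with pandas, assuming the DeepMatcher-style layout of two entity tables plus labeled pairs. Every file path below is hypothetical and should be verified against this repository's file listing.
```python
import pandas as pd
from huggingface_hub import hf_hub_download

def fetch(fname: str) -> str:
    # Paths are assumptions modeled on the original DeepMatcher releases.
    return hf_hub_download(
        repo_id="RUC-DataLab/ER-dataset",
        repo_type="dataset",
        filename=f"abt_buy/{fname}",
    )

table_a = pd.read_csv(fetch("tableA.csv"))
table_b = pd.read_csv(fetch("tableB.csv"))
pairs = pd.read_csv(fetch("train.csv"))  # expected columns: ltable_id, rtable_id, label

print(pairs["label"].value_counts())  # matching vs. non-matching pairs
```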
| RUC-DataLab/ER-dataset | [
"region:us"
] | 2022-03-24T01:49:22+00:00 | {} | 2022-07-05T06:58:55+00:00 |
11391706f44d60008a984b20fbc2a16ce392fa87 |
# liv4ever v1
This is the Livonian 4-lingual parallel corpus. Livonian is a Uralic / Finnic language with just about 20 fluent speakers and no native speakers (as of 2021). The texts and translations in this corpus were collected from all the digital text resources that could be found by the authors; scanned and printed materials are left for future work.
The corpus includes parallel data for Livonian-Latvian, Livonian-Estonian and Livonian-English; the data has been collected in 2021. After retrieval it was normalized in terms of different orthographies of Livonian and manually sentence-aligned where needed. It was collected from the following sources, with sentence counts per language pair:
* Dictionary - example sentences from the Livonian-Latvian-Estonian dictionary;
* liv-lv: 10'388,
* liv-et: 10'378
* Stalte - the alphabet book by Kōrli Stalte, translated into Estonian and Latvian;
* liv-lv: 842,
* liv-et: 685
* Poetry - the poetry collection book "Ma võtan su õnge, tursk / Ma akūb sīnda vizzõ, tūrska", with Estonian translations;
* liv-et: 770
* Vääri - the book by Eduard Vääri about Livonian language and culture;
* liv-et: 592
* Satversme - translations of the Latvian Constitution into Livonian, Estonian and English;
* liv-en: 380,
* liv-lv: 414,
* liv-et: 413
* Facebook - social media posts by the Livonian Institute and Livonian Days with original translations;
* liv-en: 123,
* liv-lv: 124,
* liv-et: 7
* JEFUL - article abstracts from the Journal of Estonian and Finno-Ugric Linguistics, special issues dedicated to Livonian studies, translated into Estonian and English;
* liv-en: 36,
* liv-et: 49
* Trilium - the book with a collection of Livonian poetry, foreword and afterword translated into Estonian and Latvian;
* liv-lv: 51,
* liv-et: 53
* Songs - material crawled off lyricstranslate.com;
* liv-en: 54,
* liv-lv: 54,
* liv-fr: 31 | tartuNLP/liv4ever | [
"task_categories:text2text-generation",
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"language:liv",
"license:cc-by-nc-sa-4.0",
"conditional-text-generation",
"region:us"
] | 2022-03-24T07:40:49+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en", "liv"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["translation"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "pretty_name": "Liv4ever", "language_bcp47": ["en-US", "liv"], "tags": ["conditional-text-generation"]} | 2022-10-25T11:30:49+00:00 |
639b92ed9b6f2c613185744d5e0d145e24b070b4 | # NQ-retrieval
This is a nicely formatted version of the [Natural Questions](https://ai.google.com/research/NaturalQuestions/) dataset, formatted to train and evaluate retrieval systems.
Each row contains the following entries:
- **question**: Original question sent to the Google Search Engine
- **title**: Title of Wikipedia article
- **candidates**: A list with the passages from the original Wikipedia HTML document
- **passage_types**: Types (text, table, list) of the candidate passages
- **long_answers**: IDs of the candidate passages that annotators selected as relevant. Might be empty if no relevant passage was identified
- **document_url**: URL of the source Wikipedia document | sentence-transformers/NQ-retrieval | [
"region:us"
] | 2022-03-24T08:17:51+00:00 | {} | 2022-03-24T08:18:36+00:00 |
1c3412bf9133897681719c058d1dbcc9221e89f1 | # GEM Submission
Submission name: This is a test name
| GEM-submissions/lewtun__this-is-a-test-name__1648111972 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-24T08:52:52+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test name", "tags": ["evaluation", "benchmark"]} | 2022-03-24T08:52:55+00:00 |
8ce4cef364c585c2d63a6b0ae7fc178995c9a34a | # Citation
```
@article{DBLP:journals/corr/abs-1710-10196,
author = {Tero Karras and
Timo Aila and
Samuli Laine and
Jaakko Lehtinen},
title = {Progressive Growing of GANs for Improved Quality, Stability, and Variation},
journal = {CoRR},
volume = {abs/1710.10196},
year = {2017},
url = {http://arxiv.org/abs/1710.10196},
eprinttype = {arXiv},
eprint = {1710.10196},
timestamp = {Mon, 13 Aug 2018 16:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1710-10196.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | huggan/CelebA-HQ | [
"arxiv:1710.10196",
"region:us"
] | 2022-03-24T09:12:05+00:00 | {} | 2022-04-12T13:10:49+00:00 |
44f81ea0f9562e2b49e02af7a98c77bd977341ad | Jira/mao | [
"license:gpl",
"region:us"
] | 2022-03-24T09:24:23+00:00 | {"license": "gpl"} | 2022-03-24T10:10:27+00:00 |
|
b76e14dda40adde0cc5a58831ece800723c1a29a | Gare/Classical_Chinese_to_Modern_Chinese | [
"license:mit",
"region:us"
] | 2022-03-24T12:55:43+00:00 | {"license": "mit"} | 2022-03-26T07:47:40+00:00 |