sha | text | id | tags | created_at | metadata | last_modified |
---|---|---|---|---|---|---|
b37680e9413ca148de6f60b3c4b9c956a11974c4 |
# Dataset Card
## Dataset Summary
We split [the original XQuAD dataset](https://github.com/deepmind/xquad) into subsets.
We keep the original data format.
## Supported Tasks
Extractive question answering.
## Language
Thai
## Dataset Split
There are 876/161/153 question-answer pairs from 34/7/7 articles for train/validation/test, respectively.
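The subsets can be loaded with the `datasets` library (a minimal sketch; check the repo for any configuration names):
```python
from datasets import load_dataset

ds = load_dataset("zhufy/xquad_split")
print(ds)  # expected splits: train / validation / test
```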
| zhufy/xquad_split | [
"region:us"
] | 2022-03-02T23:29:22+00:00 | {} | 2022-02-24T02:29:43+00:00 |
c574d814c1502e2cdbe22ad61ae0e56013f08a9a | # AutoNLP Dataset for project: traffic_nlp_binary
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project traffic_nlp_binary.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "1 train is still delayed in both directions",
"target": 1
},
{
"text": "maybe there was no train traffic ????. i know the feeling.",
"target": 1
}
]
```
### Data Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=2, names=['0', '1'], names_file=None, id=None)",
"text": "Value(dtype='string', id=None)"
}
```
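These features can be inspected programmatically after loading the dataset from the Hub (a minimal sketch using the repo id of this card):
```python
from datasets import load_dataset

# Repo id taken from this dataset card.
dataset = load_dataset("zwang199/autonlp-data-traffic_nlp_binary")
print(dataset["train"].features)
print(dataset["train"].features["target"].names)  # ['0', '1']
```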
### Data Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2195 |
| valid | 549 |
| zwang199/autonlp-data-traffic_nlp_binary | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-10-25T09:02:03+00:00 |
ad25d57e9499f8417e25ac06dd57f6010786aa65 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://fancyerii.github.io](https://fancyerii.github.io)
- **Repository:** fancyerii
- **Paper:** No Paper
- **Leaderboard:** No
- **Point of Contact:**
### Dataset Summary
Test dataset (测试数据集).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Chinese
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fancyerii](https://github.com/fancyerii) for adding this dataset.
| fancyerii/test | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"size_categories:10K<n<100K",
"region:us"
] | 2022-03-03T07:42:22+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["semantic-similarity-classification"], "pretty_name": "demo"} | 2022-10-25T09:02:14+00:00 |
67ebcf8c69b45feb3883d695f04227078a6c9da9 | # Dataset Card for anime-faces
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.kaggle.com/soumikrakshit/anime-faces
- **Repository:** https://www.kaggle.com/soumikrakshit/anime-faces
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** https://github.com/Mckinsey666
### Dataset Summary
This is a dataset consisting of 21,551 anime faces scraped from www.getchu.com, which were then cropped using the anime face detection algorithm in https://github.com/nagadomi/lbpcascade_animeface. All images were resized to 64 × 64 for the sake of convenience. Please also cite the two sources when using this dataset.
Some outliers are still present in the dataset:
- Bad cropping results
- Some non-human faces

Feel free to contribute to this dataset by adding images of similar quality or adding image labels.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
The dataset has a `data` folder with PNG files inside.
### Data Splits
Only a training split is provided.
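The images can be loaded with the `datasets` library (a minimal sketch; the `image` feature name is an assumption, so check the repo for the exact schema):
```python
from datasets import load_dataset

ds = load_dataset("huggan/anime-faces", split="train")  # only a train split exists
face = ds[0]["image"]  # assumed feature name; a 64x64 PIL image
```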
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information]
---
annotations_creators:
- found
language_creators:
- found
languages:
- unknown
licenses:
- unknown
multilinguality:
- unknown
pretty_name: anime-faces
size_categories:
- unknown
source_datasets:
- original
task_categories:
- image-classification
task_ids: []
--- | huggan/anime-faces | [
"license:cc0-1.0",
"region:us"
] | 2022-03-03T13:15:34+00:00 | {"license": "cc0-1.0"} | 2022-03-22T10:01:22+00:00 |
f0f49db9aeb2fe8e7640ae7ee10da1582ecd9569 | # GEM Submission
Submission name: This is a test
| GEM-submissions/lewtun__this-is-a-test__1646314818 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-03T13:40:20+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]} | 2022-03-03T13:40:29+00:00 |
2a1eb941a4459be7ac03c51e4c2875d938aee9bf | # GEM Submission
Submission name: This is a test
| GEM-submissions/lewtun__this-is-a-test__1646316929 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-03T14:15:31+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]} | 2022-03-03T14:15:35+00:00 |
fa900453f521486ba24c32a3045e2ee7ccd2a40f | firzens/authors | [
"region:us"
] | 2022-03-04T07:46:26+00:00 | {} | 2022-03-04T07:48:26+00:00 |
|
fdf66398fed02051156c3b34d80b2f4fbe5f01f4 | NLPC-UOM/Sinhala-Tamil-Aligned-Parallel-Corpus | [
"language:si",
"license:mit",
"region:us"
] | 2022-03-04T08:28:19+00:00 | {"annotations_creators": [], "language": ["si"], "license": ["mit"]} | 2022-10-25T09:02:16+00:00 |
|
d8ff10fc5ffd05877bf61ea19f0833565c5a6fd8 | # AnanyaSinhalaNERDataset
---
annotations_creators: []
language:
- si
license:
- mit
---
This is part of the dataset used in the paper: Manamini, S.A.P.M., Ahamed, A.F., Rajapakshe, R.A.E.C., Reemal, G.H.A., Jayasena, S., Dias, G.V. and Ranathunga, S., 2016, April. Ananya-a Named-Entity-Recognition (NER) system for Sinhala language. In 2016 Moratuwa Engineering Research Conference (MERCon) (pp. 30-35). IEEE.
| NLPC-UOM/AnanyaSinhalaNERDataset | [
"region:us"
] | 2022-03-04T08:32:54+00:00 | {} | 2022-10-25T09:02:18+00:00 |
10e2ca5f1dc12387e94e13477c4da59e20584b59 |
# Dataset Card for GFS-Reforecast
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Jacob Bieker](mailto:jacob@openclimatefix.org)
### Dataset Summary
This dataset consists of various sets of historical operational GFS forecasts and analysis files from 2016-2022. The analysis files and forecasts are initialized at 00, 06, 12, and 18 UTC every day and run for multiple hours. Additionally, the raw observations used to initialize the analyses and forecasts are included. The dataset is being expanded over time as more historical data and observations are processed.
The `data/forecasts/GFSv16/` folder holds the historical operational forecasts out to 48 hours from initialization, on all pressure levels, and for all variables that are present in every timestep (i.e., excluding accumulated values). The data is stored as zipped Zarr stores, openable with xarray.
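The zipped Zarr stores can be opened directly with `fsspec` and `xarray` (a minimal sketch; the store name below is hypothetical, as the actual file names depend on the initialization time):
```python
import fsspec
import xarray as xr

# Hypothetical store name - actual names depend on the initialization time.
mapper = fsspec.get_mapper("zip::data/forecasts/GFSv16/GFSv16_2022010100.zarr.zip")
ds = xr.open_zarr(mapper)
print(ds)  # inspect variables, pressure levels and forecast steps
```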
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
This dataset was constructed to provide a dataset similar to, and expanded from, the one used in the Keisler (2022) paper, where graph neural networks were used for weather forecasting.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
US Government License, no restrictions
### Citation Information
```
@article{gfs,
  author = {Jacob Bieker},
  title = {GFS NWP Weather Dataset},
  year = {2022}
}
```
| openclimatefix/gfs-reforecast | [
"region:us"
] | 2022-03-04T09:08:46+00:00 | {} | 2023-03-03T17:19:15+00:00 |
080f677a026e304c38666d759ef625d621dc8cb9 |
# Dataset Card for FiNER-139
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [SEC-BERT](#sec-bert)
- [About Us](#about-us)
## Dataset Description
- **Homepage:** [FiNER](https://github.com/nlpaueb/finer)
- **Repository:** [FiNER](https://github.com/nlpaueb/finer)
- **Paper:** [FiNER, Loukas et al. (2022)](https://arxiv.org/abs/2203.06482)
- **Point of Contact:** [Manos Fergadiotis](mailto:fergadiotis@aueb.gr)
### Dataset Summary
<div style="text-align: justify">
<strong>FiNER-139</strong> is comprised of 1.1M sentences annotated with <strong>eXtensive Business Reporting Language (XBRL)</strong> tags extracted from annual and quarterly reports of publicly-traded companies in the US.
Unlike other entity extraction tasks, like named entity recognition (NER) or contract element extraction, which typically require identifying entities of a small set of common types (e.g., persons, organizations), FiNER-139 uses a much larger label set of <strong>139 entity types</strong>.
Another important difference from typical entity extraction is that FiNER focuses on numeric tokens, with the correct tag depending mostly on context, not the token itself.
</div>
### Supported Tasks
<div style="text-align: justify">
To promote transparency among shareholders and potential investors, publicly traded companies are required to file periodic financial reports annotated with tags from the eXtensive Business Reporting Language (XBRL), an XML-based language, to facilitate the processing of financial information.
However, manually tagging reports with XBRL tags is tedious and resource-intensive.
We, therefore, introduce <strong>XBRL tagging</strong> as a <strong>new entity extraction task</strong> for the <strong>financial domain</strong> and study how financial reports can be automatically enriched with XBRL tags.
To facilitate research towards automated XBRL tagging we release FiNER-139.
</div>
### Languages
**FiNER-139** is compiled from approximately 10k annual and quarterly **English** reports.
## Dataset Structure
### Data Instances
This is a "train" split example:
```json
{
    "id": 40,
    "tokens": ["In", "March", "2014", ",", "the", "Rialto", "segment", "issued", "an", "additional", "$", "100", "million", "of", "the", "7.00", "%", "Senior", "Notes", ",", "at", "a", "price", "of", "102.25", "%", "of", "their", "face", "value", "in", "a", "private", "placement", "."],
    "ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 37, 0, 0, 0, 41, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Data Fields
**id**: ID of the example <br>
**tokens**: List of tokens for the specific example. <br>
**ner_tags**: List of tags for each token in the example. Tags are provided as integer classes.<br>
If you want to use the class names you can access them as follows:
```python
import datasets
finer_train = datasets.load_dataset("nlpaueb/finer-139", split="train")
finer_tag_names = finer_train.features["ner_tags"].feature.names
```
**finer_tag_names** contains a list of class names corresponding to the integer classes e.g.
```
0 -> "O"
1 -> "B-AccrualForEnvironmentalLossContingencies"
```
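For instance, the integer tags of the sample above can be mapped back to their label strings (reusing `finer_train` and `finer_tag_names` from the snippets above):
```python
example = finer_train[40]  # assuming the row index matches the `id` shown above
labels = [finer_tag_names[t] for t in example["ner_tags"]]
for token, label in zip(example["tokens"], labels):
    if label != "O":
        print(token, label)  # only the tagged (numeric) tokens
```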
### Data Splits
| Training | Validation | Test    |
| -------- | ---------- | ------- |
| 900,384  | 112,494    | 108,378 |
## Dataset Creation
### Curation Rationale
The dataset was curated by [Loukas et al. (2022)](https://arxiv.org/abs/2203.06482) <br>
### Source Data
#### Initial Data Collection and Normalization
<div style="text-align: justify">
FiNER-139 is compiled from approximately 10k annual and quarterly English reports (filings) of publicly traded companies downloaded from the [US Securities
and Exchange Commission's (SEC)](https://www.sec.gov/) [Electronic Data Gathering, Analysis, and Retrieval (EDGAR)](https://www.sec.gov/edgar.shtml) system.
The reports span a 5-year period, from 2016 to 2020. They are annotated with XBRL tags by professional auditors and describe the performance and projections of the companies. XBRL defines approximately 6k entity types from the US-GAAP taxonomy. FiNER-139 is annotated with the 139 most frequent XBRL entity types with at least 1,000 appearances.
We used regular expressions to extract the text notes from the Financial Statements Item of each filing, which is the primary source of XBRL tags in annual and quarterly reports. We used the <strong>IOB2</strong> annotation scheme to distinguish tokens at the beginning, inside, or outside of tagged expressions, which leads to 279 possible token labels.
</div>
### Annotations
#### Annotation process
<div style="text-align: justify">
All the examples were annotated by professional auditors as required by the Securities & Exchange Commission (SEC) legislation.
Even though the gold XBRL tags come from professional auditors, there are still some discrepancies. Consult [Loukas et al. (2022)](https://arxiv.org/abs/2203.06482), Section 9.4, for more details.
</div>
#### Who are the annotators?
Professional auditors
### Personal and Sensitive Information
The dataset contains publicly available annual and quarterly reports (filings)
## Additional Information
### Dataset Curators
[Loukas et al. (2022)](https://arxiv.org/abs/2203.06482)
### Licensing Information
<div style="text-align: justify">
Access to SEC's EDGAR public database is free, allowing research of public companies' financial information and operations by reviewing the filings the companies make with the SEC.
</div>
### Citation Information
If you use this dataset, please cite the following:
```
@inproceedings{loukas-etal-2022-finer,
title = {FiNER: Financial Numeric Entity Recognition for XBRL Tagging},
author = {Loukas, Lefteris and
Fergadiotis, Manos and
Chalkidis, Ilias and
Spyropoulou, Eirini and
Malakasiotis, Prodromos and
Androutsopoulos, Ion and
              Paliouras, George},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)},
publisher = {Association for Computational Linguistics},
location = {Dublin, Republic of Ireland},
year = {2022},
url = {https://arxiv.org/abs/2203.06482}
}
```
## SEC-BERT
<img align="center" src="https://i.ibb.co/0yz81K9/sec-bert-logo.png" alt="SEC-BERT" width="400"/>
<div style="text-align: justify">
We also pre-train our own BERT models (<strong>SEC-BERT</strong>) for the financial domain, intended to assist financial NLP research and FinTech applications. <br>
<strong>SEC-BERT</strong> consists of the following models:
* [**SEC-BERT-BASE**](https://huggingface.co/nlpaueb/sec-bert-base): Same architecture as BERT-BASE trained on financial documents.
* [**SEC-BERT-NUM**](https://huggingface.co/nlpaueb/sec-bert-num): Same as SEC-BERT-BASE but we replace every number token with a [NUM] pseudo-token handling all numeric expressions in a uniform manner, disallowing their fragmentation
* [**SEC-BERT-SHAPE**](https://huggingface.co/nlpaueb/sec-bert-shape): Same as SEC-BERT-BASE but we replace numbers with pseudo-tokens that represent the number’s shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'.
These models were pre-trained on 260,773 10-K filings (annual reports) from 1993-2019, publicly available at [U.S. Securities and Exchange Commission (SEC)](https://www.sec.gov/)
</div>
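The SEC-BERT checkpoints can be loaded with the standard `transformers` API (a minimal sketch; the other two models load the same way):
```python
from transformers import AutoTokenizer, AutoModel

# Model id taken from the list above.
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-base")
model = AutoModel.from_pretrained("nlpaueb/sec-bert-base")
```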
## About Us
<div style="text-align: justify">
[**AUEB's Natural Language Processing Group**](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
</div>
[Manos Fergadiotis](https://manosfer.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) | nlpaueb/finer-139 | [
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2203.06482",
"region:us"
] | 2022-03-04T10:00:23+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": ["structure-prediction", "named-entity-recognition", "entity-extraction"], "task_ids": ["named-entity-recognition"], "pretty_name": "FiNER-139"} | 2022-10-23T04:05:03+00:00 |
9283dd0d667c67679d54ae59bf871e765e81a8d7 | # GEM Submission
Submission name: SeqPlan
| GEM-submissions/ratishsp__seqplan__1646397329 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-04T12:35:30+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "SeqPlan", "tags": ["evaluation", "benchmark"]} | 2022-03-04T12:35:32+00:00 |
376f8f130939ea4c01e718c71e2cf8f88577e5ef | # GEM Submission
Submission name: SeqPlan - RotoWire
| GEM-submissions/ratishsp__seqplan__1646397829 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-04T12:43:49+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "SeqPlan - RotoWire", "tags": ["evaluation", "benchmark"]} | 2022-03-14T09:21:16+00:00 |
4bbf7c8537c8d75ea9b57ec23b4e33505d365cce |
# Dataset Card alvenir_asr_da_eval
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Prompts/sentence selection](#promptssentence-selection)
- [Recording](#recording)
- [Evaluation](#evaluation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** https://alvenir.ai
- **Repository:** https://github.com/danspeech/alvenir-asr-da-eval/
### Dataset Summary
This dataset was created by Alvenir in order to evaluate ASR models in Danish. It can also be used for training but the amount is very limited.
The dataset consists of .wav files with corresponding reference text. The amount of data is just above 5 hours, spread across 50 speakers aged 20-60 years. The data was collected by a third-party vendor through their software and people. All recordings have been validated.
## Dataset Structure
### Data Instances
A data point consists of the path to the audio file (`path`) and its sentence (`sentence`). Additional fields, such as age and gender, will eventually be added.
```
{'audio': {'path': 'some_path.wav', 'array': array([-0.044223, -0.00031411, -0.00435671, ..., 0.00612312, 0.00014581, 0.00091009], dtype=float32), 'sampling_rate': 16000}}
```
### Data Fields
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
sentence: The sentence the user was prompted to speak
### Data Splits
Since the dataset is intended as a test/evaluation ASR dataset for Danish, there is only a test split.
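The evaluation data can be loaded with the `datasets` library (a minimal sketch):
```python
from datasets import load_dataset

ds = load_dataset("Alvenir/alvenir_asr_da_eval", split="test")
sample = ds[0]
print(sample["sentence"])                # reference text
print(sample["audio"]["sampling_rate"])  # 16000; audio is decoded on access
```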
## Dataset Creation
### Prompts/sentence selection
The sentences used for prompts were gathered from the Danish part of Open Subtitles (OSS) (need reference) and Wikipedia (WIKI). The OSS prompts were sampled randomly across the dataset, making sure that all prompts are unique. The WIKI prompts were selected by first training a topic model with 30 topics on Wikipedia and then randomly sampling an equal amount of unique sentences from each topic. All sentences were manually inspected.
### Recording
50 unique speakers were all sent 20 WIKI sentences and 60 sentences from OSS. The recordings took place through third party recording software.
### Evaluation
All recordings were evaluated by a third party to confirm alignment between audio and text.
### Personal and Sensitive Information
The dataset consists of people who have given their voice to the dataset for ASR purposes. You agree to not attempt to determine the identity of any of the speakers in the dataset.
### Licensing Information
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/)
| Alvenir/alvenir_asr_da_eval | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-04T13:14:47+00:00 | {"license": "cc-by-4.0"} | 2022-06-16T08:13:33+00:00 |
3cf59334aa52a74c008a67a3de30f98dd8a28118 |
# XTREME-S
## Dataset Description
- **Fine-Tuning script:** [research-projects/xtreme-s](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s)
- **Paper:** [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752)
- **Leaderboard:** [TODO(PVP)]()
- **FLEURS amount of disk used:** 350 GB
- **Multilingual Librispeech amount of disk used:** 2700 GB
- **Voxpopuli amount of disk used:** 400 GB
- **Covost2 amount of disk used:** 70 GB
- **Minds14 amount of disk used:** 5 GB
- **Total amount of disk used:** ca. 3500 GB
The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is a benchmark designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval.
***TLDR; XTREME-S is the first speech benchmark that is both diverse, fully accessible, and reproducible. All datasets can be downloaded with a single line of code.
An easy-to-use and flexible fine-tuning script is provided and actively maintained.***
XTREME-S covers speech recognition with Fleurs, Multilingual LibriSpeech (MLS) and VoxPopuli, speech translation with CoVoST-2, speech classification with LangID (Fleurs) and intent classification (MInds-14) and finally speech(-text) retrieval with Fleurs. Each of the tasks covers a subset of the 102 languages included in XTREME-S, from various regions:
- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*
## Design principles
### Diversity
XTREME-S aims for task, domain and language
diversity. Tasks should be diverse and cover several domains to
provide a reliable evaluation of model generalization and
robustness to noisy naturally-occurring speech in different
environments. Languages should be diverse to ensure that
models can adapt to a wide range of linguistic and phonological
phenomena.
### Accessibility
The sub-dataset for each task can be downloaded
with a **single line of code** as shown in [Supported Tasks](#supported-tasks).
Each task is available under a permissive license that allows the use and redistribution
of the data for research purposes. Tasks have been selected based on their usage by
pre-existing multilingual pre-trained models, for simplicity.
### Reproducibility
We produce fully **open-sourced, maintained and easy-to-use** fine-tuning scripts
for each task as shown under [Fine-tuning Example](#fine-tuning-and-evaluation-example).
XTREME-S encourages submissions that leverage publicly available speech and text datasets. Users should detail which data they use.
In general, we encourage settings that can be reproduced by the community, but also encourage the exploration of new frontiers for speech representation learning.
## Fine-tuning and Evaluation Example
We provide a fine-tuning script under [**research-projects/xtreme-s**](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s).
The fine-tuning script is written in PyTorch and allows one to fine-tune and evaluate any [Hugging Face model](https://huggingface.co/models) on XTREME-S.
The example script is actively maintained by [@anton-l](https://github.com/anton-l) and [@patrickvonplaten](https://github.com/patrickvonplaten). Feel free
to reach out via issues or pull requests on GitHub if you have any questions.
## Leaderboards
The leaderboard for the XTREME-S benchmark can be found at [this address (TODO(PVP))]().
## Supported Tasks
Note that the supported tasks focus particularly on the linguistic aspects of speech,
while non-linguistic/paralinguistic aspects of speech relevant to e.g. speech synthesis or voice conversion are **not** evaluated.
<p align="center">
<img src="https://github.com/patrickvonplaten/scientific_images/raw/master/xtreme_s.png" alt="Datasets used in XTREME"/>
</p>
### 1. Speech Recognition (ASR)
We include three speech recognition datasets: FLEURS-ASR, MLS and VoxPopuli (optionally BABEL). Multilingual fine-tuning is used for these three datasets.
#### FLEURS-ASR
*FLEURS-ASR* is the speech version of the FLORES machine translation benchmark, covering 2000 n-way parallel sentences in n=102 languages.
```py
from datasets import load_dataset
fleurs_asr = load_dataset("google/xtreme_s", "fleurs.af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/xtreme_s", "fleurs.all")
# see structure
print(fleurs_asr)
# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"] # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
all_language_groups[lang_group_id]
```
#### Multilingual LibriSpeech (MLS)
*MLS* is a large multilingual corpus derived from read audiobooks from LibriVox and consists of 8 languages. For this challenge the training data is limited to 10-hour splits.
```py
from datasets import load_dataset
mls = load_dataset("google/xtreme_s", "mls.pl") # for Polish
# to download all data for multi-lingual fine-tuning uncomment following line
# mls = load_dataset("google/xtreme_s", "mls.all")
# see structure
print(mls)
# load audio sample on the fly
audio_input = mls["train"][0]["audio"] # first decoded audio sample
transcription = mls["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
#### VoxPopuli
*VoxPopuli* is a large-scale multilingual speech corpus for representation learning and semi-supervised learning, from which we use the speech recognition dataset. The raw data is collected from 2009-2020 European Parliament event recordings. We acknowledge the European Parliament for creating and sharing these materials.
**VoxPopuli requires downloading the whole dataset (~100 GB), since the languages are entangled with each other - it may not be worth testing here due to the size.**
```py
from datasets import load_dataset
voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.ro") # for Romanian
# to download all data for multi-lingual fine-tuning uncomment following line
# voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.all")
# see structure
print(voxpopuli)
# load audio sample on the fly
audio_input = voxpopuli["train"][0]["audio"] # first decoded audio sample
transcription = voxpopuli["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
#### (Optionally) BABEL
*BABEL* from IARPA is a conversational speech recognition dataset in low-resource languages. First, download LDC2016S06, LDC2016S12, LDC2017S08, LDC2017S05 and LDC2016S13. BABEL is the only dataset in our benchmark that is less easily accessible, so you will need to sign in to get access to it on LDC. Although not officially part of the XTREME-S ASR datasets, BABEL is often used for evaluating speech representations on a difficult domain (phone conversations).
```py
from datasets import load_dataset
babel = load_dataset("google/xtreme_s", "babel.as")
```
**The above command is expected to fail with a nice error message,
explaining how to download BABEL**
The following should work:
```py
from datasets import load_dataset
babel = load_dataset("google/xtreme_s", "babel.as", data_dir="/path/to/IARPA_BABEL_OP1_102_LDC2016S06.zip")
# see structure
print(babel)
# load audio sample on the fly
audio_input = babel["train"][0]["audio"] # first decoded audio sample
transcription = babel["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
### 2. Speech Translation (ST)
We include the CoVoST-2 dataset for automatic speech translation.
#### CoVoST-2
The *CoVoST-2* benchmark has become a commonly used dataset for evaluating automatic speech translation. It covers language pairs from English into 15 languages, as well as 21 languages into English. We use only the "X->En" direction to evaluate cross-lingual representations. The amount of supervision varies greatly in this setting, from one hour for Japanese->English to 180 hours for French->English. This makes pretraining particularly useful to enable such few-shot learning. We enforce multilingual fine-tuning for simplicity. Results are split into high/med/low-resource language pairs as explained in the [paper (TODO(PVP))].
```py
from datasets import load_dataset
covost_2 = load_dataset("google/xtreme_s", "covost2.id.en") # for Indonesian to English
# to download all data for multi-lingual fine-tuning uncomment following line
# covost_2 = load_dataset("google/xtreme_s", "covost2.all")
# see structure
print(covost_2)
# load audio sample on the fly
audio_input = covost_2["train"][0]["audio"] # first decoded audio sample
transcription = covost_2["train"][0]["transcription"] # first transcription
translation = covost_2["train"][0]["translation"] # first translation
# use audio_input and translation to fine-tune your model for AST
```
### 3. Speech Classification
We include two multilingual speech classification datasets: FLEURS-LangID and Minds-14.
#### Language Identification - FLEURS-LangID
LangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language. We simply create a single train/valid/test split for LangID by merging all languages.
```py
from datasets import load_dataset
fleurs_langID = load_dataset("google/xtreme_s", "fleurs.all") # to download all data
# see structure
print(fleurs_langID)
# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"] # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"] # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]
# use audio_input and language_class to fine-tune your model for audio classification
```
#### Intent classification - Minds-14
Minds-14 is an intent classification dataset made from e-banking speech data in 14 languages, with 14 intent labels. We impose a single multilingual fine-tuning setup to increase the size of the train and test sets and reduce the variance associated with the small size of the dataset per language.
```py
from datasets import load_dataset
minds_14 = load_dataset("google/xtreme_s", "minds14.fr-FR") # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("google/xtreme_s", "minds14.all")
# see structure
print(minds_14)
# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"] # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"] # first intent class
intent = minds_14["train"].features["intent_class"].names[intent_class]
# use `audio_input` and `intent_class` to fine-tune your model for audio classification
```
### 4. (Optionally) Speech Retrieval
We optionally include one speech retrieval dataset: FLEURS-Retrieval as explained in the [FLEURS paper](https://arxiv.org/abs/2205.12446).
#### FLEURS-Retrieval
FLEURS-Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining a.k.a sentence translation retrieval, we use FLEURS-Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoder for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of FLEURS-Retrieval whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.
```py
from datasets import load_dataset
fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.all")
# see structure
print(fleurs_retrieval)
# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"] # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"] # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"] # negative text samples
# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```
Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.
## Dataset Structure
The XTREME-S benchmark is composed of the following datasets:
- [FLEURS](https://huggingface.co/datasets/google/fleurs#dataset-structure)
- [Multilingual Librispeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech#dataset-structure)
Note that for MLS, XTREME-S uses `path` instead of `file` and `transcription` instead of `text`.
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli#dataset-structure)
- [Minds14](https://huggingface.co/datasets/polyai/minds14#dataset-structure)
- [Covost2](https://huggingface.co/datasets/covost2#dataset-structure)
Note that for Covost2, XTREME-S uses `path` instead of `file` and `transcription` instead of `sentence`.
- [BABEL](https://huggingface.co/datasets/ldc/iarpa_babel#dataset-structure)
Please click on the link of the dataset cards to get more information about its dataset structure.
## Dataset Creation
The XTREME-S benchmark is composed of the following datasets:
- [FLEURS](https://huggingface.co/datasets/google/fleurs#dataset-creation)
- [Multilingual Librispeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech#dataset-creation)
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli#dataset-creation)
- [Minds14](https://huggingface.co/datasets/polyai/minds14#dataset-creation)
- [Covost2](https://huggingface.co/datasets/covost2#dataset-creation)
- [BABEL](https://huggingface.co/datasets/ldc/iarpa_babel#dataset-creation)
Please visit the corresponding dataset cards to get more information about the source data.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to encourage the development of speech technology in many more languages of the world. One of the goals is to give everyone equal access to technologies like speech recognition or speech translation, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).
### Discussion of Biases
Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through XTREME-S should generalize to all languages.
### Other Known Limitations
The benchmark has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on XTREME-S should still correlate well with actual progress made for speech understanding.
## Additional Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
#### XTREME-S
```
@article{conneau2022xtreme,
title={XTREME-S: Evaluating Cross-lingual Speech Representations},
author={Conneau, Alexis and Bapna, Ankur and Zhang, Yu and Ma, Min and von Platen, Patrick and Lozhkov, Anton and Cherry, Colin and Jia, Ye and Rivera, Clara and Kale, Mihir and others},
journal={arXiv preprint arXiv:2203.10752},
year={2022}
}
```
#### MLS
```
@article{Pratap2020MLSAL,
title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
journal={ArXiv},
year={2020},
volume={abs/2012.03411}
}
```
#### VoxPopuli
```
@article{wang2021voxpopuli,
title={Voxpopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation},
author={Wang, Changhan and Riviere, Morgane and Lee, Ann and Wu, Anne and Talnikar, Chaitanya and Haziza, Daniel and Williamson, Mary and Pino, Juan and Dupoux, Emmanuel},
journal={arXiv preprint arXiv:2101.00390},
year={2021}
}
```
#### CoVoST 2
```
@article{DBLP:journals/corr/abs-2007-10310,
author = {Changhan Wang and
Anne Wu and
Juan Miguel Pino},
title = {CoVoST 2: {A} Massively Multilingual Speech-to-Text Translation Corpus},
journal = {CoRR},
volume = {abs/2007.10310},
year = {2020},
url = {https://arxiv.org/abs/2007.10310},
eprinttype = {arXiv},
eprint = {2007.10310},
timestamp = {Thu, 12 Aug 2021 15:37:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-10310.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
#### Minds14
```
@article{gerz2021multilingual,
title={Multilingual and cross-lingual intent detection from spoken data},
author={Gerz, Daniela and Su, Pei-Hao and Kusztos, Razvan and Mondal, Avishek and Lis, Micha{\l} and Singhal, Eshan and Mrk{\v{s}}i{\'c}, Nikola and Wen, Tsung-Hsien and Vuli{\'c}, Ivan},
journal={arXiv preprint arXiv:2104.08524},
year={2021}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@anton-l](https://github.com/anton-l), [@aconneau](https://github.com/aconneau) for adding this dataset
| google/xtreme_s | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|multilingual_librispeech",
"source_datasets:extended|covost2",
"language:afr",
"language:amh",
"language:ara",
"language:asm",
"language:ast",
"language:azj",
"language:bel",
"language:ben",
"language:bos",
"language:cat",
"language:ceb",
"language:cmn",
"language:ces",
"language:cym",
"language:dan",
"language:deu",
"language:ell",
"language:eng",
"language:spa",
"language:est",
"language:fas",
"language:ful",
"language:fin",
"language:tgl",
"language:fra",
"language:gle",
"language:glg",
"language:guj",
"language:hau",
"language:heb",
"language:hin",
"language:hrv",
"language:hun",
"language:hye",
"language:ind",
"language:ibo",
"language:isl",
"language:ita",
"language:jpn",
"language:jav",
"language:kat",
"language:kam",
"language:kea",
"language:kaz",
"language:khm",
"language:kan",
"language:kor",
"language:ckb",
"language:kir",
"language:ltz",
"language:lug",
"language:lin",
"language:lao",
"language:lit",
"language:luo",
"language:lav",
"language:mri",
"language:mkd",
"language:mal",
"language:mon",
"language:mar",
"language:msa",
"language:mlt",
"language:mya",
"language:nob",
"language:npi",
"language:nld",
"language:nso",
"language:nya",
"language:oci",
"language:orm",
"language:ory",
"language:pan",
"language:pol",
"language:pus",
"language:por",
"language:ron",
"language:rus",
"language:bul",
"language:snd",
"language:slk",
"language:slv",
"language:sna",
"language:som",
"language:srp",
"language:swe",
"language:swh",
"language:tam",
"language:tel",
"language:tgk",
"language:tha",
"language:tur",
"language:ukr",
"language:umb",
"language:urd",
"language:uzb",
"language:vie",
"language:wol",
"language:xho",
"language:yor",
"language:yue",
"language:zul",
"license:cc-by-4.0",
"arxiv:2203.10752",
"arxiv:2205.12446",
"arxiv:2007.10310",
"region:us"
] | 2022-03-04T14:10:40+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["afr", "amh", "ara", "asm", "ast", "azj", "bel", "ben", "bos", "cat", "ceb", "cmn", "ces", "cym", "dan", "deu", "ell", "eng", "spa", "est", "fas", "ful", "fin", "tgl", "fra", "gle", "glg", "guj", "hau", "heb", "hin", "hrv", "hun", "hye", "ind", "ibo", "isl", "ita", "jpn", "jav", "kat", "kam", "kea", "kaz", "khm", "kan", "kor", "ckb", "kir", "ltz", "lug", "lin", "lao", "lit", "luo", "lav", "mri", "mkd", "mal", "mon", "mar", "msa", "mlt", "mya", "nob", "npi", "nld", "nso", "nya", "oci", "orm", "ory", "pan", "pol", "pus", "por", "ron", "rus", "bul", "snd", "slk", "slv", "sna", "som", "srp", "swe", "swh", "tam", "tel", "tgk", "tha", "tur", "ukr", "umb", "urd", "uzb", "vie", "wol", "xho", "yor", "yue", "zul"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|multilingual_librispeech", "extended|covost2"], "task_categories": ["automatic-speech-recognition", "speech-processing"], "task_ids": ["speech-recognition"], "paperswithcode_id": "librispeech-1", "pretty_name": "The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech (XTREME-S) benchmark is a benchmark designed to evaluate speech representations across languages, tasks, domains and data regimes. It covers 102 languages from 10+ language families, 3 different domains and 4 task families: speech recognition, translation, classification and retrieval."} | 2022-07-28T11:47:02+00:00 |
4d770e93b949baa821a5a6603039849e590cb260 | anjandash/java-8m-methods-v1 | [
"multilinguality:monolingual",
"license:mit",
"region:us"
] | 2022-03-04T17:16:46+00:00 | {"language": ["java"], "license": ["mit"], "multilinguality": ["monolingual"], "pretty_name": ["java-8m-methods-v1"]} | 2022-07-01T19:32:32+00:00 |
|
b46e2b76a97206642c5af891b8eb9bc6dad228b7 |
# Dataset Card for ElkarHizketak
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ElkarHizketak homepage](http://ixa.si.ehu.es/node/12934)
- **Paper:** [Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque](https://aclanthology.org/2020.lrec-1.55/)
- **Point of Contact:** [Arantxa Otegi](mailto:arantza.otegi@ehu.eus)
### Dataset Summary
ElkarHizketak is a low resource conversational Question Answering (QA) dataset in Basque created by Basque speaker volunteers. The dataset contains close to 400 dialogues and more than 1600 questions and answers, and its small size presents a realistic low-resource scenario for conversational QA systems. The dataset is built on top of Wikipedia sections about popular people and organizations. The dialogues involve two crowd workers: (1) a student asks questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text from the section.
### Supported Tasks and Leaderboards
- `extractive-qa`: The dataset can be used to train a model for Conversational Question Answering.
### Languages
The text in the dataset is in Basque.
## Dataset Structure
### Data Instances
An example from the train split:
```
{'dialogue_id': 'C_50be3f56f0d04c99a82f1f950baf0c2d',
'wikipedia_page_title': 'Howard Becker',
'background': 'Howard Saul Becker (Chicago,Illinois, 1928ko apirilaren 18an) Estatu Batuetako soziologoa bat da. Bere ekarpen handienak desbiderakuntzaren soziologian, artearen soziologian eta musikaren soziologian egin ditu. "Outsiders" (1963) bere lanik garrantzitsuetako da eta bertan garatu zuen bere etiketatze-teoria. Nahiz eta elkarrekintza sinbolikoaren edo gizarte-konstruktibismoaren korronteen barruan sartu izan, berak ez du bere burua inongo paradigman kokatzen. Chicagoko Unibertsitatean graduatua, Becker Chicagoko Soziologia Eskolako bigarren belaunaldiaren barruan kokatu ohi da, Erving Goffman eta Anselm Strauss-ekin batera.',
'section_title': 'Hastapenak eta hezkuntza.',
'context': 'Howard Saul Becker Chicagon jaio zen 1928ko apirilaren 18an. Oso gazte zelarik piano jotzen asi zen eta 15 urte zituenean dagoeneko tabernetan aritzen zen pianoa jotzen. Beranduago Northwestern Unibertsitateko banda batean jo zuen. Beckerren arabera, erdi-profesional gisa aritu ahal izan zen Bigarren Mundu Gerra tokatu eta musikari gehienak soldadugai zeudelako. Musikari bezala egin zuen lan horretan egin zuen lehen aldiz drogaren kulturaren ezagutza, aurrerago ikerketa-gai hartuko zuena. 1946an bere graduazpiko soziologia titulua lortu zuen Chicagoko Unibertsitatean. Ikasten ari zen bitartean, pianoa jotzen jarraitu zuen modu erdi-profesionalean. Hala ere, soziologiako masterra eta doktoretza eskuratu zituen Chicagoko Unibertsitatean. Unibertsitate horretan Chicagoko Soziologia Eskolaren jatorrizko tradizioaren barruan hezia izan zen. Chicagoko Soziologia Eskolak garrantzi berezia ematen zion datu kualitatiboen analisiari eta Chicagoko hiria hartzen zuen ikerketa eremu bezala. Beckerren hasierako lan askok eskola honen tradizioaren eragina dute, bereziko Everett C. Hughes-en eragina, bere tutore eta gidari izan zena. Askotan elkarrekintzaile sinboliko bezala izendatua izan da, nahiz eta Beckerek berak ez duen gogoko izendapen hori. Haren arabera, bere leinu akademikoa Georg Simmel, Robert E. Park eta Everett Hughes dira. Doktoretza lortu ostean, 23 urterekin, Beckerrek marihuanaren erabilpena ikertu zuen "Institut for Juvenil Reseac"h-en. Ondoren Illinoisko Unibertsitatean eta Standfor Unibertsitateko ikerketa institutu batean aritu zen bere irakasle karrera hasi aurretik. CANNOTANSWER',
'turn_id': 'C_50be3f56f0d04c99a82f1f950baf0c2d_q#0',
'question': 'Zer da desbiderakuntzaren soziologia?',
'yesno': 2,
'answers': {'text': ['CANNOTANSWER'],
'answer_start': [1601],
'input_text': ['CANNOTANSWER']},
'orig_answer': {'text': 'CANNOTANSWER', 'answer_start': 1601}}
```
### Data Fields
The different fields are:
- `dialogue_id`: string,
- `wikipedia_page_title`: title of the wikipedia page as a string,
- `background`: string,
- `section_title`: title of the section as a string,
- `context`: context of the question as a string,
- `turn_id`: string,
- `question`: question as a string,
- `yesno`: Class label that represents if the question is a yes/no question. Possible values are "y" (0), "n" (1), "x" (2),
- `answers`: a dictionary with three fields:
  - `text`: list of answer texts as strings,
  - `answer_start`: list of positions of the answers in the context as an int32,
  - `input_text`: list of strings,
- `orig_answer`: a dictionary with two fields:
  - `text`: original answer text as a string,
  - `answer_start`: original position of the answer as an int32.
### Data Splits
The data is split into training, development and test sets. The split sizes are as follows:
- train: 1,306 questions / 301 dialogues
- development: 161 questions / 38 dialogues
- test: 167 questions / 38 dialogues
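For a quick look at the data, the dataset can be loaded with the `datasets` library. A minimal sketch (the Hub identifier `elkarhizketak` and the `train` split name are assumptions based on this card):
```python
from datasets import load_dataset

# Minimal sketch: load ElkarHizketak and inspect one training example.
dataset = load_dataset("elkarhizketak")
print(dataset)  # split names and sizes

example = dataset["train"][0]
print(sorted(example.keys()))  # background, context, dialogue_id, ...
```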
## Dataset Creation
### Curation Rationale
This is the first non-English conversational QA dataset and the first conversational dataset for Basque. Its small size presents a realistic low-resource scenario for conversational QA systems.
### Source Data
#### Initial Data Collection and Normalization
First, we selected sections of Wikipedia articles about people, as less specialized knowledge is required to converse about people than about other categories. In order to retrieve articles we selected the following categories in Basque Wikipedia: Biografia (Biography is the equivalent category in English Wikipedia), Biografiak (People) and Gizabanako biziak (Living people). We applied this category filter and downloaded the articles using a querying tool provided by the Wikimedia Foundation. Once we retrieved the articles, we selected sections from them that contained between 175 and 300 words. These filters and thresholds were set after some pilot studies in which we checked the adequacy of the people involved in the selected articles and the length of the passages, in order to have enough, but not too much, information to hold a conversation.
Then, dialogues were collected during some online sessions that we arranged with Basque-speaking volunteers. The dialogues involve two crowd workers: (1) a student asks questions after reading a small introduction about the person, but without seeing the section text; and (2) a teacher answers the questions by selecting a span of text from the section.
#### Who are the source language producers?
The language producers are Basque-speaking volunteers who held a conversation using a text-based chat interface developed for this purpose.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Arantxa Otegi, Jon Ander Campos, Aitor Soroa and Eneko Agirre from the [HiTZ Basque Center for Language Technologies](https://www.hitz.eus/) and [Ixa NLP Group](https://www.ixa.eus/) at the University of the Basque Country (UPV/EHU).
### Licensing Information
Copyright (C) by Ixa Taldea, University of the Basque Country UPV/EHU.
This dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0).
To view a copy of this license, visit [https://creativecommons.org/licenses/by-sa/4.0/legalcode](https://creativecommons.org/licenses/by-sa/4.0/legalcode).
### Citation Information
If you are using this dataset in your work, please cite this publication:
```bibtex
@inproceedings{otegi-etal-2020-conversational,
title = "{Conversational Question Answering in Low Resource Scenarios: A Dataset and Case Study for Basque}",
author = "Otegi, Arantxa and
Agirre, Aitor and
Campos, Jon Ander and
Soroa, Aitor and
Agirre, Eneko",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.55",
pages = "436--442"
}
```
### Contributions
Thanks to [@antxa](https://github.com/antxa) for adding this dataset. | elkarhizketak | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:eu",
"license:cc-by-sa-4.0",
"dialogue-qa",
"region:us"
] | 2022-03-04T19:04:55+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["eu"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "ElkarHizketak", "tags": ["dialogue-qa"], "dataset_info": {"features": [{"name": "dialogue_id", "dtype": "string"}, {"name": "wikipedia_page_title", "dtype": "string"}, {"name": "background", "dtype": "string"}, {"name": "section_title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "turn_ids", "sequence": "string"}, {"name": "questions", "sequence": "string"}, {"name": "yesnos", "sequence": {"class_label": {"names": {"0": "y", "1": "n", "2": "x"}}}}, {"name": "answers", "sequence": [{"name": "texts", "sequence": "string"}, {"name": "answer_starts", "sequence": "int32"}, {"name": "input_texts", "sequence": "string"}]}, {"name": "orig_answers", "struct": [{"name": "texts", "sequence": "string"}, {"name": "answer_starts", "sequence": "int32"}]}], "config_name": "plain_text", "splits": [{"name": "train", "num_bytes": 1024378, "num_examples": 301}, {"name": "validation", "num_bytes": 125667, "num_examples": 38}, {"name": "test", "num_bytes": 127640, "num_examples": 38}], "download_size": 1927474, "dataset_size": 1277685}} | 2024-01-18T11:18:59+00:00 |
fb8b329c87153970e0d65e79f8b50220cc2b5ed9 |
# Dataset Card for HashSet Distant Sampled
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel-cased hashtags, each paired with its segmentation.
HashSet Distant Sampled is a sample of 20,000 camel-cased hashtags drawn from the HashSet Distant dataset.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
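Since the segmentation differs from the hashtag only by whitespace (see the convention under Dataset Creation below), a quick consistency check can be written as follows. A minimal sketch: the Hub identifier `ruanchaves/hashset_distant_sampled` and the `train` split name are assumptions based on this card.
```python
from datasets import load_dataset

# Sketch: verify the whitespace-only invariant on one row.
dataset = load_dataset("ruanchaves/hashset_distant_sampled")

row = dataset["train"][0]
assert row["segmentation"].replace(" ", "") == row["hashtag"]
print(row["hashtag"], "->", row["segmentation"])
```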
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/hashset_distant_sampled | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:2201.06741",
"region:us"
] | 2022-03-04T22:13:50+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["hi", "en"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "HashSet Distant Sampled", "tags": ["word-segmentation"]} | 2022-10-20T18:13:24+00:00 |
0df29003f66c0cb4e17e908cb42e3843d4bd6b11 |
# Dataset Card for HashSet Distant
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel-cased hashtags, each paired with its segmentation.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
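Because HashSet Distant holds around 3.3M rows, streaming avoids downloading the full dataset just to peek at a few examples. A minimal sketch: the Hub identifier `ruanchaves/hashset_distant` and the `train` split name are assumptions based on this card.
```python
from datasets import load_dataset

# Sketch: stream a few rows instead of downloading all ~3.3M hashtags.
dataset = load_dataset("ruanchaves/hashset_distant", streaming=True)

for row in dataset["train"].take(3):
    print(row["hashtag"], "->", row["segmentation"])
```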
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/hashset_distant | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:2201.06741",
"region:us"
] | 2022-03-04T22:36:15+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["hi", "en"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "HashSet Distant", "tags": ["word-segmentation"]} | 2022-10-20T18:13:21+00:00 |
0a9c9a5d4ce9c5607c1939227efded92d225b28d | Edited version of cited dataset
Citation: Gupta, Raj, Vishwanath, Ajay, and Yang, Yinping. Global Reactions to COVID-19 on Twitter: A Labelled Dataset with Latent Topic, Sentiment and Emotion Attributes: Twitter COVID dataset Jan 2021. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2021-06-20. https://doi.org/10.3886/E120321V8-89860 | chiarab/covid-tweet-sentiment | [
"region:us"
] | 2022-03-04T22:56:30+00:00 | {} | 2022-03-04T23:35:22+00:00 |
d5aeed029db258e17d93b7e2bf0d1a84ff4f56e5 |
# Dataset Card for HashSet Manual
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act as a good benchmark for hashtag segmentation tasks.
HashSet Manual contains 1.9k manually annotated hashtags. Each row consists of the hashtag, the segmented hashtag, named-entity annotations, and flags indicating whether the hashtag contains a mix of Hindi and English tokens and/or non-English tokens.
### Languages
Mostly Hindi and English.
## Dataset Structure
### Data Instances
```
{
"index": 10,
"hashtag": "goodnewsmegan",
"segmentation": "good news megan",
"spans": {
"start": [
8
],
"end": [
13
],
"text": [
"megan"
]
},
"source": "roman",
"gold_position": null,
"mix": false,
"other": false,
"ner": true,
"annotator_id": 1,
"annotation_id": 2088,
"created_at": "2021-12-30 17:10:33.800607",
"updated_at": "2021-12-30 17:10:59.714840",
"lead_time": 3896.182,
"rank": {
"position": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
],
"candidate": [
"goodnewsmegan",
"goodnewsmeg an",
"goodnews megan",
"goodnewsmega n",
"go odnewsmegan",
"good news megan",
"good newsmegan",
"g oodnewsmegan",
"goodnewsme gan",
"goodnewsm egan"
]
}
}
```
### Data Fields
- `index`: a numerical index annotated by Kodali et al.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `spans`: named entity spans.
- `source`: data source.
- `gold_position`: position of the gold segmentation within the candidate list under `rank`.
- `mix`: whether the hashtag has a mix of English and Hindi tokens.
- `other`: whether the hashtag has non-English tokens.
- `ner`: whether the hashtag has named entities.
- `annotator_id`: annotator ID.
- `annotation_id`: annotation ID.
- `created_at`: Creation date timestamp.
- `updated_at`: Update date timestamp.
- `lead_time`: Lead time field annotated by Kodali et al.
- `rank`: a dict with `position` and `candidate` lists, holding the candidate segmentations produced by a baseline word segmenter (WordBreaker) and their ranks.
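In the instance above, the span offsets appear to index into the original `hashtag` with an exclusive end (characters 8 to 13 of "goodnewsmegan" spell "megan"). Treating that reading, the Hub identifier `ruanchaves/hashset_manual` and the `train` split name as assumptions, named entities can be recovered like this:
```python
from datasets import load_dataset

# Sketch: recover named-entity surface forms from the span offsets.
# The exclusive-end indexing into `hashtag` is an assumption based on
# the instance shown in this card.
dataset = load_dataset("ruanchaves/hashset_manual")

row = next(r for r in dataset["train"] if r["ner"])
spans = row["spans"]
for start, end, text in zip(spans["start"], spans["end"], spans["text"]):
    print(row["hashtag"][start:end], "==", text)
```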
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/hashset_manual | [
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:hi",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:2201.06741",
"region:us"
] | 2022-03-05T05:52:48+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["hi", "en"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": ["named-entity-recognition"], "pretty_name": "HashSet Manual", "tags": ["word-segmentation"]} | 2022-10-20T18:13:18+00:00 |
926842c8fbeadabe99a88d30d4b7ce06a42fb64c |
# Dataset Card for STAN Large
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [mounicam/hashtag_master](https://github.com/mounicam/hashtag_master)
- **Paper:** [Multi-task Pairwise Neural Ranking for Hashtag Segmentation](https://aclanthology.org/P19-1242/)
### Dataset Summary
The description below was taken from the paper "Multi-task Pairwise Neural Ranking for Hashtag Segmentation" by Maddela et al.
"STAN large, our new expert curated dataset, which includes all 12,594 unique English hashtags and their
associated tweets from the same Stanford dataset.
STAN small is the most commonly used dataset in previous work. However, after reexamination, we found annotation
errors in 6.8% of the hashtags in this dataset, which is significant given that the error rate of the state-of-the art
models is only around 10%. Most of the errors were related to named entities. For example, #lionhead,
which refers to the “Lionhead” video game company, was labeled as “lion head”.
We therefore constructed the STAN large dataset of 12,594 hashtags with additional quality control for human annotations."
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 15,
"hashtag": "PokemonPlatinum",
"segmentation": "Pokemon Platinum",
"alternatives": {
"segmentation": [
"Pokemon platinum"
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `alternatives`: other segmentations that are also accepted as a gold segmentation.
Although `segmentation` has exactly the same characters as `hashtag` except for the spaces, the segmentations inside `alternatives` may have characters corrected to uppercase.
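Because `alternatives` may correct casing, an evaluation script should accept any of the listed segmentations. A minimal sketch (the case-insensitive comparison is a choice made by this example, not prescribed by the dataset):
```python
def is_correct(prediction: str, example: dict) -> bool:
    """Accept the gold segmentation or any listed alternative."""
    golds = [example["segmentation"]] + example["alternatives"]["segmentation"]
    return any(prediction.lower() == gold.lower() for gold in golds)

example = {
    "hashtag": "PokemonPlatinum",
    "segmentation": "Pokemon Platinum",
    "alternatives": {"segmentation": ["Pokemon platinum"]},
}
assert is_correct("pokemon platinum", example)
```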
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{maddela-etal-2019-multi,
title = "Multi-task Pairwise Neural Ranking for Hashtag Segmentation",
author = "Maddela, Mounica and
Xu, Wei and
Preo{\c{t}}iuc-Pietro, Daniel",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1242",
doi = "10.18653/v1/P19-1242",
pages = "2538--2549",
abstract = "Hashtags are often employed on social media and beyond to add metadata to a textual utterance with the goal of increasing discoverability, aiding search, or providing additional semantics. However, the semantic content of hashtags is not straightforward to infer as these represent ad-hoc conventions which frequently include multiple words joined together and can include abbreviations and unorthodox spellings. We build a dataset of 12,594 hashtags split into individual segments and propose a set of approaches for hashtag segmentation by framing it as a pairwise ranking problem between candidate segmentations. Our novel neural approaches demonstrate 24.6{\%} error reduction in hashtag segmentation accuracy compared to the current state-of-the-art method. Finally, we demonstrate that a deeper understanding of hashtag semantics obtained through segmentation is useful for downstream applications such as sentiment analysis, for which we achieved a 2.6{\%} increase in average recall on the SemEval 2017 sentiment analysis dataset.",
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/stan_large | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:agpl-3.0",
"word-segmentation",
"region:us"
] | 2022-03-05T06:47:42+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["agpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "STAN Large", "tags": ["word-segmentation"]} | 2022-10-20T18:13:15+00:00 |
af6d38e28c5033a1f89b50b9e26950fe73550e29 |
# Dataset Card for STAN Small
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [mounicam/hashtag_master](https://github.com/mounicam/hashtag_master)
- **Paper:** [Multi-task Pairwise Neural Ranking for Hashtag Segmentation](https://aclanthology.org/P19-1242/)
### Dataset Summary
Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 300,
"hashtag": "microsoftfail",
"segmentation": "microsoft fail",
"alternatives": {
"segmentation": [
"Microsoft fail"
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `alternatives`: other segmentations that are also accepted as a gold segmentation.
Although `segmentation` has exactly the same characters as `hashtag` except for the spaces, the segmentations inside `alternatives` may have characters corrected to uppercase.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/stan_small | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:1501.03210",
"region:us"
] | 2022-03-05T07:02:09+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction", "conditional-text-generation"], "task_ids": [], "pretty_name": "STAN Small", "tags": ["word-segmentation"]} | 2022-10-20T18:13:12+00:00 |
27f9f67d4662570c17e251438164c3508643c32d |
# Dataset Card for BOUN
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)
### Dataset Summary
Dev-BOUN is a development set that includes 500 manually segmented hashtags, selected from tweets about movies, TV shows, popular people, sports teams, etc.
Test-BOUN is a test set that includes 500 manually segmented hashtags, selected from tweets about the same kinds of topics.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "tryingtosleep",
"segmentation": "trying to sleep"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
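A minimal loading sketch; the Hub identifier `ruanchaves/boun` is taken from this repository, while the split names for Dev-BOUN and Test-BOUN are assumptions:
```python
from datasets import load_dataset

# Sketch: load BOUN and report the size of each split.
dataset = load_dataset("ruanchaves/boun")
print({split: len(ds) for split, ds in dataset.items()})
```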
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/boun | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T07:17:18+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "BOUN", "tags": ["word-segmentation"]} | 2022-10-20T18:13:09+00:00 |
292e00146ecc1be6feefdb52362eace417791f4f |
# Dataset Card for Dev-Stanford
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting Hashtags and Analyzing Their Grammatical Structure](https://asistdl.onlinelibrary.wiley.com/doi/epdf/10.1002/asi.23989?author_access_token=qbKcE1jrre5nbv_Tn9csbU4keas67K9QMdWULTWMo8NOtY2aA39ck2w5Sm4ePQ1MZhbjCdEuaRlPEw2Kd12jzvwhwoWP0fdroZAwWsmXHPXxryDk_oBCup1i9_VDNIpU)
### Dataset Summary
1,000 hashtags manually segmented by Çelebi et al. for development purposes, randomly selected from the Stanford Sentiment Tweet Corpus by Sentiment140.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 15,
"hashtag": "marathonmonday",
"segmentation": "marathon monday"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{celebi2018segmenting,
title={Segmenting hashtags and analyzing their grammatical structure},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
journal={Journal of the Association for Information Science and Technology},
volume={69},
number={5},
pages={675--686},
year={2018},
publisher={Wiley Online Library}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/dev_stanford | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T07:28:41+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "Dev-Stanford", "tags": ["word-segmentation"]} | 2022-10-20T18:13:37+00:00 |
48f64996c295b22e76cec4454362babfad31f581 |
# Dataset Card for Test-Stanford
## Dataset Description
- **Paper:** [Towards Deep Semantic Analysis Of Hashtags](https://arxiv.org/abs/1501.03210)
### Dataset Summary
Manually Annotated Stanford Sentiment Analysis Dataset by Bansal et al.
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 1467856821,
"hashtag": "therapyfail",
"segmentation": "therapy fail",
"gold_position": 8,
"rank": {
"position": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20
],
"candidate": [
"therap y fail",
"the rap y fail",
"t her apy fail",
"the rap yfail",
"t he rap y fail",
"thera py fail",
"ther apy fail",
"th era py fail",
"therapy fail",
"therapy fai l",
"the r apy fail",
"the rapyfa il",
"the rapy fail",
"t herapy fail",
"the rapyfail",
"therapy f ai l",
"therapy fa il",
"the rapyf a il",
"therapy f ail",
"the ra py fail"
]
}
}
```
### Data Fields
- `index`: a numerical index annotated by Kodali et al.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
- `gold_position`: position of the gold segmentation within the candidate list under `rank`.
- `rank`: rank of each candidate produced by a baseline word segmenter (Segmentations Seeder Module).
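In the instance above, a `gold_position` of 8 matches the zero-based index of "therapy fail" in the candidate list. Treating that zero-based reading as an assumption, a mean-reciprocal-rank sketch for scoring the seeded candidate lists looks like this:
```python
def mean_reciprocal_rank(examples) -> float:
    """MRR of the gold segmentation within the seeded candidate lists.

    Assumes `gold_position` is a zero-based index into the candidates
    (consistent with the instance above); examples whose gold segmentation
    is absent (gold_position is None) contribute zero.
    """
    if not examples:
        return 0.0
    total = sum(
        1.0 / (example["gold_position"] + 1)
        for example in examples
        if example["gold_position"] is not None
    )
    return total / len(examples)
```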
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@misc{bansal2015deep,
title={Towards Deep Semantic Analysis Of Hashtags},
author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
year={2015},
eprint={1501.03210},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/test_stanford | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"arxiv:1501.03210",
"region:us"
] | 2022-03-05T08:26:17+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "Test-Stanford", "tags": ["word-segmentation"]} | 2022-10-20T18:13:07+00:00 |
2d33f11d465c83eb043544177daceb8f4d508343 |
# Battery Abstracts Dataset
This dataset includes 29,472 battery papers and 17,191 non-battery papers, a total of 46,663 papers. These papers are manually labelled in terms of the journals to which they belong. 14 battery journals and 1,044 non-battery journals were selected to form this database.
- training_data.csv: Battery papers: 20,629, Non-battery papers: 12,034. Total: 32,663.
- val_data.csv: Battery papers: 5,895, Non-battery papers: 3,438. Total: 9,333.
- test_data.csv: Battery papers: 2,948, Non-battery papers: 1,719. Total: 4,667.
# Usage
```
from datasets import load_dataset
dataset = load_dataset("batterydata/paper-abstracts")
```
# Citation
```
@article{huang2022batterybert,
title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement},
author={Huang, Shu and Cole, Jacqueline M},
journal={J. Chem. Inf. Model.},
year={2022},
doi={10.1021/acs.jcim.2c00035},
url={DOI:10.1021/acs.jcim.2c00035},
pages={DOI: 10.1021/acs.jcim.2c00035},
publisher={ACS Publications}
}
``` | batterydata/paper-abstracts | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-03-05T13:55:17+00:00 | {"language": ["en"], "license": ["apache-2.0"], "task_categories": ["text-classification"], "pretty_name": "Battery Abstracts Dataset"} | 2022-09-05T14:54:02+00:00 |
586ba42e6c8a76b305b4e27fc20ce99226a2c1d4 | A new Swahili tweet dataset for sentiment analysis.
## Issues ⚠️
In case you have any difficulties or issues while trying to run the script, you can raise them in the issues section.
## Pull Requests 🔧
If you have something to add or a new idea to implement, you are welcome to create a pull request with the improvement.
## Give it a Like 👍
If you find this dataset useful, give it a like so that more people can get to know it.
## Credits
All the credits to [Davis David ](https://twitter.com/Davis_McDavid), [Zephania Reuben](https://twitter.com/nsomazr) & [Eliya Masesa](https://twitter.com/eliya_masesa) | Davis/Swahili-tweet-sentiment | [
"license:mit",
"region:us"
] | 2022-03-05T16:03:06+00:00 | {"license": "mit"} | 2022-03-05T17:58:17+00:00 |
4fb954beab9774a12cac3a13ee08616d5e10df6d |
# Dataset Card for NRU-HSE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [glushkovato/hashtag_segmentation](https://github.com/glushkovato/hashtag_segmentation/)
- **Paper:** [Char-RNN and Active Learning for Hashtag Segmentation](https://arxiv.org/abs/1911.03270)
### Dataset Summary
Real hashtags collected from several pages about civil services on vk.com (a Russian social network) and then segmented manually.
### Languages
Russian
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "ЁлкаВЗазеркалье",
"segmentation": "Ёлка В Зазеркалье"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@article{glushkova2019char,
title={Char-RNN and Active Learning for Hashtag Segmentation},
author={Glushkova, Taisiya and Artemova, Ekaterina},
journal={arXiv preprint arXiv:1911.03270},
year={2019}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/nru_hse | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:ru",
"license:unknown",
"word-segmentation",
"arxiv:1911.03270",
"region:us"
] | 2022-03-05T17:40:41+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["ru"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "NRU-HSE", "tags": ["word-segmentation"]} | 2022-10-20T18:12:59+00:00 |
e51544fd07e72dfa6bf830b56e417adba8dc50ba |
# Dataset Card for The Loyola University of Delaware Identifier Splitting Oracle
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Loyola University of Delaware Identifier Splitting Oracle](http://www.cs.loyola.edu/~binkley/ludiso/)
- **Paper:** [An empirical study of identifier splitting techniques](https://dl.acm.org/doi/10.1007/s10664-013-9261-0)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
The Loyola University of Delaware Identifier Splitting Oracle is a dataset for identifier segmentation,
i.e. the task of adding spaces between the words of an identifier.
### Languages
- Java
- C
- C++
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "::CreateProcess",
"segmentation": ":: Create Process",
"language": "cpp",
"source": "mozilla-source-1.1"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
- `language`: the programming language of the source.
- `source`: the source of the identifier.
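Since each row carries its programming language and source corpus, per-language subsets are easy to build. A minimal sketch (the Hub identifier `ruanchaves/loyola` and the `train` split name are assumptions based on this card):
```python
from datasets import load_dataset

# Sketch: keep only C++ identifiers, using the "cpp" tag shown above.
dataset = load_dataset("ruanchaves/loyola")
cpp_only = dataset["train"].filter(lambda row: row["language"] == "cpp")
print(len(cpp_only))
```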
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
### Citation Information
```
@article{hill2014empirical,
title={An empirical study of identifier splitting techniques},
author={Hill, Emily and Binkley, David and Lawrie, Dawn and Pollock, Lori and Vijay-Shanker, K},
journal={Empirical Software Engineering},
volume={19},
number={6},
pages={1754--1780},
year={2014},
publisher={Springer}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/loyola | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T19:23:21+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["code"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "The Loyola University of Delaware Identifier Splitting Oracle", "tags": ["word-segmentation"]} | 2022-10-20T18:13:04+00:00 |
f47b2a116e3e6ad75fc4dbf17a4c8527d0fb0126 | This dataset is presented for the task of Answering Questions on the Holy Qur'an.
https://sites.google.com/view/quran-qa-2022
QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are coupled with their extracted answers to constitute 1,337 question-passage-answer triplets. It is split into training (65%), development (10%), and test (25%) sets.
QRCD is distributed as a JSON Lines (JSONL) file; each line is a JSON object that comprises a question-passage pair, along with its answers extracted from the accompanying passage. | AhmedSSoliman/QRCD | [
"region:us"
] | 2022-03-05T20:46:25+00:00 | {} | 2022-03-06T18:58:06+00:00 |
f60c3e93c0985c90741d15948afc694f9460b3d9 |
# Dataset Card for synQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [synQA homepage](https://github.com/maxbartolo/improving-qa-model-robustness)
- **Paper:** [Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation](https://aclanthology.org/2021.emnlp-main.696/)
- **Point of Contact:** [Max Bartolo](max.bartolo@ucl.ac.uk)
### Dataset Summary
SynQA is a Reading Comprehension dataset created in the work "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation" (https://aclanthology.org/2021.emnlp-main.696/).
It consists of 314,811 synthetically generated questions on the passages in the SQuAD v1.1 (https://arxiv.org/abs/1606.05250) training set.
In this work, we use synthetic adversarial data generation to make QA models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA (https://adversarialqa.github.io/) dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.
For full details on how the dataset was created, kindly refer to the paper.
### Supported Tasks
`extractive-qa`: The dataset can be used to train a model for Extractive Question Answering, which consists in selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap [F1 score](https://huggingface.co/metrics/f1). The task is available as round 1 of the QA task on [Dynabench](https://dynabench.org/tasks/2#overall), which ranks models based on F1 score.
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
Data is provided in the same format as SQuAD 1.1. An example is shown below:
```
{
"data": [
{
"title": "None",
"paragraphs": [
{
"context": "Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.",
"qas": [
{
"id": "689f275aacba6c43ff112b2c7cb16129bfa934fa",
"question": "What material is the statue of Christ made of?",
"answers": [
{
"answer_start": 190,
"text": "organic copper"
}
]
},
{
"id": "73bd3f52f5934e02332787898f6e568d04bc5403",
"question": "Who is on the Main Building's gold dome?",
"answers": [
{
"answer_start": 111,
"text": "the Virgin Mary."
}
]
},
{
"id": "4d459d5b75fd8a6623446290c542f99f1538cf84",
"question": "What kind of statue is at the end of the main drive?",
"answers": [
{
"answer_start": 667,
"text": "modern stone"
}
]
},
{
"id": "987a1e469c5b360f142b0a171e15cef17cd68ea6",
"question": "What type of dome is on the Main Building at Notre Dame?",
"answers": [
{
"answer_start": 79,
"text": "gold"
}
]
}
]
}
]
}
]
}
```
### Data Fields
- title: all "None" in this dataset
- context: the context/passage
- id: a string identifier for each question
- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an `answer_start` field which is the character index of the start of the answer span, and a `text` field which is the answer text.
### Data Splits
The dataset is composed of a single split of 314,811 examples that we used in a two-stage fine-tuning process (refer to the paper for further details).
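Since the data follows the SQuAD v1.1 nesting shown above, it can be flattened into (context, question, answer) triples with a few comprehensions. A sketch, assuming a local JSON file in that format ("synqa.json" is a placeholder name, not a file shipped with this card):
```python
import json

# Sketch: flatten SQuAD-style nesting into (context, question, answer) triples.
with open("synqa.json", encoding="utf-8") as f:
    data = json.load(f)["data"]

triples = [
    (paragraph["context"], qa["question"], qa["answers"][0]["text"])
    for article in data
    for paragraph in article["paragraphs"]
    for qa in paragraph["qas"]
]
print(len(triples))  # 314,811 for the full split
```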
## Dataset Creation
### Curation Rationale
This dataset was created to investigate the effects of using synthetic adversarial data generation to improve robustness of state-of-the-art QA models.
### Source Data
#### Initial Data Collection and Normalization
The source passages are from Wikipedia and are the same as those used in [SQuAD v1.1](https://arxiv.org/abs/1606.05250).
#### Who are the source language producers?
The source language producers are Wikipedia editors for the passages, and a BART-Large generative model for the questions.
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a support resource to improve the ability of systems to handle questions that contemporary state-of-the-art models struggle to answer correctly, questions which often require more complex comprehension abilities than, say, detecting phrases explicitly mentioned in the passage with high overlap to the question.
It should be noted, however, that the source passages are both domain-restricted and linguistically specific, and that the provided questions and answers do not constitute any particular social application.
### Discussion of Biases
The dataset may exhibit various biases in terms of the source passage selection, selected candidate answers, generated questions, quality re-labelling process, as well as any algorithmic biases that may be exacerbated from the adversarial annotation process used to collect the SQuAD and AdversarialQA data on which the generators were trained.
### Other Known Limitations
N/a
## Additional Information
### Dataset Curators
This dataset was initially created by Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela during work carried out at University College London (UCL) and Facebook AI Research (FAIR).
### Licensing Information
This dataset is distributed under the [MIT License](https://opensource.org/licenses/MIT).
### Citation Information
```
@inproceedings{bartolo-etal-2021-improving,
title = "Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation",
author = "Bartolo, Max and
Thrush, Tristan and
Jia, Robin and
Riedel, Sebastian and
Stenetorp, Pontus and
Kiela, Douwe",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.696",
doi = "10.18653/v1/2021.emnlp-main.696",
pages = "8830--8848",
abstract = "Despite recent progress, state-of-the-art question answering models remain vulnerable to a variety of adversarial attacks. While dynamic adversarial data collection, in which a human annotator tries to write examples that fool a model-in-the-loop, can improve model robustness, this process is expensive which limits the scale of the collected data. In this work, we are the first to use synthetic adversarial data generation to make question answering models more robust to human adversaries. We develop a data generation pipeline that selects source passages, identifies candidate answers, generates questions, then finally filters or re-labels them to improve quality. Using this approach, we amplify a smaller human-written adversarial dataset to a much larger set of synthetic question-answer pairs. By incorporating our synthetic data, we improve the state-of-the-art on the AdversarialQA dataset by 3.7F1 and improve model generalisation on nine of the twelve MRQA datasets. We further conduct a novel human-in-the-loop evaluation and show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8{\%} of the time on average, compared to 17.6{\%} for a model trained without synthetic data.",
}
```
### Contributions
Thanks to [@maxbartolo](https://github.com/maxbartolo) for adding this dataset.
| mbartolo/synQA | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:1606.05250",
"region:us"
] | 2022-03-05T21:24:45+00:00 | {"annotations_creators": ["generated"], "language_creators": ["found"], "language": ["en"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa", "open-domain-qa"], "pretty_name": "synQA"} | 2022-10-25T09:02:24+00:00 |
3a424cd1ff2d75a58e267c7f897e1f7d6ae121d4 | Paulosdeanllons/sedar | [
"license:afl-3.0",
"region:us"
] | 2022-03-05T22:38:44+00:00 | {"license": "afl-3.0"} | 2022-03-05T22:38:44+00:00 |
|
1877395c47bcf77735761c694234dd55d3598bc5 |
# Dataset Card for BT11
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
BT11 is a dataset for identifier segmentation, i.e. the task of adding spaces between the words of an identifier.
### Languages
- Java
## Dataset Structure
### Data Instances
```
{
"index": 20170,
"identifier": "currentLineHighlight",
"segmentation": "current Line Highlight"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
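As the instance above suggests, many BT11 identifiers are plain camel case, so a regex split makes a reasonable naive baseline. A sketch only; it handles the instance above but not acronym runs, digit boundaries, or underscores:
```python
import re

def camel_case_split(identifier: str) -> str:
    # Insert a space before each uppercase letter that follows
    # a lowercase letter or digit.
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", identifier)

assert camel_case_split("currentLineHighlight") == "current Line Highlight"
```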
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{butler2011improving,
title={Improving the tokenisation of identifier names},
author={Butler, Simon and Wermelinger, Michel and Yu, Yijun and Sharp, Helen},
booktitle={European Conference on Object-Oriented Programming},
pages={130--154},
year={2011},
organization={Springer}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/bt11 | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T22:41:34+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["code"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "BT11", "tags": ["word-segmentation"]} | 2022-10-20T18:13:02+00:00 |
5ccd62cfd185abd77dffc846d2cd3499e0c286c9 |
# Dataset Card for Binkley
## Dataset Description
- **Paper:** [Normalizing Source Code Vocabulary](https://www.researchgate.net/publication/224198190_Normalizing_Source_Code_Vocabulary)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Binkley is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- C
- C++
- Java
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "init_g16_i",
"segmentation": "init _ g 16 _ i"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{inproceedings,
author = {Lawrie, Dawn and Binkley, David and Morrell, Christopher},
year = {2010},
month = {11},
pages = {3 - 12},
title = {Normalizing Source Code Vocabulary},
journal = {Proceedings - Working Conference on Reverse Engineering, WCRE},
doi = {10.1109/WCRE.2010.10}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/binkley | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T22:56:51+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["code"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "Binkley", "tags": ["word-segmentation"]} | 2022-10-20T18:12:56+00:00 |
df859ecce54578af17e873cf79438b082632de1d |
# Dataset Card for Jhotdraw
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Jhotdraw is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
### Languages
- Java
## Dataset Structure
### Data Instances
```
{
"index": 0,
"identifier": "abstractconnectorserializeddataversion",
"segmentation": "abstract connector serialized data version"
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
## Additional Information
### Citation Information
```
@inproceedings{madani2010recognizing,
title={Recognizing words from source code identifiers using speech recognition techniques},
author={Madani, Nioosha and Guerrouj, Latifa and Di Penta, Massimiliano and Gueheneuc, Yann-Gael and Antoniol, Giuliano},
booktitle={2010 14th European Conference on Software Maintenance and Reengineering},
pages={68--77},
year={2010},
organization={IEEE}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/jhotdraw | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T23:13:37+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["code"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "Jhotdraw", "tags": ["word-segmentation"]} | 2022-10-20T18:12:53+00:00 |
9046da8c9a595ead11d7d243780db677f2ce9618 |
# Dataset Card for Lynx
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Helpful or Not? An investigation on the feasibility of identifier splitting via CNN-BiLSTM-CRF](https://ksiresearch.org/seke/seke18paper/seke18paper_167.pdf)
### Dataset Summary
In programming languages, identifiers are tokens (also called symbols) which name language entities.
Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
Lynx is a dataset for identifier segmentation, i.e. the task of adding spaces between the words in an identifier.
Besides identifier segmentation, the gold labels for this dataset also include abbreviation expansion.
### Languages
- C
## Dataset Structure
### Data Instances
```
{
"index": 3,
"identifier": "abspath",
"segmentation": "abs path",
"expansion": "absolute path",
"spans": {
"text": [
"abs"
],
"expansion": [
"absolute"
],
"start": [
0
],
"end": [
4
]
}
}
```
### Data Fields
- `index`: a numerical index.
- `identifier`: the original identifier.
- `segmentation`: the gold segmentation for the identifier, without abbreviation expansion.
- `expansion`: the gold segmentation for the identifier, with abbreviation expansion.
- `spans`: the start and end index of each abbreviation, the text of the abbreviation and its corresponding expansion.
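A minimal sketch of reading the `spans` field, using the data instance shown above (the four lists are parallel, one entry per abbreviation; nothing beyond the fields documented here is assumed):

```python
sample = {
    "identifier": "abspath",
    "spans": {
        "text": ["abs"],
        "expansion": ["absolute"],
        "start": [0],
        "end": [4],
    },
}

# Each abbreviation is described by parallel entries in the four lists.
for abbrev, full, start, end in zip(
    sample["spans"]["text"],
    sample["spans"]["expansion"],
    sample["spans"]["start"],
    sample["spans"]["end"],
):
    print(f"{abbrev!r} (span {start}-{end}) expands to {full!r}")
```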
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
### Citation Information
```
@inproceedings{madani2010recognizing,
title={Recognizing words from source code identifiers using speech recognition techniques},
author={Madani, Nioosha and Guerrouj, Latifa and Di Penta, Massimiliano and Gueheneuc, Yann-Gael and Antoniol, Giuliano},
booktitle={2010 14th European Conference on Software Maintenance and Reengineering},
pages={68--77},
year={2010},
organization={IEEE}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. | ruanchaves/lynx | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-05T23:19:48+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["code"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction", "code-generation", "conditional-text-generation"], "task_ids": [], "pretty_name": "Lynx", "tags": ["word-segmentation"]} | 2022-10-20T18:12:51+00:00 |
dec0e19ff4bab5b5b1a972909b2ea38118644d0f |
# Dataset Card for SNAP
## Dataset Description
- **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- **Paper:** [Segmenting hashtags using automatically created training data](http://www.lrec-conf.org/proceedings/lrec2016/pdf/708_Paper.pdf)
### Dataset Summary
This dataset contains 803K hashtags from the SNAP Twitter Data Set, automatically segmented with the heuristic described in the paper "Segmenting hashtags using automatically created training data".
### Languages
English
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "BrandThunder",
"segmentation": "Brand Thunder"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` is the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
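The whitespace-only relationship between `hashtag` and `segmentation` described above can be checked directly in code (a minimal sketch using the example instance from this card):

```python
hashtag = "BrandThunder"
segmentation = "Brand Thunder"

# Removing the whitespace from the gold segmentation recovers the original hashtag.
assert "".join(segmentation.split()) == hashtag
```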
## Additional Information
### Citation Information
```
@inproceedings{celebi2016segmenting,
title={Segmenting hashtags using automatically created training data},
author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
booktitle={Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)},
pages={2981--2985},
year={2016}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.
| ruanchaves/snap | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:unknown",
"word-segmentation",
"region:us"
] | 2022-03-06T00:17:23+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["structure-prediction"], "task_ids": [], "pretty_name": "SNAP", "tags": ["word-segmentation"]} | 2022-10-20T18:12:47+00:00 |
0a295fc67ae9892cf83d9f585fbd5f29330bf502 | A collection of 38,176 emoji images from Facebook, Google, Apple, WhatsApp, Samsung, [JoyPixels](https://www.joypixels.com/), Twitter, [emojidex](https://www.emojidex.com/), LG, [OpenMoji](https://openmoji.org/), and Microsoft. It includes all the emojis for these apps/platforms as of early 2022.
* Counts: Facebook=3664, Google=3664, Apple=3961, WhatsApp=3519, Samsung=3752, JoyPixels=3538, Twitter=3544, emojidex=2040, LG=3051, OpenMoji=3512, Microsoft=3931.
* Sizes: Facebook=144x144, Google=144x144, Apple=144x144, WhatsApp=144x144, Samsung=108x108, JoyPixels=144x144, Twitter=144x144, emojidex=144x144, LG=136x128, OpenMoji=144x144, Microsoft=144x144.
* The tar files directly contain the image files (they're not inside a parent folder).
* The emoji code points are at the end of the filename, but there are some adjustments needed to parse them into the Unicode character consistently across all sets of emojis in this dataset. Here's some JavaScript code to convert the file name of an emoji image into the actual Unicode emoji character:
```js
let filename = ...; // any image file name from the tar files
// Strip the first skin-tone qualifier and collapse the doubled separators it leaves behind.
let fixedFilename = filename.replace(/(no|light|medium|medium-light|medium-dark|dark)-skin-tone/, "").replace(/__/, "_").replace(/--/, "-");
// The code points are the hyphen-separated hex values after the first underscore (before the file extension).
let emoji = String.fromCodePoint(...fixedFilename.split("_")[1].split(".")[0].split("-").map(hex => parseInt(hex, 16)));
```
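For Python workflows, the same parsing can be sketched as follows (my own port of the JavaScript above, not part of the dataset itself; `filename` stands in for any image file name from the tars):

```python
import re

def filename_to_emoji(filename: str) -> str:
    # Mirror the JavaScript: drop the first skin-tone qualifier,
    # then collapse the doubled separators it leaves behind.
    fixed = re.sub(r"(no|light|medium|medium-light|medium-dark|dark)-skin-tone", "", filename, count=1)
    fixed = fixed.replace("__", "_", 1).replace("--", "-", 1)
    # The code points are hyphen-separated hex values after the first underscore.
    hex_points = fixed.split("_")[1].split(".")[0].split("-")
    return "".join(chr(int(h, 16)) for h in hex_points)
```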
## Facebook examples:

## Google examples:

## Apple examples:

## WhatsApp examples:

## Samsung examples:

## JoyPixels examples:

## Twitter examples:

## emojidex examples:

## LG examples:

## OpenMoji examples:

## Microsoft examples:
 | rocca/emojis | [
"region:us"
] | 2022-03-06T02:31:30+00:00 | {} | 2022-04-29T08:37:55+00:00 |
b6ac7236577e02ea792277816649217bd6068381 | Carlisle/msmarco-passage-non-abs | [
"license:mit",
"region:us"
] | 2022-03-06T18:38:34+00:00 | {"license": "mit"} | 2022-03-06T18:40:15+00:00 |
|
207e3206c2b03cfd98e167d1f2588c7412e37f6b | Carlisle/msmarco-passage-abs | [
"license:mit",
"region:us"
] | 2022-03-06T19:30:55+00:00 | {"license": "mit"} | 2022-03-06T20:04:45+00:00 |
|
72047fee5890ca82c752902aedb138cc72c6fb96 |
# Dataset Card for COVID-19 French News dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The COVID-19 French News dataset is a French-language dataset containing just over 40k unique news articles from more than 50 different French-speaking online newspapers. The dataset has been prepared using [news-please](https://github.com/fhamborg/news-please) - an integrated web crawler and information extractor for news. The current version supports abstractive summarization and topic classification. This dataset card is not yet complete.
### Languages
The text in the dataset is in French.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `title`: title of the article
- `description`: description or a summary of the article
- `text`: the actual article text in raw form
- `domain`: source domain of the article (i.e. lemonde.fr)
- `url`: article URL, the original URL where it was scraped
- `labels`: classification labels
### Data Splits
The COVID-19 French News dataset has only a training split, so it must be loaded with the train split specified, as shown below.
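```python
from datasets import load_dataset

# The dataset ships a single training split, so it is requested explicitly.
fr_covid_news = load_dataset("gustavecortal/fr_covid_news", split="train")
```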
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
### Annotations
#### Annotation process
[More Information Needed]
### Personal and Sensitive Information
As one can imagine, data contains contemporary public figures or individuals who appeared in the news.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help researchers develop better French topic classification and abstractive summarization models for news related to COVID-19.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The data was originally collected by Gustave Cortal (gustavecortal@gmail.com)
### Licensing Information
Usage of the dataset is restricted to non-commercial research purposes only.
### Citation Information
```
@dataset{fr_covid_news,
author = {Gustave Cortal},
year = {2022},
title = {COVID-19 - French News Dataset},
url = {https://www.gustavecortal.com}
}
```
### Contributions
[@gustavecortal](https://github.com/gustavecortal) | gustavecortal/fr_covid_news | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:multi-label-classification",
"task_ids:multi-class-classification",
"task_ids:language-modeling",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:fr",
"license:unknown",
"region:us"
] | 2022-03-06T21:28:35+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["fr"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification", "sequence-modeling", "conditional-text-generation"], "task_ids": ["topic-classification", "multi-label-classification", "multi-class-classification", "language-modeling", "summarization", "other-stuctured-to-text"], "pretty_name": "COVID-19 French News dataset", "language_bcp47": ["fr-FR"]} | 2022-10-20T18:01:24+00:00 |
9e85227220f34246003bdde92d4c2151106d21f6 | m-newhauser/senator-tweets | [
"region:us"
] | 2022-03-07T16:37:35+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "date", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "username", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "party", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 145722682, "num_examples": 79754}, {"name": "test", "num_bytes": 36427736, "num_examples": 19939}], "download_size": 232535302, "dataset_size": 182150418}} | 2024-01-26T12:44:20+00:00 |
|
e5322fec79e6702f69d79829efdc7853f1853802 | ---
annotations_creators:
- crowdsourced
languages:
- en
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
--- | FinScience/FS-distilroberta-fine-tuned | [
"language:en",
"region:us"
] | 2022-03-07T17:24:39+00:00 | {"language": ["en"]} | 2022-10-25T09:02:42+00:00 |
d2ae9ace717cb0ac375fb3b2c14d2bb5205da8a8 | Carlisle/msmacro-test | [
"license:mit",
"region:us"
] | 2022-03-07T18:09:33+00:00 | {"license": "mit"} | 2022-03-11T00:19:32+00:00 |
|
8b0ee369302c23871e42335fe72e76622f486fdf | Carlisle/msmacro-passage-non-abs-small | [
"license:mit",
"region:us"
] | 2022-03-07T18:19:10+00:00 | {"license": "mit"} | 2022-03-07T18:19:10+00:00 |
|
18ce5e787650a1f682fec9588df0cc463a984f0e | Carlisle/msmacro-test-corpus | [
"license:mit",
"region:us"
] | 2022-03-07T18:32:48+00:00 | {"license": "mit"} | 2022-03-11T00:13:14+00:00 |
|
87615eac7add0a10355c50b25b5cff17e782cad3 |
# Dataset Card for "MIMICause"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additinal-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/)
- **Paper:** [MIMICause: Representation and automatic extraction of causal relation types from clinical notes](https://arxiv.org/abs/2110.07090)
- **Size of downloaded dataset files:** 333.4 KB
- **Size of the generated dataset:** 491.2 KB
- **Total amount of disk used:** 668.2 KB
### Dataset Summary
MIMICause Dataset is a dataset for representation and automatic extraction of causal relation types from clinical notes. The MIMICause dataset requires manual download of the mimicause.zip file from the **Community Annotations Downloads** section of the n2c2 dataset on the [Harvard's DBMI Data Portal](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/) after signing their agreement forms, which is a quick and easy procedure.
The dataset has 2714 samples covering both explicit and implicit causality, with entities appearing in the same sentence or in different sentences. The nine semantic causal relations (with directionality) between entities E1 and E2 in a text snippet are -- (1) Cause(E1,E2) (2) Cause(E2,E1) (3) Enable(E1,E2) (4) Enable(E2,E1) (5) Prevent(E1,E2) (6) Prevent(E2,E1) (7) Hinder(E1,E2) (8) Hinder(E2,E1) (9) Other.
### Supported Tasks
Causal relation extraction between entities expressed implicitly or explicitly, in single or across multiple sentences.
## Dataset Structure
### Data Instances
An example of a data sample looks as follows:
```
{
"E1": "Florinef",
"E2": "fluid retention",
"Text": "Treated with <e1>Florinef</e1> in the past, was d/c'd due to <e2>fluid retention</e2>.",
"Label": 0
}
```
### Data Fields
The data fields are the same among all the splits.
- `E1`: a `string` value.
- `E2`: a `string` value.
- `Text`: a `large_string` value.
- `Label`: a `ClassLabel` categorical value.
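The entity markers in `Text` can be pulled out with a couple of regular expressions (a minimal sketch using the sample above; the `<e1>`/`<e2>` tags come from the documented instance format, the rest is illustrative):

```python
import re

text = "Treated with <e1>Florinef</e1> in the past, was d/c'd due to <e2>fluid retention</e2>."

# Extract the two tagged entities from the snippet.
e1 = re.search(r"<e1>(.*?)</e1>", text).group(1)  # 'Florinef'
e2 = re.search(r"<e2>(.*?)</e2>", text).group(1)  # 'fluid retention'
```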
### Data Splits
The original dataset downloaded from [Harvard's DBMI Data Portal](https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/) has all the data in a single split. The dataset loading script provided here through Hugging Face datasets splits the data into the following train, validation and test splits for convenience.
| name |train|validation|test|
|---------|----:|---------:|---:|
|mimicause| 1953| 489 | 272|
## Additional Information
### Citation Information
```
@inproceedings{khetan-etal-2022-mimicause,
title={MIMICause: Representation and automatic extraction of causal relation types from clinical notes},
author={Vivek Khetan and Md Imbesat Hassan Rizvi and Jessica Huber and Paige Bartusiak and Bogdan Sacaleanu and Andrew Fano},
booktitle ={Findings of the Association for Computational Linguistics: ACL 2022},
month={may},
year={2022},
publisher={Association for Computational Linguistics},
address={Dublin, The Republic of Ireland},
url={},
doi={},
pages={},
}
``` | pensieves/mimicause | [
"license:apache-2.0",
"arxiv:2110.07090",
"region:us"
] | 2022-03-07T20:33:38+00:00 | {"license": "apache-2.0", "pretty_name": "MIMICause"} | 2022-03-29T13:54:48+00:00 |
86d2ca7da33fbef822c6a0786c12eaa8cb3772fa |
# Qasper in SQuAD format
This dataset is a reformatting of the [qasper](https://huggingface.co/datasets/qasper) dataset into SQuAD format. | z-uo/qasper-squad | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2022-03-08T09:20:15+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "qasper-squad", "language_bcp47": ["en-US"]} | 2022-10-25T09:02:49+00:00 |
b333b72d400f6b4a23fd33524065cb732b372c8a | shpotes/bosch-small-traffic-lights-dataset | [
"license:other",
"region:us"
] | 2022-03-08T14:48:14+00:00 | {"license": "other"} | 2022-03-10T20:00:45+00:00 |
|
abab96a91ef584e7da293226844f0eaafb9498b7 | Carlosholivan/base | [
"license:apache-2.0",
"region:us"
] | 2022-03-08T18:14:11+00:00 | {"license": "apache-2.0"} | 2022-03-08T18:14:11+00:00 |
|
4a906f0b97bc7341bfc5d4453ae23a78edefc0b3 |
# Dataset Card for the-antiwork-subreddit-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-antiwork-subreddit-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theantiworksubredditdataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theantiworksubredditdataset)
### Dataset Summary
This corpus contains the complete data for the activity of the /r/Antiwork subreddit until 2022-02-18.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
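As a small illustration of the `created_utc` field, a UTC timestamp can be converted into a readable datetime like this (the value below is hypothetical, not taken from the corpus):

```python
from datetime import datetime, timezone

created_utc = 1645142400  # hypothetical example value
print(datetime.fromtimestamp(created_utc, tz=timezone.utc))  # 2022-02-18 00:00:00+00:00
```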
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] | SocialGrep/the-antiwork-subreddit-dataset | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-03-08T21:09:51+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]} | 2022-07-01T16:57:34+00:00 |
90b930b5609f5f668c765a5d23f9610d5d0dbcf1 | Dataset for Loyal Health Inc Software Engineer Machine Learning Interview | christianloyal/loyal_clinc_MLE | [
"license:mit",
"region:us"
] | 2022-03-09T00:42:08+00:00 | {"license": "mit"} | 2022-03-10T17:50:54+00:00 |
1b9776677fd2d5b21056e200089942709d0c3206 | This is my first dataset | hadehuang/testdataset | [
"region:us"
] | 2022-03-09T08:20:00+00:00 | {} | 2022-03-09T08:24:49+00:00 |
09dbf84b296f8ecf26bed37536f39a14a2048657 | # Dataset Card for Emoevent
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [EmoEvent dataset repository](https://github.com/fmplaza/EmoEvent)
- **Paper: EmoEvent:** [A Multilingual Emotion Corpus based on different Events](https://aclanthology.org/2020.lrec-1.186.pdf)
- **Leaderboard:** [Leaderboard for EmoEvent / Spanish version](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6385)
- **Point of Contact: fmplaza@ujaen.es**
### Dataset Summary
EmoEvent is a multilingual emotion dataset of tweets based on different events that took place in April 2019.
Three annotators labeled the tweets following Ekman's six basic emotions model (anger, fear, sadness, joy, disgust, surprise) plus the "neutral or other emotions" category. Moreover, the tweets are annotated as offensive (OFF) or non-offensive (NO).
### Supported Tasks and Leaderboards
This dataset is intended for multi-class emotion classification and binary offensive classification.
Competition [EmoEvalEs task on emotion detection for Spanish at IberLEF 2021](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6385)
### Languages
- Spanish
- English
## Dataset Structure
### Data Instances
For each instance, there is a string for the id of the tweet, a string for the text of the tweet, a string for the emotion class, a string for the offensive class, and a string for the event, as in the examples below.
```
{'id': 'a0c1a858-a9b8-4cb1-8a81-1602736ff5b8',
'event': 'GameOfThrones',
'tweet': 'ARYA DE MI VIDA. ERES MAS ÉPICA QUE EL GOL DE INIESTA JODER #JuegodeTronos #VivePoniente',
'offensive': 'NO',
'emotion': 'joy',
}
```
```
{'id': '3YCT0L9OMMFP7KWKQSTJRJO0YHUSN2a0c1a858-a9b8-4cb1-8a81-1602736ff5b8',
'event': 'GameOfThrones',
'tweet': 'The #NotreDameCathedralFire is indeed sad and people call all offered donations humane acts, but please if you have money to donate, donate to humans and help bring food to their tables and affordable education first. What more humane than that? #HumanityFirst',
'offensive': 'NO',
'emotion': 'sadness',
}
```
### Data Fields
- `id`: a string to identify the tweet
- `event`: a string containing the event associated with the tweet
- `tweet`: a string containing the text of the tweet
- `offensive`: a string containing the offensive gold label
- `emotion`: a string containing the emotion gold label
### Data Splits
The EmoEvent dataset has 2 subsets: EmoEvent_es (Spanish version) and EmoEvent_en (English version)
Each subset contains 3 splits: _train_, _validation_, and _test_. Below are the statistics for each subset.
| EmoEvent_es | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 5,723 |
| Validation | 844 |
| Test | 1,656 |
| EmoEvent_en | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 5,112 |
| Validation | 744 |
| Test | 1,447 |
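A sketch of loading one subset with the `datasets` library (this assumes the subset names above double as configuration names, which is not confirmed by this card):

```python
from datasets import load_dataset

# Hypothetical configuration name, mirroring the subset name "EmoEvent_es".
emoevent_es = load_dataset("fmplaza/EmoEvent", "EmoEvent_es")
```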
## Dataset Creation
### Source Data
Twitter
#### Who are the annotators?
Amazon Mechanical Turkers
## Additional Information
### Licensing Information
The EmoEvent dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@inproceedings{plaza-del-arco-etal-2020-emoevent,
title = "{{E}mo{E}vent: A Multilingual Emotion Corpus based on different Events}",
author = "{Plaza-del-Arco}, {Flor Miriam} and Strapparava, Carlo and {Ure{\~n}a-L{\’o}pez}, L. Alfonso and {Mart{\’i}n-Valdivia}, M. Teresa",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France", publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.186", pages = "1492--1498",
language = "English",
ISBN = "979-10-95546-34-4"
}
``` | fmplaza/EmoEvent | [
"language:en",
"language:es",
"license:apache-2.0",
"region:us"
] | 2022-03-09T10:17:46+00:00 | {"language": ["en", "es"], "license": "apache-2.0"} | 2024-02-06T14:28:03+00:00 |
59566ca6c10db39a863bef6d894e095e85e5c930 | khcy82dyc/zzzz | [
"license:apache-2.0",
"region:us"
] | 2022-03-09T10:43:45+00:00 | {"license": "apache-2.0"} | 2022-03-09T11:03:58+00:00 |
|
d74c67aec2ac5a2f561bcb30aa8e1fc7d7d88b92 |
# Dataset Card for "IndicParaphrase"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicParaphrase is the paraphrasing dataset released as part of IndicNLG Suite. Each
input is paired with up to 5 references. We create this dataset in eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The total
size of the dataset is 5.57M.
### Supported Tasks and Leaderboards
**Tasks:** Paraphrase generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One example from the `hi` dataset is given below in JSON format.
```
{
'id': '1',
'input': 'निजी क्षेत्र में प्रदेश की 75 प्रतिशत नौकरियां हरियाणा के युवाओं के लिए आरक्षित की जाएगी।',
'references': ['प्रदेश के युवाओं को निजी उद्योगों में 75 प्रतिशत आरक्षण देंगे।',
'युवाओं के लिए हरियाणा की सभी प्राइवेट नौकरियों में 75 प्रतिशत आरक्षण लागू किया जाएगा।',
'निजी क्षेत्र में 75 प्रतिशत आरक्षित लागू कर प्रदेश के युवाओं का रोजगार सुनिश्चत किया जाएगा।',
'प्राईवेट कम्पनियों में हरियाणा के नौजवानों को 75 प्रतिशत नौकरियां में आरक्षित की जाएगी।',
'प्रदेश की प्राइवेट फैक्टरियों में 75 फीसदी रोजगार हरियाणा के युवाओं के लिए आरक्षित किए जाएंगे।'],
'target': 'प्रदेश के युवाओं को निजी उद्योगों में 75 प्रतिशत आरक्षण देंगे।'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `pivot (string)`: English sentence used as the pivot
- `input (string)`: Input sentence
- `references (list of strings)`: Paraphrases of `input`, ordered according to the least n-gram overlap
- `target (string)`: The first reference (most dissimilar paraphrase)
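The relationship between `target` and `references` can be verified with the `hi` example above (a minimal sketch; the strings are copied from this card):

```python
references = [
    "प्रदेश के युवाओं को निजी उद्योगों में 75 प्रतिशत आरक्षण देंगे।",
    # ... the remaining four references from the example above
]
target = "प्रदेश के युवाओं को निजी उद्योगों में 75 प्रतिशत आरक्षण देंगे।"

# The target is the first reference, i.e. the most dissimilar paraphrase.
assert target == references[0]
```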
### Data Splits
We first select 10K instances each for the validation and test sets and put the remaining instances in the training set. `Assamese (as)`, due to its low-resource nature, could only be split into validation and test sets with 4,420 examples each.
Individual dataset with train-dev-test example counts are given below:
Language | ISO 639-1 Code |Train | Dev | Test |
--------------|----------------|-------|-----|------|
Assamese | as | - | 4,420 | 4,420 |
Bengali | bn | 890,445 | 10,000 | 10,000 |
Gujarati | gu | 379,202 | 10,000 | 10,000 |
Hindi | hi | 929,507 | 10,000 | 10,000 |
Kannada | kn | 522,148 | 10,000 | 10,000 |
Malayalam | ml |761,933 | 10,000 | 10,000 |
Marathi | mr |406,003 | 10,000 | 10,000 |
Oriya | or | 105,970 | 10,000 | 10,000 |
Punjabi | pa | 266,704 | 10,000 | 10,000 |
Tamil | ta | 497,798 | 10,000 | 10,000 |
Telugu | te | 596,283 | 10,000 | 10,000 |
## Dataset Creation
### Curation Rationale
[More information needed]
### Source Data
[Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
### Contributions
| ai4bharat/IndicParaphrase | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437",
"region:us"
] | 2022-03-09T11:28:53+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["conditional-text-generation"], "task_ids": ["conditional-text-generation-other-paraphrase-generation"], "pretty_name": "IndicParaphrase"} | 2022-10-13T05:08:55+00:00 |
03d5016d18872b209e80fd9eb913225c096defd0 | # Comparing model predictions and ground truth labels with Rubrix and Hugging Face
## Build dataset
You can skip this step if you run:
```python
from datasets import load_dataset
import rubrix as rb
ds = rb.DatasetForTextClassification.from_datasets(load_dataset("rubrix/sst2_with_predictions", split="train"))
```
Otherwise, the following cell will run the pipeline over the training set and store labels and predictions.
```python
from datasets import load_dataset
from transformers import pipeline, AutoModelForSequenceClassification
import rubrix as rb
name = "distilbert-base-uncased-finetuned-sst-2-english"
# Need to define id2label because surprisingly the pipeline has uppercase label names
model = AutoModelForSequenceClassification.from_pretrained(name, id2label={0: 'negative', 1: 'positive'})
nlp = pipeline("sentiment-analysis", model=model, tokenizer=name, return_all_scores=True)
dataset = load_dataset("glue", "sst2", split="train")
# batch predict
def predict(example):
return {"prediction": nlp(example["sentence"])}
# add predictions to the dataset
dataset = dataset.map(predict, batched=True).rename_column("sentence", "text")
# build rubrix dataset from hf dataset
ds = rb.DatasetForTextClassification.from_datasets(dataset, annotation="label")
```
```python
# Install Rubrix and start exploring and sharing URLs with interesting subsets, etc.
rb.log(ds, "sst2")
```
```python
ds.to_datasets().push_to_hub("rubrix/sst2_with_predictions")
```
## Analyze mispredictions and ambiguous labels
### With the UI
With Rubrix's UI you can:
- Combine filters and full-text/DSL queries to quickly find important samples
- All URLs contain the state, so you can share specific dataset regions with collaborators and annotators to work on.
- Sort examples by score, as well as custom metadata fields.

### Programmatically
Let's find all the wrong predictions from Python. This is useful for bulk operations (relabelling, discarding, etc.) as well as deeper programmatic analysis of the errors.
```python
import pandas as pd
# Get dataset slice with wrong predictions
df = rb.load("sst2", query="predicted:ko").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>this particular , anciently demanding métier</td>
<td>[(negative, 0.9386059045791626), (positive, 0.06139408051967621)]</td>
<td>positive</td>
</tr>
<tr>
<th>1</th>
<td>under our skin</td>
<td>[(positive, 0.7508484721183777), (negative, 0.24915160238742828)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>evokes a palpable sense of disconnection , made all the more poignant by the incessant use of cell phones .</td>
<td>[(negative, 0.6634528636932373), (positive, 0.3365470767021179)]</td>
<td>positive</td>
</tr>
<tr>
<th>3</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>4</th>
<td>into a pulpy concept that , in many other hands would be completely forgettable</td>
<td>[(positive, 0.6178210377693176), (negative, 0.3821789622306824)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>transcends ethnic lines .</td>
<td>[(positive, 0.9758220314979553), (negative, 0.024177948012948036)]</td>
<td>negative</td>
</tr>
<tr>
<th>6</th>
<td>is barely</td>
<td>[(negative, 0.9922297596931458), (positive, 0.00777028314769268)]</td>
<td>positive</td>
</tr>
<tr>
<th>7</th>
<td>a pulpy concept that , in many other hands would be completely forgettable</td>
<td>[(negative, 0.9738760590553284), (positive, 0.026123959571123123)]</td>
<td>positive</td>
</tr>
<tr>
<th>8</th>
<td>of hollywood heart-string plucking</td>
<td>[(positive, 0.9889695644378662), (negative, 0.011030420660972595)]</td>
<td>negative</td>
</tr>
<tr>
<th>9</th>
<td>a minimalist beauty and the beast</td>
<td>[(positive, 0.9100378751754761), (negative, 0.08996208757162094)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>the intimate , unguarded moments of folks who live in unusual homes --</td>
<td>[(positive, 0.9967381358146667), (negative, 0.0032618637196719646)]</td>
<td>negative</td>
</tr>
<tr>
<th>11</th>
<td>steals the show</td>
<td>[(negative, 0.8031412363052368), (positive, 0.1968587338924408)]</td>
<td>positive</td>
</tr>
<tr>
<th>12</th>
<td>enough</td>
<td>[(positive, 0.7941301465034485), (negative, 0.2058698982000351)]</td>
<td>negative</td>
</tr>
<tr>
<th>13</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>this is the kind of movie that you only need to watch for about thirty seconds before you say to yourself , ` ah , yes ,</td>
<td>[(negative, 0.7889454960823059), (positive, 0.21105451881885529)]</td>
<td>positive</td>
</tr>
<tr>
<th>15</th>
<td>plunges you into a reality that is , more often then not , difficult and sad ,</td>
<td>[(positive, 0.967541515827179), (negative, 0.03245845437049866)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>troubled and determined homicide cop</td>
<td>[(negative, 0.6632784008979797), (positive, 0.33672159910202026)]</td>
<td>positive</td>
</tr>
<tr>
<th>18</th>
<td>human nature is a goofball movie , in the way that malkovich was , but it tries too hard</td>
<td>[(positive, 0.5959018468856812), (negative, 0.40409812331199646)]</td>
<td>negative</td>
</tr>
<tr>
<th>19</th>
<td>to watch too many barney videos</td>
<td>[(negative, 0.9909896850585938), (positive, 0.00901023019105196)]</td>
<td>positive</td>
</tr>
</tbody>
</table>
</div>
```python
df.annotation.hist()
```

```python
# Get dataset slice with wrong predictions annotated as negative
df = rb.load("sst2", query="predicted:ko and annotated_as:negative").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>a minimalist beauty and the beast</td>
<td>[(positive, 0.9100378751754761), (negative, 0.08996208757162094)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>3</th>
<td>plunges you into a reality that is , more often then not , difficult and sad ,</td>
<td>[(positive, 0.967541515827179), (negative, 0.03245845437049866)]</td>
<td>negative</td>
</tr>
<tr>
<th>4</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>and social commentary</td>
<td>[(positive, 0.7863275408744812), (negative, 0.2136724889278412)]</td>
<td>negative</td>
</tr>
<tr>
<th>6</th>
<td>we do n't get williams ' usual tear and a smile , just sneers and bile , and the spectacle is nothing short of refreshing .</td>
<td>[(positive, 0.9982783794403076), (negative, 0.0017216014675796032)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>before pulling the plug on the conspirators and averting an american-russian armageddon</td>
<td>[(positive, 0.6992855072021484), (negative, 0.30071452260017395)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>in tight pants and big tits</td>
<td>[(positive, 0.7850217819213867), (negative, 0.2149781733751297)]</td>
<td>negative</td>
</tr>
<tr>
<th>9</th>
<td>that it certainly does n't feel like a film that strays past the two and a half mark</td>
<td>[(positive, 0.6591460108757019), (negative, 0.3408539891242981)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>actress-producer and writer</td>
<td>[(positive, 0.8167378306388855), (negative, 0.1832621842622757)]</td>
<td>negative</td>
</tr>
<tr>
<th>11</th>
<td>gives devastating testimony to both people 's capacity for evil and their heroic capacity for good .</td>
<td>[(positive, 0.8960123062133789), (negative, 0.10398765653371811)]</td>
<td>negative</td>
</tr>
<tr>
<th>12</th>
<td>deep into the girls ' confusion and pain as they struggle tragically to comprehend the chasm of knowledge that 's opened between them</td>
<td>[(positive, 0.9729612469673157), (negative, 0.027038726955652237)]</td>
<td>negative</td>
</tr>
<tr>
<th>13</th>
<td>a younger lad in zen and the art of getting laid in this prickly indie comedy of manners and misanthropy</td>
<td>[(positive, 0.9875985980033875), (negative, 0.012401451356709003)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>get on a board and , uh , shred ,</td>
<td>[(positive, 0.5352609753608704), (negative, 0.46473899483680725)]</td>
<td>negative</td>
</tr>
<tr>
<th>15</th>
<td>so preachy-keen and</td>
<td>[(positive, 0.9644021391868591), (negative, 0.035597823560237885)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>there 's an admirable rigor to jimmy 's relentless anger , and to the script 's refusal of a happy ending ,</td>
<td>[(positive, 0.9928517937660217), (negative, 0.007148175034672022)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>` christian bale 's quinn ( is ) a leather clad grunge-pirate with a hairdo like gandalf in a wind-tunnel and a simply astounding cor-blimey-luv-a-duck cockney accent . '</td>
<td>[(positive, 0.9713286757469177), (negative, 0.028671346604824066)]</td>
<td>negative</td>
</tr>
<tr>
<th>18</th>
<td>passion , grief and fear</td>
<td>[(positive, 0.9849751591682434), (negative, 0.015024829655885696)]</td>
<td>negative</td>
</tr>
<tr>
<th>19</th>
<td>to keep the extremes of screwball farce and blood-curdling family intensity on one continuum</td>
<td>[(positive, 0.8838250637054443), (negative, 0.11617499589920044)]</td>
<td>negative</td>
</tr>
</tbody>
</table>
</div>
```python
# Get wrong predictions where the model was highly confident (score > 0.99)
df = rb.load("sst2", query="predicted:ko and score:{0.99 TO *}").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>plays like a living-room war of the worlds , gaining most of its unsettling force from the suggested and the unknown .</td>
<td>[(positive, 0.9968075752258301), (negative, 0.003192420583218336)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>accept it as life and</td>
<td>[(positive, 0.9987508058547974), (negative, 0.0012492131209000945)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>overcomes the script 's flaws and envelops the audience in his character 's anguish , anger and frustration .</td>
<td>[(positive, 0.9953157901763916), (negative, 0.004684178624302149)]</td>
<td>negative</td>
</tr>
<tr>
<th>3</th>
<td>will no doubt rally to its cause , trotting out threadbare standbys like ` masterpiece ' and ` triumph ' and all that malarkey ,</td>
<td>[(negative, 0.9936562180519104), (positive, 0.006343740504235029)]</td>
<td>positive</td>
</tr>
<tr>
<th>4</th>
<td>we do n't get williams ' usual tear and a smile , just sneers and bile , and the spectacle is nothing short of refreshing .</td>
<td>[(positive, 0.9982783794403076), (negative, 0.0017216014675796032)]</td>
<td>negative</td>
</tr>
<tr>
<th>5</th>
<td>somehow manages to bring together kevin pollak , former wrestler chyna and dolly parton</td>
<td>[(negative, 0.9979034662246704), (positive, 0.002096540294587612)]</td>
<td>positive</td>
</tr>
<tr>
<th>6</th>
<td>there 's an admirable rigor to jimmy 's relentless anger , and to the script 's refusal of a happy ending ,</td>
<td>[(positive, 0.9928517937660217), (negative, 0.007148175034672022)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>the bottom line with nemesis is the same as it has been with all the films in the series : fans will undoubtedly enjoy it , and the uncommitted need n't waste their time on it</td>
<td>[(positive, 0.995850682258606), (negative, 0.004149340093135834)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>is genial but never inspired , and little</td>
<td>[(negative, 0.9921030402183533), (positive, 0.007896988652646542)]</td>
<td>positive</td>
</tr>
<tr>
<th>9</th>
<td>heaped upon a project of such vast proportions need to reap more rewards than spiffy bluescreen technique and stylish weaponry .</td>
<td>[(negative, 0.9958089590072632), (positive, 0.004191054962575436)]</td>
<td>positive</td>
</tr>
<tr>
<th>10</th>
<td>than recommended -- as visually bland as a dentist 's waiting room , complete with soothing muzak and a cushion of predictable narrative rhythms</td>
<td>[(negative, 0.9988711476325989), (positive, 0.0011287889210507274)]</td>
<td>positive</td>
</tr>
<tr>
<th>11</th>
<td>spectacle and</td>
<td>[(positive, 0.9941601753234863), (negative, 0.005839805118739605)]</td>
<td>negative</td>
</tr>
<tr>
<th>12</th>
<td>groan and</td>
<td>[(negative, 0.9987359642982483), (positive, 0.0012639997294172645)]</td>
<td>positive</td>
</tr>
<tr>
<th>13</th>
<td>'re not likely to have seen before , but beneath the exotic surface ( and exotic dancing ) it 's surprisingly old-fashioned .</td>
<td>[(positive, 0.9908103942871094), (negative, 0.009189637377858162)]</td>
<td>negative</td>
</tr>
<tr>
<th>14</th>
<td>its metaphors are opaque enough to avoid didacticism , and</td>
<td>[(negative, 0.990602970123291), (positive, 0.00939704105257988)]</td>
<td>positive</td>
</tr>
<tr>
<th>15</th>
<td>by kevin bray , whose crisp framing , edgy camera work , and wholesale ineptitude with acting , tone and pace very obviously mark him as a video helmer making his feature debut</td>
<td>[(positive, 0.9973387122154236), (negative, 0.0026612314395606518)]</td>
<td>negative</td>
</tr>
<tr>
<th>16</th>
<td>evokes the frustration , the awkwardness and the euphoria of growing up , without relying on the usual tropes .</td>
<td>[(positive, 0.9989104270935059), (negative, 0.0010896018939092755)]</td>
<td>negative</td>
</tr>
<tr>
<th>17</th>
<td>, incoherence and sub-sophomoric</td>
<td>[(negative, 0.9962475895881653), (positive, 0.003752368036657572)]</td>
<td>positive</td>
</tr>
<tr>
<th>18</th>
<td>seems intimidated by both her subject matter and the period trappings of this debut venture into the heritage business .</td>
<td>[(negative, 0.9923072457313538), (positive, 0.007692818529903889)]</td>
<td>positive</td>
</tr>
<tr>
<th>19</th>
<td>despite downplaying her good looks , carries a little too much ai n't - she-cute baggage into her lead role as a troubled and determined homicide cop to quite pull off the heavy stuff .</td>
<td>[(negative, 0.9948075413703918), (positive, 0.005192441400140524)]</td>
<td>positive</td>
</tr>
</tbody>
</table>
</div>
```python
# Get dataset slice with wrong predictions and low confidence (score < 0.6)
df = rb.load("sst2", query="predicted:ko and score:{* TO 0.6}").to_pandas()
# display first 20 examples
with pd.option_context('display.max_colwidth', None):
display(df[["text", "prediction", "annotation"]].head(20))
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>text</th>
<th>prediction</th>
<th>annotation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>get on a board and , uh , shred ,</td>
<td>[(positive, 0.5352609753608704), (negative, 0.46473899483680725)]</td>
<td>negative</td>
</tr>
<tr>
<th>1</th>
<td>is , truly and thankfully , a one-of-a-kind work</td>
<td>[(positive, 0.5819814801216125), (negative, 0.41801854968070984)]</td>
<td>negative</td>
</tr>
<tr>
<th>2</th>
<td>starts as a tart little lemon drop of a movie and</td>
<td>[(negative, 0.5641832947731018), (positive, 0.4358167052268982)]</td>
<td>positive</td>
</tr>
<tr>
<th>3</th>
<td>between flaccid satire and what</td>
<td>[(negative, 0.5532692074775696), (positive, 0.44673076272010803)]</td>
<td>positive</td>
</tr>
<tr>
<th>4</th>
<td>it certainly does n't feel like a film that strays past the two and a half mark</td>
<td>[(negative, 0.5386656522750854), (positive, 0.46133431792259216)]</td>
<td>positive</td>
</tr>
<tr>
<th>5</th>
<td>who liked there 's something about mary and both american pie movies</td>
<td>[(negative, 0.5086333751678467), (positive, 0.4913666248321533)]</td>
<td>positive</td>
</tr>
<tr>
<th>6</th>
<td>many good ideas as bad is the cold comfort that chin 's film serves up with style and empathy</td>
<td>[(positive, 0.557632327079773), (negative, 0.44236767292022705)]</td>
<td>negative</td>
</tr>
<tr>
<th>7</th>
<td>about its ideas and</td>
<td>[(positive, 0.518638551235199), (negative, 0.48136141896247864)]</td>
<td>negative</td>
</tr>
<tr>
<th>8</th>
<td>of a sick and evil woman</td>
<td>[(negative, 0.5554516315460205), (positive, 0.4445483684539795)]</td>
<td>positive</td>
</tr>
<tr>
<th>9</th>
<td>though this rude and crude film does deliver a few gut-busting laughs</td>
<td>[(positive, 0.5045541524887085), (negative, 0.4954459071159363)]</td>
<td>negative</td>
</tr>
<tr>
<th>10</th>
<td>to squeeze the action and our emotions into the all-too-familiar dramatic arc of the holocaust escape story</td>
<td>[(negative, 0.5050069093704224), (positive, 0.49499306082725525)]</td>
<td>positive</td>
</tr>
<tr>
<th>11</th>
<td>that throws a bunch of hot-button items in the viewer 's face and asks to be seen as hip , winking social commentary</td>
<td>[(negative, 0.5873904228210449), (positive, 0.41260960698127747)]</td>
<td>positive</td>
</tr>
<tr>
<th>12</th>
<td>'s soulful and unslick</td>
<td>[(positive, 0.5931627750396729), (negative, 0.40683719515800476)]</td>
<td>negative</td>
</tr>
</tbody>
</table>
</div>
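Beyond eyeballing individual examples, the same dataframe makes it easy to summarize where the model goes wrong. A minimal sketch reusing the `df` loaded above (it assumes, as in the rows shown, that each `prediction` list is sorted with the winning label first):

```python
# Split each prediction into its top label and score
df["pred_label"] = df["prediction"].map(lambda p: p[0][0])
df["pred_score"] = df["prediction"].map(lambda p: p[0][1])

# How confident was the model on these wrong predictions?
print(df["pred_score"].describe())

# In which direction do the errors go (predicted label vs. annotation)?
print(df.groupby(["pred_label", "annotation"]).size())
```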
```python
from rubrix.metrics.commons import *
```
```python
text_length("sst2", query="predicted:ko").visualize()
```
 | rubrix/sst2_with_predictions | [
"region:us"
] | 2022-03-09T14:13:30+00:00 | {} | 2022-09-16T12:23:05+00:00 |
8279d43fc305c5248886d841cb49bd8380456ec9 |
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts and debug codebases that would eventually use the original OSCAR dataset.
Legally speaking, using this dataset is equivalent to using a processed version of OSCAR. I take no credit for gathering the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pretrain language models and word representations.
### Languages
All the data is distributed by language; both the original and the deduplicated versions are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
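For reference, subcorpora can be loaded by configuration name, following the original OSCAR loader. A minimal sketch (the configuration string below follows OSCAR's naming scheme and is an assumption for this mirror):

```python
from datasets import load_dataset

# Load the deduplicated English subcorpus of the mini mirror
dataset = load_dataset("nthngdy/oscar-mini", "unshuffled_deduplicated_en", split="train")

print(dataset)
print(dataset[0]["text"][:200])  # each record carries an `id` and a `text` field
```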
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, bounding the number of parallel operations at a given time by the number of available threads instead of the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, with the goclassy pipeline, one does not have to wait for a whole WET file to be downloaded, decompressed and classified before starting on the next one: a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
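For illustration, the line-level filter described above can be approximated in a few lines of Python (goclassy itself is written in Go; this sketch only mirrors the stated rules):

```python
def keep_line(raw: bytes) -> bool:
    """Approximate goclassy's pre-classification line filter."""
    try:
        line = raw.decode("utf-8")  # lines containing invalid UTF-8 are discarded
    except UnicodeDecodeError:
        return False
    return len(line) >= 100         # as are lines shorter than 100 characters
```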
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR the WET files of Common Crawl were used. These contain the extracted plain texts from the websites mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet, and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
| nthngdy/oscar-mini | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:oscar",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb",
"language:ba",
"language:be",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"language:ca",
"language:ce",
"language:ceb",
"language:ckb",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lb",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nds",
"language:ne",
"language:nl",
"language:nn",
"language:no",
"language:or",
"language:os",
"language:pa",
"language:pl",
"language:pnb",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:sa",
"language:sah",
"language:sd",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:yi",
"language:zh",
"license:cc0-1.0",
"arxiv:2010.14571",
"region:us"
] | 2022-03-09T14:18:51+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["af", "am", "ar", "arz", "as", "az", "azb", "ba", "be", "bg", "bn", "bo", "br", "ca", "ce", "ceb", "ckb", "cs", "cv", "cy", "da", "de", "dv", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gl", "gu", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mhr", "mk", "ml", "mn", "mr", "ms", "mt", "my", "nds", "ne", "nl", "nn", "no", "or", "os", "pa", "pl", "pnb", "ps", "pt", "ro", "ru", "sa", "sah", "sd", "sh", "si", "sk", "sl", "sq", "sr", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "ug", "uk", "ur", "uz", "vi", "yi", "zh"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "source_datasets": ["oscar"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "paperswithcode_id": "oscar", "pretty_name": "OSCAR"} | 2022-12-06T11:05:51+00:00 |
f1cb70125a6b1ad5dd0cc97501476309cf540b3d | # Dataset Card for Contextualized CommonGen (C2Gen)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
  - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** [Non-Residual Prompting](https://github.com/FreddeFrallan/Non-Residual-Prompting)
- **Paper:** [Fine-Grained Controllable Text Generation Using Non-Residual Prompting](https://aclanthology.org/2022.acl-long.471)
- **Point of Contact:** [Fredrik Carlsson](mailto:Fredrik.Carlsson@ri.se)
### Dataset Summary
CommonGen [Lin et al., 2020](https://arxiv.org/abs/1911.03705) is a dataset for the constrained text generation task of word inclusion, but the task formulation does not allow a context to be included. Therefore, to complement CommonGen, we provide an extended test set, C2Gen [Carlsson et al., 2022](https://aclanthology.org/2022.acl-long.471), where an additional context is provided for each set of target words. The task is therefore reformulated to generate commonsensical text that both includes the given words and adheres to the given context.
### Languages
English
## Dataset Structure
### Data Instances
{"Context": "The show came on the television with people singing. The family all gathered to watch. They all became silent when the show came on.", "Words": ["follow", "series", "voice"]}
### Data Fields
- context: the generated text by the model should adhere to this text
- words: the words that should be included in the generated continuation
### Data Splits
Test
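Since only a test split is distributed, a typical workflow is to turn each record into a generation prompt. A minimal sketch (the prompt template is ours, not part of the dataset; field names follow the lower-case field list above, while the raw JSON instance uses capitalized keys, so adjust if needed):

```python
from datasets import load_dataset

c2gen = load_dataset("Non-Residual-Prompting/C2Gen", split="test")

example = c2gen[0]
prompt = (
    f"{example['context']} "
    f"Continue the story using all of these words: {', '.join(example['words'])}."
)
print(prompt)
```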
## Dataset Creation
### Curation Rationale
C2Gen was created because the authors of the paper believed that the task formulation of CommonGen is too narrow, and that it needlessly incentivizes researchers to focus on methods that do not support context. This runs counter to their belief that many application areas necessitate the consideration of surrounding context. Therefore, to complement CommonGen, they provide an extended test set where an additional context is provided for each set of target words.
### Initial Data Collection and Normalization
The dataset was constructed with the help of the crowdsourcing platform Mechanical Turk. Each remaining concept set manually received a textual context. To assure the quality of the data generation, only native English speakers with a recorded high acceptance rate were allowed to participate. Finally, all contexts were manually verified and fixed for typos and poor quality. Furthermore, we want to raise awareness that C2Gen can contain personal data or offensive content. If you encounter such a sample, please reach out to us.
## Licensing Information
license: cc-by-sa-4.0
| Non-Residual-Prompting/C2Gen | [
"task_categories:text-generation",
"size_categories:<100K",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:1911.03705",
"region:us"
] | 2022-03-09T16:09:50+00:00 | {"language": ["en"], "license": ["cc-by-sa-4.0"], "size_categories": ["<100K"], "task_categories": ["text-generation"]} | 2022-10-25T09:02:58+00:00 |
a8158d1fac10864c3424d53662fe63bf7d82dd87 |
# Dataset Card for CLUTRR
## Table of Contents
## Dataset Description
### Dataset Summary
**CLUTRR** (**C**ompositional **L**anguage **U**nderstanding and **T**ext-based **R**elational **R**easoning), a diagnostic benchmark suite, is first introduced in (https://arxiv.org/abs/1908.06177) to test the systematic generalization and inductive reasoning capabilities of NLU systems.
The CLUTRR benchmark allows us to test a model’s ability for **systematic generalization** by testing on stories that contain unseen combinations of logical rules, and test for the various forms of **model robustness** by adding different kinds of superfluous noise facts to the stories.
### Dataset Task
CLUTRR contains a large set of semi-synthetic stories involving hypothetical families. The task is to infer the relationship between two family members, whose relationship is not explicitly mentioned in the given story.
Join the CLUTRR community at https://www.cs.mcgill.ca/~ksinha4/clutrr/
## Dataset Structure
We show detailed information for all 14 configurations of the dataset.
### Configurations
**id**: a unique series of characters and numbers that identifies each instance <br>
**story**: one semi-synthetic story involving hypothetical families<br>
**query**: the target query/relation which contains two names, where the goal is to classify the relation that holds between these two entities<br>
**target**: indicator for the correct relation for the query <br>
**target_text**: text for the correct relation for the query <br>
the indicator follows this mapping (a ready-to-use Python version is given after the field list): <br>
"aunt": 0, "son-in-law": 1, "grandfather": 2, "brother": 3, "sister": 4, "father": 5, "mother": 6, "grandmother": 7, "uncle": 8, "daughter-in-law": 9, "grandson": 10, "granddaughter": 11, "father-in-law": 12, "mother-in-law": 13, "nephew": 14, "son": 15, "daughter": 16, "niece": 17, "husband": 18, "wife": 19, "sister-in-law": 20 <br>
**clean\_story**: the story without noise factors<br>
**proof\_state**: the logical rule of the kinship generation <br>
**f\_comb**: the kinships of the query followed by the logical rule<br>
**task\_name**: the task of the sub-dataset, in the form "task_[num1].[num2]"<br>
The first number [num1] indicates the status of noise facts added in the story: 1- no noise facts; 2- Irrelevant facts*; 3- Supporting facts*; 4- Disconnected facts*.<br>
The second number [num2] directly indicates the length of clauses for the task target.<br>
*for example:*<br>
*task_1.2 -- task requiring clauses of length 2 without adding noise facts*<br>
*task_2.3 -- task requiring clauses of length 3 with Irrelevant noise facts added in the story*<br>
**story\_edges**: all the edges in the kinship graph<br>
**edge\_types**: similar to the f\_comb, another form of the query's kinships followed by the logical rule <br>
**query\_edge**: the corresponding edge of the target query in the kinship graph<br>
**genders**: genders of names appeared in the story<br>
**task\_split**: train,test <br>
*Further explanation of Irrelevant, Supporting and Disconnected facts can be found in Section 3.5 (Robust Reasoning) of https://arxiv.org/abs/1908.06177
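For convenience when decoding model outputs, the relation mapping above can be written out as a plain Python list (copied verbatim from the rule):

```python
RELATION_NAMES = [
    "aunt", "son-in-law", "grandfather", "brother", "sister", "father",
    "mother", "grandmother", "uncle", "daughter-in-law", "grandson",
    "granddaughter", "father-in-law", "mother-in-law", "nephew", "son",
    "daughter", "niece", "husband", "wife", "sister-in-law",
]

def decode_target(target: int) -> str:
    """Map the integer `target` field back to its relation name."""
    return RELATION_NAMES[target]

assert decode_target(7) == "grandmother"  # matches the example instance below
```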
### Data Instances
An example from the 'train' split of Task 1.2 looks as follows.
```
{
"id": b2b9752f-d7fa-46a9-83ae-d474184c35b6,
"story": "[Lillian] and her daughter [April] went to visit [Lillian]'s mother [Ashley] last Sunday.",
"query": ('April', 'Ashley'),
"target": 7,
"target_text": "grandmother",
"clean_story": [Lillian] and her daughter [April] went to visit [Lillian]'s mother [Ashley] last Sunday.,
"proof_state": [{('April', 'grandmother', 'Ashley'): [('April', 'mother', 'Lillian'), ('Lillian', 'mother', 'Ashley')]}],
"f_comb": "mother-mother",
"task_name": "task_1.2",
"story_edges": [(0, 1), (1, 2)],
"edge_types": ['mother', 'mother'],
"query_edge": (0, 2),
"genders": "April:female,Lillian:female,Ashley:female",
"task_split": trian
}
```
### Data Splits
#### Data Split Name
(corresponding with the name used in the paper)
| task_split | split name in paper | train & validation tasks | test tasks |
| :---: | :---: | :-: | :-: |
| gen_train23_test2to10 | data_089907f8 | 1.2, 1.3 | 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10 |
| gen_train234_test2to10 | data_db9b8f04 | 1.2, 1.3, 1.4| 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.10 |
| rob_train_clean_23_test_all_23 | data_7c5b0e70 | 1.2,1.3 | 1.2, 1.3, 2.3, 3.3, 4.3 |
| rob_train_sup_23_test_all_23 | data_06b8f2a1 | 2.2, 2.3 | 2.2, 2.3, 1.3, 3.3, 4.3 |
| rob_train_irr_23_test_all_23 | data_523348e6 | 3.2, 3.3 | 3.2, 3.3, 1.3, 2.3, 4.3 |
| rob_train_disc_23_test_all_23 | data_d83ecc3e | 4.2, 4.3 | 4.2, 4.3, 1.3, 2.3, 3.3 |
#### Data Split Summary
Number of Instances in each split
| task_split | train | validation | test |
| :-: | :---: | :---: | :---: |
| gen_train23_test2to10 | 9074 | 2020 | 1146 |
| gen_train234_test2to10 | 12064 | 3019 | 1048 |
| rob_train_clean_23_test_all_23 | 8098 | 2026 | 447 |
| rob_train_disc_23_test_all_23 | 8080 | 2020 | 445 |
| rob_train_irr_23_test_all_23 | 8079 | 2020 | 444 |
| rob_train_sup_23_test_all_23 | 8123 | 2031 | 447 |
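Each `task_split` above is expected to be exposed as a dataset configuration. A minimal loading sketch (the configuration name is assumed to match the `task_split` column):

```python
from datasets import load_dataset

clutrr = load_dataset("CLUTRR/v1", "gen_train23_test2to10")

sample = clutrr["train"][0]
print(sample["query"], "->", sample["target_text"])
```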
## Citation Information
```
@article{sinha2019clutrr,
Author = {Koustuv Sinha and Shagun Sodhani and Jin Dong and Joelle Pineau and William L. Hamilton},
Title = {CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text},
Year = {2019},
journal = {Empirical Methods of Natural Language Processing (EMNLP)},
arxiv = {1908.06177}
}
``` | CLUTRR/v1 | [
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:unknown",
"arxiv:1908.06177",
"region:us"
] | 2022-03-09T19:33:00+00:00 | {"language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"]} | 2022-10-25T09:03:19+00:00 |
095f98c5853b271b00c05bbe4f2167ecdbe8951f |
# Dataset Description
## Dataset Summary
This dataset is a mirror of the Uniprot/SwissProt database. It contains the names and sequences of >500K proteins.
This dataset was parsed from the FASTA file at https://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.fasta.gz.
Supported Tasks and Leaderboards: None
Languages: English
## Dataset Structure
### Data Instances
Data Fields: id, description, sequence
Data Splits: None
## Dataset Creation
The dataset was downloaded and parsed into a `dataset` object and uploaded unchanged.
Initial Data Collection and Normalization: Dataset was downloaded and curated on 03/09/2022.
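A parse along these lines is straightforward to reproduce with Biopython; a minimal sketch of turning the SwissProt FASTA file into `id`/`description`/`sequence` records (assuming the gzipped FASTA linked above has been downloaded locally):

```python
import gzip

from Bio import SeqIO  # pip install biopython

records = []
with gzip.open("uniprot_sprot.fasta.gz", "rt") as handle:
    for rec in SeqIO.parse(handle, "fasta"):
        records.append({
            "id": rec.id,                    # e.g. "sp|P12345|NAME_SPECIES"
            "description": rec.description,  # the full FASTA header line
            "sequence": str(rec.seq),        # the amino-acid sequence
        })

print(len(records), "proteins parsed")
```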
## Considerations for Using the Data
Social Impact of Dataset: Due to the tendency of HIV to mutate, drug resistance is a common issue when attempting to treat those infected with HIV.
Protease inhibitors are a class of drugs to which HIV is known to develop resistance via mutations.
Thus, a collection of protease sequences known to be resistant to one or more drugs provides significant data for computational analysis of protease resistance mutations.
Discussion of Biases: Due to the sampling nature of this database, it is predominantly composed of genes from "well-studied" genomes. This may limit the breadth of the genes contained.
## Additional Information:
- Dataset Curators: Will Dampier
- Citation Information: TBA
| damlab/uniprot | [
"region:us"
] | 2022-03-09T20:00:12+00:00 | {"liscence": "mit"} | 2022-03-12T12:08:29+00:00 |
4887946743ee9325f7597ddadb72ece8b74a8105 | ---
annotations_creators:
- Parth Parekh
languages:
- en
licenses:
- MIT
multilinguality:
- monolingual
size_categories:
- 0<n<100
source_datasets:
- original
task_categories:
- sentence-categorization
---
# Dataset Card for spotifinders
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
[Needs More Information]
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | juched/spotifinders | [
"region:us"
] | 2022-03-10T01:44:44+00:00 | {} | 2022-03-10T01:46:51+00:00 |
29429c80610b9f235148694561358a1bd092c927 | juched/spotifinders-dataset | [
"license:mit",
"region:us"
] | 2022-03-10T04:39:56+00:00 | {"license": "mit"} | 2022-03-28T23:42:18+00:00 |
|
142e3e33e59f6c13239b5b743f16e5bfcfbc9abf | PaddlePaddle/dureader_robust | [
"license:apache-2.0",
"region:us"
] | 2022-03-10T04:46:26+00:00 | {"license": "apache-2.0"} | 2022-03-10T05:14:18+00:00 |
|
51f31e2aa96a98b68b3595acca660904a3ffca33 | # AutoNLP Dataset for project: cat33
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset has been automatically processed by AutoNLP for project cat33.
### Languages
The BCP-47 code for the dataset's language is zh.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\"\u5341\u56db\u4e94\"\u65f6\u671f\uff0c\u4f9d\u6258\u6d77\u5357\u5730\u7406\u533a\u4f4d\u4f18\u52bf\u548c\u6d77\u6d0b\u8d44\u6e90\u4f18\u52bf\uff0c\u52a0\u5feb\u57f9\u80b2\u58ee\u5927\u6d77\u6d0b\u7ecf\u6d4e\uff0c\u62d3\u5c55\u6d77\u5357\u7ecf\u6d4e\u53d1\u5c55\u84dd\u8272\u7a7a\u95f4\uff0c\u5bf9\u670d\u52a1\u6d77\u6d0b\u5f3a\u56fd\u6218\u7565\u3001\u63a8\u52a8\u6d77\u5357\u81ea\u7531\u8d38\u6613\u6e2f\u5efa\u8bbe\u53ca\u5b9e\u73b0\u81ea\u8eab\u53d1\u5c55\u5177\u6709\u91cd\u8981\u610f\u4e49",
"target": 9
},
{
"text": "\u9010\u6b65\u5b9e\u65bd\u533b\u7597\u5668\u68b0\u552f\u4e00\u6807\u8bc6\uff0c\u52a0\u5f3a\u4e0e\u533b\u7597\u7ba1\u7406\u3001\u533b\u4fdd\u7ba1\u7406\u7b49\u8854\u63a5",
"target": 8
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=32, names=['\u4e92\u8054\u7f51\u670d\u52a1', '\u4ea4\u901a\u8fd0\u8f93', '\u4f11\u95f2\u670d\u52a1', '\u4f20\u5a92', '\u4fe1\u606f\u6280\u672f', '\u516c\u7528\u4e8b\u4e1a', '\u519c\u4e1a', '\u5316\u5de5\u5236\u9020', '\u533b\u836f\u751f\u7269', '\u5546\u4e1a\u8d38\u6613', '\u56fd\u9632\u519b\u5de5', '\u5bb6\u7528\u7535\u5668', '\u5efa\u7b51\u4e1a', '\u623f\u5730\u4ea7', '\u6559\u80b2', '\u6587\u5316', '\u6709\u8272\u91d1\u5c5e', '\u673a\u68b0\u88c5\u5907\u5236\u9020', '\u6797\u4e1a', '\u6c7d\u8f66\u5236\u9020', '\u6e14\u4e1a', '\u7535\u5b50\u5236\u9020', '\u7535\u6c14\u8bbe\u5907', '\u755c\u7267\u4e1a', '\u7eba\u7ec7\u670d\u88c5\u5236\u9020', '\u8f7b\u5de5\u5236\u9020', '\u901a\u4fe1', '\u91c7\u77ff\u4e1a', '\u94a2\u94c1', '\u94f6\u884c', '\u975e\u94f6\u91d1\u878d', '\u98df\u54c1\u996e\u6599'], id=None)"
}
```
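Because `target` is a `ClassLabel`, the integer labels can be mapped back to their Chinese industry names through the dataset features. A minimal sketch (it assumes the processed data loads directly from this repository):

```python
from datasets import load_dataset

ds = load_dataset("kyleinincubated/autonlp-data-cat33", split="train")
target_feature = ds.features["target"]

example = ds[0]
print(example["text"], "->", target_feature.int2str(example["target"]))
```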
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1836 |
| valid | 460 |
| kyleinincubated/autonlp-data-cat33 | [
"task_categories:text-classification",
"language:zh",
"region:us"
] | 2022-03-10T05:59:36+00:00 | {"language": ["zh"], "task_categories": ["text-classification"]} | 2022-10-25T09:03:04+00:00 |
ad1f65afa83d161c5860ad126ab75c4287fb6cbe | English poems and their genres (test dataset). | Georgii/poetry-genre | [
"region:us"
] | 2022-03-10T08:09:08+00:00 | {} | 2022-03-10T08:12:23+00:00 |
d9845634dc0f9cb48d4a26c9f6d8986fb87d2027 |
# Dataset Card for "IndicHeadlineGeneration"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicHeadlineGeneration is the news headline generation dataset released as part of IndicNLG Suite. Each input document is paired with its headline as the output. We create this dataset in eleven languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta and te. The total size of the dataset is 1.4M examples.
### Supported Tasks and Leaderboards
**Tasks:** Headline Generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{'id': '14',
'input': "अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।अरियाना ग्रांडे नई दिल्लीः अमेरिकी सिंगर अरियाना ग्रांडे का नया म्यूजिक एल्बम 'थैंक यू नेक्स्ट' रिलीज हो गया है।एक दिन पहले ही रिलीज हुए इस गाने को देखने वालों की संख्या 37,663,702 पहुंच गई है।यूट्यूब पर अपलोड इस गाने को 24 घंटे के भीतर 3.8 मिलियन लोगों ने पसंद किया है।वहीं इस वीडियो पर कमेंट्स की बाढ़ आ गई है।गाने में मीन गर्ल्स, ब्रिंग इट ऑन, लीगली ब्लॉंड और 13 गोइंग 30 के कुछ फेमस सीन्स को दिखाया गया है।गाने में क्रिस जैनर का कैमियो भी है।बता दें अभी कुछ महीने पहले ही अरियाना के एक्स ब्वॉयफ्रेंड मैक मिलर का 26 साल की उम्र में निधन हो गया था।इस खबर को सुनकर अरियाना टूट सी गई थीं।उन्होंने सोशल मीडिया पर पोस्ट कर कई बार अपनी भावनाएं व्यक्त की।अरियाना ग्रांडे और रैपर मैक मिलर ने करीब 2 साल तक एक दूसरे को डेट किया।मैक के निधन की वजह ड्रग्स की ओवरडोज बताई गई।दोनों की मुलाकात साल 2012 में हुई थी।दोनों ने एक कंसर्ट में साथ कई गानों पर परफॉर्म भी किया था।जिसके बाद दोनों एक दूसरे को डेट करने लगे लेकिन नशे की लत के कारण अरियाना ने उनसे ब्रेकअप कर लिया।पर देश-विदेश की ताजा और स्पेशल स्टोरी पढ़ते हुए अपने आप को रखिए अप-टू-डेट।के लिए क्लिक करें सिनेमा सेक्शन",
'target': 'अरियाना ग्रांडे का नया गाना रिलीज, सोशल मीडिया पर वायरल',
'url': 'https://www.indiatv.in/entertainment/hollywood-ariana-grande-shatters-24-hour-views-record-612835'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: News article as input.
- `target (string)`: Headline of the news article as output.
- `url (string)`: Source web link of the news article.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 29,631 | 14,592 | 14,808 |
Bengali | bn | 113,424 | 14,739 | 14,568 |
Gujarati | gu | 199,972 | 31,270 | 31,215 |
Hindi | hi | 208,221 | 44,738 | 44,514 |
Kannada | kn | 132,380 | 19,416 | 3,261 |
Malayalam | ml | 10,358 | 5,388 | 5,220 |
Marathi | mr | 114,042 | 14,253 | 14,340 |
Oriya | or | 58,225 | 7,484 | 7,137 |
Punjabi | pa | 48,441 | 6,108 | 6,086 |
Tamil | ta | 60,650 | 7,616 | 7,688 |
Telugu | te | 21,352 | 2,690 | 2,675 |
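Each language is expected to be a separate configuration keyed by its ISO 639-1 code. A minimal loading sketch (the configuration name `hi` is an assumption based on the table above):

```python
from datasets import load_dataset

headlines = load_dataset("ai4bharat/IndicHeadlineGeneration", "hi")

sample = headlines["train"][0]
print(sample["input"][:100], "...")
print("headline:", sample["target"])
```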
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
For Hindi, web sources like [Dainik Bhaskar](https://www.bhaskar.com), [Naidunia](https://www.naidunia.com/), [NDTV](https://ndtv.in/), [Business Standard](https://hindi.business-standard.com/) and [IndiaTV](https://www.indiatv.in/) were used. For other languages, a modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) dataset was used.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) | ai4bharat/IndicHeadlineGeneration | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:27K<n<341K",
"source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437",
"region:us"
] | 2022-03-10T09:58:27+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["27K<n<341K"], "source_datasets": ["original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages."], "task_categories": ["conditional-text-generation"], "task_ids": ["conditional-text-generation-other-headline-generation"], "pretty_name": "IndicHeadlineGeneration"} | 2022-10-13T05:08:20+00:00 |
53cfce5e0ca8da828ee1b6223dcf3ea986582812 |
# Dataset Card for "IndicSentenceSummarization"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicSentenceSummarization is the sentence summarization dataset released as part of IndicNLG Suite. Each input sentence is paired with its summary as the output. We create this dataset in eleven languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta and te. The total size of the dataset is 431K examples.
### Supported Tasks and Leaderboards
**Tasks:** Sentence Summarization
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{'id': '5',
'input': 'जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया।',
'target': 'जम्मू-कश्मीर : सुरक्षाबलों के साथ मुठभेड़ में 2 आतंकवादी ढेर',
'url': 'https://www.indiatv.in/india/national-jammu-kashmir-two-millitant-killed-in-encounter-with-security-forces-574529'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `input (string)`: Input sentence.
- `target (string)`: Output summary.
- `url (string)`: Source web link of the sentence.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 10,812 | 5,232 | 5,452 |
Bengali | bn | 17,035 | 2,355 | 2,384 |
Gujarati | gu | 54,788 | 8,720 | 8,460 |
Hindi | hi | 78,876 | 16,935 | 16,835 |
Kannada | kn | 61,220 | 9,024 | 1,485 |
Malayalam | ml | 2,855 | 1,520 | 1,580 |
Marathi | mr | 27,066 | 3,249 | 3,309 |
Oriya | or | 12,065 | 1,539 | 1,440 |
Punjabi | pa | 31,630 | 4,004 | 3,967 |
Tamil | ta | 23,098 | 2,874 | 2,948 |
Telugu | te | 7,119 | 878 | 862 |
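A quick sanity check on this data is the character-level compression ratio between input sentences and their summaries. A minimal sketch (again assuming per-language configurations named by ISO 639-1 code):

```python
from datasets import load_dataset

ds = load_dataset("ai4bharat/IndicSentenceSummarization", "hi", split="train")

ratios = [len(ex["target"]) / len(ex["input"]) for ex in ds if len(ex["input"]) > 0]
print(f"mean target/input length ratio: {sum(ratios) / len(ratios):.2f}")
```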
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
It is a modified subset of [IndicHeadlineGeneration](https://huggingface.co/datasets/ai4bharat/IndicHeadlineGeneration) dataset.
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) | ai4bharat/IndicSentenceSummarization | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:5K<n<112K",
"source_datasets:original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages.",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437",
"region:us"
] | 2022-03-10T09:59:05+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["5K<n<112K"], "source_datasets": ["original for Hindi, and modified [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) for other languages."], "task_categories": ["conditional-text-generation"], "task_ids": ["conditional-text-generation-other-sentence-summarization"], "pretty_name": "IndicSentenceSummarization"} | 2022-10-13T05:08:31+00:00 |
9b177ff8d3eeaf8d07d2918546e9b79ee655e29b |
# Dataset Card for "IndicWikiBio"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicWikiBio is the WikiBio dataset released as part of IndicNLG Suite. Each example has four fields: id, infobox, serialized infobox and summary. We create this dataset in nine languages: as, bn, hi, kn, ml, or, pa, ta and te. The total size of the dataset is 57,426 examples.
### Supported Tasks and Leaderboards
**Tasks:** WikiBio
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{
"id": 26,
"infobox": "name_1:सी॰\tname_2:एल॰\tname_3:रुआला\toffice_1:सांसद\toffice_2:-\toffice_3:मिजोरम\toffice_4:लोक\toffice_5:सभा\toffice_6:निर्वाचन\toffice_7:क्षेत्र\toffice_8:।\toffice_9:मिजोरम\tterm_1:2014\tterm_2:से\tterm_3:2019\tnationality_1:भारतीय",
"serialized_infobox": "<TAG> name </TAG> सी॰ एल॰ रुआला <TAG> office </TAG> सांसद - मिजोरम लोक सभा निर्वाचन क्षेत्र । मिजोरम <TAG> term </TAG> 2014 से 2019 <TAG> nationality </TAG> भारतीय",
"summary": "सी॰ एल॰ रुआला भारत की सोलहवीं लोक सभा के सांसद हैं।"
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `infobox (string)`: Raw Infobox.
- `serialized_infobox (string)`: Serialized Infobox as input.
- `summary (string)`: Summary of Infobox/First line of Wikipedia page.
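The raw `infobox` string flattens each field into tab-separated `field_index:token` pairs. A minimal sketch of reassembling it into a field dictionary (based only on the example instance above):

```python
from collections import defaultdict

def parse_infobox(infobox: str) -> dict:
    """Group tab-separated 'field_i:token' pairs back into field -> text."""
    fields = defaultdict(list)
    for pair in infobox.split("\t"):
        key, _, token = pair.partition(":")
        field = key.rsplit("_", 1)[0]  # strip the trailing _<index>
        fields[field].append(token)
    return {field: " ".join(tokens) for field, tokens in fields.items()}

# e.g. parse_infobox("name_1:सी॰\tname_2:एल॰\tname_3:रुआला") == {"name": "सी॰ एल॰ रुआला"}
```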
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Test | Val |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 1,300 | 391 | 381 |
Bengali | bn | 4,615 | 1,521 | 1,567 |
Hindi | hi | 5,684 | 1,919 | 1,853 |
Kannada | kn | 1,188 | 389 | 383 |
Malayalam | ml | 5,620 | 1,835 | 1,896 |
Oriya | or | 1,687 | 558 | 515 |
Punjabi | pa | 3,796 | 1,227 | 1,331 |
Tamil | ta | 8,169 | 2,701 | 2,632 |
Telugu | te | 2,594 | 854 | 820 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
None
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
| ai4bharat/IndicWikiBio | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1960<n<11,502",
"source_datasets:none. Originally generated from www.wikimedia.org.",
"language:as",
"language:bn",
"language:hi",
"language:kn",
"language:ml",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437",
"region:us"
] | 2022-03-10T09:59:23+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["as", "bn", "hi", "kn", "ml", "or", "pa", "ta", "te"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1960<n<11,502"], "source_datasets": ["none. Originally generated from www.wikimedia.org."], "task_categories": ["conditional-text-generation"], "task_ids": ["conditional-text-generation-other-wikibio"], "pretty_name": "IndicWikiBio"} | 2022-10-13T05:08:34+00:00 |
3c9cfa7c513097aa3e475ad34d8578c52b48514f |
# Dataset Card for "IndicQuestionGeneration"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicQuestionGeneration is the question generation dataset released as part of IndicNLG Suite. Each
example has five fields: id, squad_id, answer, context and question. We create this dataset in eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, and te. This is translated data: the examples are identical in content across languages, expressed in each language in turn.
The number of examples in each language is 98,027.
### Supported Tasks and Leaderboards
**Tasks:** Question Generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{
"id": 8,
"squad_id": "56be8e613aeaaa14008c90d3",
"answer": "अमेरिकी फुटबॉल सम्मेलन",
"context": "अमेरिकी फुटबॉल सम्मेलन (एएफसी) के चैंपियन डेनवर ब्रोंकोस ने नेशनल फुटबॉल कांफ्रेंस (एनएफसी) की चैंपियन कैरोलिना पैंथर्स को 24-10 से हराकर अपना तीसरा सुपर बाउल खिताब जीता।",
"question": "एएफसी का मतलब क्या है?"
}
```
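Assuming the dataset is exposed with one configuration per language (the config name `hi` below follows the ISO codes listed above and should be verified against the dataset viewer), loading a split looks like:
```
from datasets import load_dataset

# "hi" is assumed to be the Hindi configuration name.
qg_hi = load_dataset("ai4bharat/IndicQuestionGeneration", "hi", split="train")
print(qg_hi[0]["context"], "->", qg_hi[0]["question"])
```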
### Data Fields
- `id (string)`: Unique identifier.
- `squad_id (string)`: Unique identifier in Squad dataset.
- `answer (string)`: The answer; one of the two inputs.
- `context (string)`: Context, the other input.
- `question (string)`: Question, the output.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 69,979 | 17,495 | 10,553 |
Bengali | bn | 69,979 | 17,495 | 10,553 |
Gujarati | gu | 69,979 | 17,495 | 10,553 |
Hindi | hi | 69,979 | 17,495 | 10,553 |
Kannada | kn | 69,979 | 17,495 | 10,553 |
Malayalam | ml | 69,979 | 17,495 | 10,553 |
Marathi | mr | 69,979 | 17,495 | 10,553 |
Oriya | or | 69,979 | 17,495 | 10,553 |
Punjabi | pa | 69,979 | 17,495 | 10,553 |
Tamil | ta | 69,979 | 17,495 | 10,553 |
Telugu | te | 69,979 | 17,495 | 10,553 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
[SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) | ai4bharat/IndicQuestionGeneration | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:98K<n<98K",
"source_datasets:we start with the SQuAD question answering dataset repurposed to serve as a question generation dataset. We translate this dataset into different Indic languages.",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:cc-by-nc-4.0",
"arxiv:2203.05437",
"region:us"
] | 2022-03-10T09:59:41+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["98K<n<98K"], "source_datasets": ["we start with the SQuAD question answering dataset repurposed to serve as a question generation dataset. We translate this dataset into different Indic languages."], "task_categories": ["conditional-text-generation"], "task_ids": ["conditional-text-generation-other-question-generation"], "pretty_name": "IndicQuestionGeneration"} | 2022-10-13T05:08:25+00:00 |
bb3d15353a87a2b256ffb6abc5fa0436b4333b30 | aasd291809733/myself | [
"license:apache-2.0",
"region:us"
] | 2022-03-10T13:46:37+00:00 | {"license": "apache-2.0"} | 2022-03-10T13:46:37+00:00 |
|
9e3533eec643aebede8aaa7ea781c9b58f721dd8 |
Singapore's holiday data from 2017 to 2022. | Mulin/sg-holiday | [
"license:mit",
"region:us"
] | 2022-03-10T14:22:27+00:00 | {"license": "mit"} | 2022-03-14T10:44:11+00:00 |
153f48ba973d1b1f88cf97ec4d986bc13ffc9e63 |
<p align="center">
<br>
<img src="https://orca.dlnlp.ai/assets/orca_logo.png" width="55%"/>
<br>
</p>
<p align="center">
<a href="https://orca.dlnlp.ai/">
<img alt="Documentation" src="https://img.shields.io/website.svg?down_color=red&down_message=offline&up_message=online&url=https://orca.dlnlp.ai">
</a>
</p>
In this work, we introduce [**ORCA**](https://arxiv.org/abs/2212.10758), a publicly available benchmark for Arabic language understanding evaluation. ORCA is carefully constructed to cover diverse Arabic varieties and a wide range of challenging Arabic understanding tasks exploiting 60 different datasets across seven NLU task clusters. To measure current progress in Arabic NLU, we use ORCA to offer a comprehensive comparison between 18 multilingual and Arabic language models.
# ORCA Task Cluster
We arrange [**ORCA**](https://arxiv.org/abs/2212.10758) into seven NLU task clusters. These are (1) natural language inference, (2) question answering, (3) semantic textual similarity and paraphrase, (4) sentence classification, (5) structure prediction, (6) topic classification, and (7) word sense disambiguation.
### (1) Natural Language Inference (NLI)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|------|
|[ANS Stance](https://aclanthology.org/2020.fever-1.2/) |MSA | Macro F1 | [(Khouja, 2020)](https://aclanthology.org/2020.fever-1.2/) |
|[Baly Stance](https://aclanthology.org/N18-2004/) |MSA | Macro F1 | [(Baly et al., 2018)](https://aclanthology.org/N18-2004/) |
|[XNLI](https://github.com/facebookresearch/XNLI) |MSA | Macro F1 | [(Conneau et al., 2018)](https://github.com/facebookresearch/XNLI)|
### (2) Question Answering (QA)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|------|
|[Question Answering](https://aclanthology.org/2021.acl-long.551/) |MSA | Macro F1 | [(Abdul-Mageed et al., 2020a)](https://aclanthology.org/2021.acl-long.551/) |
### (3) Semantic Textual Similarity and Paraphrase (STSP)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Emotion Regression](https://aclanthology.org/S18-1001/) |MSA | Spearman Correlation| [(Mohammad et al., 2018)](https://aclanthology.org/S18-1001/) |
|[MQ2Q](https://aclanthology.org/2019.nsurl-1.1) |MSA | Macro F1 | [(Seelawi et al., 2019)](https://aclanthology.org/2019.nsurl-1.1) |
|[STS](https://aclanthology.org/S17-2001/) |MSA | Macro F1 | [(Cer et al., 2017)](https://aclanthology.org/S17-2001/) |
### (4) Sentence Classification (SC)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Abusive](https://aclanthology.org/W19-3512/) |DA | Macro F1 | [(Mulki et al., 2019)](https://aclanthology.org/W19-3512/) |
|[Adult](https://aclanthology.org/2021.wanlp-1.14) |DA | Macro F1 | [(Mubarak et al., 2021)](https://aclanthology.org/2021.wanlp-1.14) |
|[Age](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) |
|[ANS Claim](https://aclanthology.org/2020.fever-1.2/) |MSA | Macro F1 | [(Khouja, 2020)](https://aclanthology.org/2020.fever-1.2/) |
|[Dangerous](https://www.aclweb.org/anthology/2020.osact-1.6) |DA | Macro F1 | [(Alshehri et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.6)|
|Dialect Binary |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelali et al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). |
|Dialect Country |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelali et al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). |
|Dialect Region |DA | Macro F1 | [(Farha, 2020)](https://aclanthology.org/2020.osact-1.5/), [(Zaidan, 2014)](https://www.aclweb.org/anthology/J14-1006), [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/), [(Bouamor et al., 2019)](https://www.aclweb.org/anthology/W19-4622), [(Abdelali et al., 2020)](https://aclanthology.org/2021.wanlp-1.1), [(El-Haj, 2020)](https://aclanthology.org/2020.lrec-1.165/). |
|[Emotion](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) |
|[Gender](https://www.aclweb.org/anthology/2020.osact-1.3) |DA | Macro F1 | [(Abdul-Mageed et al., 2020b)]( https://aclanthology.org/2020.osact-1.3/) |
|[Hate Speech](https://www.aclweb.org/anthology/2020.osact-1.7) |DA | Macro F1 | [(Mubarak et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.7)|
|[Irony](https://dl.acm.org/doi/10.1145/3368567.3368585) |DA | Macro F1 | [(Ghanem et al., 2019)](https://dl.acm.org/doi/10.1145/3368567.3368585) |
|[Machine Generation](https://aclanthology.org/2020.wanlp-1.7/) |MSA | Macro F1 | [(Nagoudi et al., 2020)](https://aclanthology.org/2020.wanlp-1.7/) |
|[Offensive](https://aclanthology.org/2020.osact-1.8/) |DA | Macro F1 | [(Mubarak et al., 2020)](https://www.aclweb.org/anthology/2020.osact-1.7)|
|[Sarcasm](https://aclanthology.org/2020.osact-1.5/) |DA | Macro F1 | [(Farha and Magdy, 2020)](https://aclanthology.org/2020.osact-1.5/) |
|[Sentiment Analysis](https://aclanthology.org/2021.acl-long.551/) |DA | Macro F1 | [(Abdul-Mageed et al., 2020c)](https://aclanthology.org/2021.acl-long.551/) |
### (5) Structure Predictions (SP)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Aqmar NER](https://www.cs.cmu.edu/~ark/ArabicNER/) |MSA | Macro F1 | [(Mohit, 2012)](https://www.cs.cmu.edu/~ark/ArabicNER/) |
|[Arabic NER Corpus](http://www.dsic.upv.es/~prosso/resources/BenajibaRosso_IICAI07.pdf) |MSA | Macro F1 | [(Benajiba and Rosso, 2007)](http://www.dsic.upv.es/~prosso/resources/BenajibaRosso_IICAI07.pdf) |
|[Dialect Part Of Speech](https://aclanthology.org/L18-1015.pdf) |DA | Macro F1 | [(Darwish et al., 2018)](https://aclanthology.org/L18-1015.pdf) |
|[MSA Part Of Speech](https://arxiv.org/abs/2004.01401) |MSA | Macro F1 | [(Liang et al., 2020)](https://arxiv.org/abs/2004.01401) |
### (6) Topic Classification (TC)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Topic](https://aclanthology.org/2021.acl-long.551/) |MSA | Macro F1 | [(Abbas et al.,2011)](https://www.dline.info/fpaper/jdim/v9i5/1.pdf), [(Chouigui et al.,2017)](https://www.researchgate.net/publication/320871871_Poster_ANT_Corpus_An_Arabic_News_Text_Collection_for_Textual_Classification), [(Saad, 2010)](http://site.iugaza.edu.ps/wp-content/uploads/mksaad-OSAC-OpenSourceArabicCorpora-EECS10-rev9(1).pdf). |
### (7) Word Sense Disambiguation (WSD)
|**Task**| **Variation** | **Metric** | **Reference** |
|---------|--------|--------|-------|
|[Word Sense Disambiguation](https://www.mdpi.com/2076-3417/11/6/2567) |MSA | Macro F1 | [(El-Razzaz, 2021)](https://www.mdpi.com/2076-3417/11/6/2567) |
# How to Use ORCA
### Request Access ###
To obtain access to the ORCA benchmark on Hugging Face, follow these steps:
- Log in to your Hugging Face account
<img src="https://raw.githubusercontent.com/UBC-NLP/orca/main/orca_request1.png" width="70%"/>
- Request access
<img src="https://raw.githubusercontent.com/UBC-NLP/orca/main/orca_request2.png" width="70%"/>
### Install Requirements
```shell
pip install datasets transformers seqeval
```
### Log in with the Hugging Face CLI ###
You can get/manage your access tokens in your [settings](https://huggingface.co/docs/hub/security-tokens).
```shell
export HUGGINGFACE_TOKEN=""
huggingface-cli login --token $HUGGINGFACE_TOKEN
```
### Fine-tuning a model on ORCA tasks
We provide a Google Colab Notebook that includes instructions for fine-tuning any model on ORCA tasks. <a href="https://colab.research.google.com/github/UBC-NLP/orca/blob/main/Finetuning_ORCA.ipynb"><img alt="colab" src="https://colab.research.google.com/assets/colab-badge.svg"></a>
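As a rough sketch of what loading a single task could look like once access is granted (the configuration name `abusive` below is hypothetical — check the dataset viewer for the actual task names):
```python
from datasets import load_dataset

# "abusive" is a hypothetical task configuration name.
task = load_dataset("UBC-NLP/orca", "abusive", split="train")
print(task[0])
```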
### Submitting your results on ORCA test
We design a public leaderboard for scoring PLMs on ORCA. Our leaderboard is interactive and offers rich meta-data about the various datasets involved as well as the language models we evaluate.
You can evaluate your models using the **ORCA** leaderboard: **[https://orca.dlnlp.ai](https://orca.dlnlp.ai/)**
---
## Citation
If you use ORCA for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:
```
@inproceedings{elmadany-etal-2023-orca,
title = "{ORCA}: A Challenging Benchmark for {A}rabic Language Understanding",
author = "Elmadany, AbdelRahim and
Nagoudi, ElMoatez Billah and
Abdul-Mageed, Muhammad",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-acl.609",
pages = "9559--9586",
}
```
---
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](https://www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
| UBC-NLP/orca | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"language:ara",
"Arabic",
"NLU Benchmark",
"Natural Language Inference (NLI)",
"Question Answering (QA)",
"Semantic Textual Similarity and and Paraphrase (STSP)",
"Sentence Classification (SC)",
"Structure Predictions (SP)",
"Topic Classification (TC)",
"Word Sense Disambiguation (WSD)",
"arxiv:2212.10758",
"arxiv:2004.01401",
"region:us"
] | 2022-03-10T19:45:30+00:00 | {"language": ["ara"], "task_categories": ["text-classification", "token-classification", "question-answering"], "viewer": false, "tags": ["Arabic", "NLU Benchmark", "Natural Language Inference (NLI)", "Question Answering (QA)", "Semantic Textual Similarity and and Paraphrase (STSP)", "Sentence Classification (SC)", "Structure Predictions (SP)", "Topic Classification (TC)", "Word Sense Disambiguation (WSD)"], "extra_gated_fields": {"Name": "text", "Official Email (email of your organization)": "text", "Affilation": "text", "Country": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox", "I agree to cite the ORCA paper and all original papers": "checkbox"}} | 2023-11-22T17:56:13+00:00 |
f5ee87052fbba38c7e0a49a4dad24724ed97302f | Biomedical-TeMU/ProfNER_corpus_classification | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-10T20:28:10+00:00 | {"license": "cc-by-4.0"} | 2022-03-10T21:24:30+00:00 |
|
de9bf1404880f4b7225e1cc0e9268192e57fefca |
## Description
**Gold standard annotations for profession detection in Spanish COVID-19 tweets**
The entire corpus contains 10,000 annotated tweets. It has been split into training, validation, and test sets (60-20-20). The current version contains the training and development sets of the shared task with Gold Standard annotations, as well as the unannotated test and background sets.
For Named Entity Recognition (profession detection), annotations are distributed in two formats: Brat standoff and TSV. See the Brat webpage for more information about the Brat standoff format (https://brat.nlplab.org/standoff.html).
The TSV format follows the format employed in SMM4H 2019 Task 2:
tweet_id | begin | end | type | extraction
In addition, we provide a tokenized version of the dataset. It follows the BIO format (similar to CoNLL). The files were generated with the brat_to_conll.py script (included), which employs the es_core_news_sm-2.3.1 spaCy model for tokenization.
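As a minimal sketch (the file name below is illustrative), the BIO files can be read into sentences of (token, tag) pairs:
```python
def read_bio(path):
    """Read a BIO file: one token and its tag per line, blank line between sentences."""
    sentences, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:  # blank line marks a sentence boundary
                if current:
                    sentences.append(current)
                    current = []
            else:
                fields = line.split()
                current.append((fields[0], fields[-1]))  # token, BIO tag
    if current:
        sentences.append(current)
    return sentences

train_sentences = read_bio("train.bio")  # illustrative file name
```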
## Files of Named Entity Recognition subtask.
Content:
- One TSV file per corpus split (train and valid).
- brat: folder with annotations in Brat format. One sub-directory per corpus split (train and valid)
- BIO: folder with corpus in BIO tagging. One file per corpus split (train and valid)
- train-valid-txt-files: folder with training and validation text files. One text file per tweet. One sub-directory per corpus split (train and valid)
- train-valid-txt-files-english: folder with training and validation text files machine-translated to English.
- test-background-txt-files: folder with the test and background text files. You must make your predictions for these files and upload them to CodaLab. | Biomedical-TeMU/ProfNER_corpus_NER | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-10T21:34:00+00:00 | {"license": "cc-by-4.0"} | 2022-03-10T21:50:30+00:00 |
41ea0e39f062f9ca791fd5ec95c364a22150b56e |
# Dataset Card for FeedbackQA
[📄 Read](https://arxiv.org/pdf/2204.03025.pdf)<br>
[💾 Code](https://github.com/McGill-NLP/feedbackqa)<br>
[🔗 Webpage](https://mcgill-nlp.github.io/feedbackqa/)<br>
[💻 Demo](http://206.12.100.48:8080/)<br>
[🤗 Huggingface Dataset](https://huggingface.co/datasets/McGill-NLP/feedbackQA)<br>
[💬 Discussions](https://github.com/McGill-NLP/feedbackqa/discussions)
## Dataset Description
- **Homepage: https://mcgill-nlp.github.io/feedbackqa-data/**
- **Repository: https://github.com/McGill-NLP/feedbackqa-data/**
- **Paper:**
- **Leaderboard:**
- **Tasks: Question Answering**
### Dataset Summary
FeedbackQA is a retrieval-based QA dataset that contains interactive feedback from users.
It has two parts: the first part is a conventional RQA dataset,
while this repo contains the second part: feedback (ratings and natural language explanations) for QA pairs.
### Languages
English
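A minimal loading sketch with 🤗 Datasets (the split names follow the usual defaults and may differ):
```python
from datasets import load_dataset

feedback_qa = load_dataset("McGill-NLP/feedbackQA", split="train")
print(feedback_qa[0])
```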
## Dataset Creation
For each question-answer pair, we collected multiple feedback annotations, each of which consists of a rating
(selected from excellent, good, could be improved, and bad) and a natural language explanation
elaborating on the strengths and/or weaknesses of the answer.
#### Initial Data Collection and Normalization
We scraped Covid-19-related content from official websites.
### Annotations
#### Who are the annotators?
Crowd-workers
### Licensing Information
Apache 2.0
### Contributions
[McGill-NLP](https://github.com/McGill-NLP)
| McGill-NLP/feedbackQA | [
"license:apache-2.0",
"arxiv:2204.03025",
"region:us"
] | 2022-03-10T23:50:07+00:00 | {"license": "apache-2.0"} | 2023-06-14T16:27:23+00:00 |
393badffe34773d1536cfedfdc2abe14317d38e7 |
# The Sentence Splitter (SS) for Clinical Cases Written in Spanish
## Introduction
This repository contains the sentence splitting model trained using the SPACCC_SPLIT corpus (https://github.com/PlanTL-SANIDAD/SPACCC_SPLIT). The model was trained on 90% of the corpus (900 clinical cases) and tested against the remaining 10% (100 clinical cases). This model is a great resource to split sentences in biomedical documents, especially clinical cases written in Spanish. It obtains an F-Measure of 98.75%.
This model was created using the Apache OpenNLP machine learning toolkit (https://opennlp.apache.org/), with the release number 1.8.4, released in December 2017.
This repository contains the model, training set, testing set, Gold Standard, executable file, and the source code.
## Prerequisites
This software has been compiled with Java SE 1.8 and it should work with recent versions. You can download Java from the following website: https://www.java.com/en/download
The executable file already includes the Apache OpenNLP dependencies inside, so the download of this toolkit is not necessary. However, you may download the latest version from this website: https://opennlp.apache.org/download.html
The library file we have used to compile is "opennlp-tools-1.8.4.jar". The source code should be able to compile with the latest version of OpenNLP, "opennlp-tools-*RELEASE_NUMBER*.jar". In case there are compilation or execution errors, please let us know and we will make all the necessary updates.
## Directory structure
<pre>
exec/
An executable file that can be used to apply the sentence splitter to your documents.
You can find the notes about its execution below in section "Usage".
gold_standard/
The clinical cases used as gold standard to evaluate the model's performance.
model/
The sentence splitting model, "es-sentence-splitter-model-spaccc.bin", a binary file.
src/
The source code to create the model (CreateModelSS.java) and evaluate it (EvaluateModelSS.java).
The directory includes an example of how to use the model inside your code (SentenceSplitter.java).
File "abbreviations.dat" contains a list of abbreviations, essential to build the model.
test_set/
The clinical cases used as test set to evaluate the model's performance.
train_set/
The clinical cases used to build the model. We use a single file with all documents present in
directory "train_set_docs" concatenated.
train_set_docs/
The clinical cases used to build the model. For each record the sentences are already split.
</pre>
## Usage
The executable file *SentenceSplitter.jar* is the program you need to split the sentences of a document. The program takes two arguments: (1) the text file whose sentences should be split, and (2) the model file (*es-sentence-splitter-model-spaccc.bin*). It prints the split sentences to the terminal, one sentence per line.
From the `exec` folder, type the following command in your terminal:
<pre>
$ java -jar SentenceSplitter.jar INPUT_FILE MODEL_FILE
</pre>
## Examples
Assuming you have the executable file, the input file and the model file in the same directory:
<pre>
$ java -jar SentenceSplitter.jar file_with_sentences_not_splitted.txt es-sentence-splitter-model-spaccc.bin
</pre>
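If you prefer to call the splitter from Python, a thin wrapper around the documented command line is enough. This is a sketch; it assumes `java` is on your PATH and relies on the program printing one sentence per line, as described above.
<pre>
import subprocess

def split_sentences(input_file, model_file="es-sentence-splitter-model-spaccc.bin"):
    # Run the documented command: java -jar SentenceSplitter.jar INPUT_FILE MODEL_FILE
    result = subprocess.run(
        ["java", "-jar", "SentenceSplitter.jar", input_file, model_file],
        capture_output=True, text=True, check=True,
    )
    # The splitter prints one sentence per line.
    return [line for line in result.stdout.splitlines() if line.strip()]

sentences = split_sentences("file_with_sentences_not_splitted.txt")
</pre>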
## Model creation
To create this sentence splitting model, we used the following training parameters (class *TrainingParameters* in OpenNLP) to get the best performance:
- Number of iterations: 4000.
- Cutoff parameter: 3.
- Trainer type parameter: *EventTrainer.EVENT_VALUE*.
- Algorithm: Maximum Entropy (*ModelType.MAXENT.name()*).
Meanwhile, we used the following parameters for the sentence detector builder (class *SentenceDetectorFactory* in OpenNLP) to get the best performance:
- Subclass name: null value.
- Language code: *es* (for Spanish).
- Use token end: true.
- Abbreviation dictionary: file "abbreviations.dat" (included in the `src/` directory).
- End-of-sentence characters: ".", "?" and "!".
## Model evaluation
After tuning the model using different values for each parameter, we obtained the best performance with the values listed above.
| | Value |
| ----------------------------------------: | :------ |
| Number of sentences in the gold standard | 1445 |
| Number of sentences generated | 1447 |
| Number of sentences correctly split | 1428 |
| Number of sentences wrongly split | 12 |
| Number of sentences missed | 5 |
| **Precision** | **98.69%** |
| **Recall** | **98.82%** |
| **F-Measure** | **98.75%**|
Table 1: Evaluation statistics for the sentence splitting model.
## Contact
Ander Intxaurrondo (ander.intxaurrondo@bsc.es)
## License
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
Copyright (c) 2018 Secretaría de Estado para el Avance Digital (SEAD)
| Biomedical-TeMU/SPACCC_Sentence-Splitter | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-11T01:59:57+00:00 | {"license": "cc-by-4.0"} | 2022-03-11T02:09:00+00:00 |
b80bc1594c34c07cee7888a0c741ae41ac06b274 | # The Tokenizer for Clinical Cases Written in Spanish
## Introduction
This repository contains the tokenization model trained using the SPACCC_TOKEN corpus (https://github.com/PlanTL-SANIDAD/SPACCC_TOKEN). The model was trained on 90% of the corpus (900 clinical cases) and tested against the remaining 10% (100 clinical cases). This model is a great resource to tokenize biomedical documents, especially clinical cases written in Spanish.
This model was created using the Apache OpenNLP machine learning toolkit (https://opennlp.apache.org/), with the release number 1.8.4, released in December 2017.
This repository contains the model, training set, testing set, Gold Standard, executable file, and the source code.
## Prerequisites
This software has been compiled with Java SE 1.8 and it should work with recent versions. You can download Java from the following website: https://www.java.com/en/download
The executable file already includes the Apache OpenNLP dependencies inside, so the download of this toolkit is not necessary. However, you may download the latest version from this website: https://opennlp.apache.org/download.html
The library file we have used to compile is "opennlp-tools-1.8.4.jar". The source code should be able to compile with the latest version of OpenNLP, "opennlp-tools-*RELEASE_NUMBER*.jar". In case there are compilation or execution errors, please let us know and we will make all the necessary updates.
## Directory structure
<pre>
exec/
An executable file that can be used to apply the tokenization to your documents.
You can find the notes about its execution below in section "Usage".
gold_standard/
The clinical cases used as gold standard to evaluate the model's performance.
model/
The tokenization model, "es-tokenization-model-spaccc.bin", a binary file.
src/
The source code to create the model (CreateModelTok.java) and evaluate it (EvaluateModelTok.java).
The directory includes an example of how to use the model inside your code (Tokenization.java).
File "abbreviations.dat" contains a list of abbreviations, essential to build the model.
test_set/
The clinical cases used as test set to evaluate the model's performance.
train_set/
The clinical cases used to build the model. We use a single file with all documents present in
directory "train_set_docs" concatenated.
train_set_docs/
The clinical cases used to build the model. For each record the sentences are already split.
</pre>
## Usage
The executable file *Tokenizer.jar* is the program you need to tokenize the text in your document. For this program, two arguments are needed: (1) the text file to tokenize, and (2) the model file (*es-tokenization-model-spaccc.bin*). The program will display all tokens in the terminal, with one token per line.
From the `exec` folder, type the following command in your terminal:
<pre>
$ java -jar Tokenizer.jar INPUT_FILE MODEL_FILE
</pre>
## Examples
Assuming you have the executable file, the input file and the model file in the same directory:
<pre>
$ java -jar Tokenizer.jar file.txt es-tokenization-model-spaccc.bin
</pre>
## Model creation
To create this tokenization model, we used the following training parameters (class *TrainingParameters* in OpenNLP) to get the best performance:
- Number of iterations: 1500.
- Cutoff parameter: 4.
- Trainer type parameter: *EventTrainer.EVENT_VALUE*.
- Algorithm: Maximum Entropy (*ModelType.MAXENT.name()*).
Meanwhile, we used the following parameters for the tokenizer builder (class *TokenizerFactory* in OpenNLP) to get the best performance:
- Language code: *es* (for Spanish).
- Abbreviation dictionary: file "abbreviations.dat" (included in the `src/` directory).
- Use alphanumeric optimization: false
- Alphanumeric pattern: null
## Model evaluation
After tuning the model using different values for each parameter, we obtained the best performance with the values listed above.
| | Value |
| ----------------------------------------: | :------ |
| Number of tokens in the gold standard | 38247 |
| Number of tokens generated | 38227 |
| Number of words correctly tokenized | 38182 |
| Number of words wrongly tokenized | 35 |
| Number of tokens missed | 30 |
| **Precision** | **99.88%** |
| **Recall** | **99.83%** |
| **F-Measure** | **99.85%**|
Table 1: Evaluation statistics for the tokenization model.
## Contact
Ander Intxaurrondo (ander.intxaurrondo@bsc.es)
## License
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
Copyright (c) 2018 Secretaría de Estado para el Avance Digital (SEAD)
| Biomedical-TeMU/SPACCC_Tokenizer | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-11T02:14:02+00:00 | {"license": "cc-by-4.0"} | 2022-03-11T02:18:16+00:00 |
5ff2b006ea74699eccd393a5a0f3b99396d01e0c |
## Introduction
These are the train, development, test and background sets of the CodiEsp corpus. Train and development have gold standard annotations. The unannotated background and test sets are distributed together. All documents are released in the context of the CodiEsp track for CLEF ehealth 2020 (http://temu.bsc.es/codiesp/).
The CodiEsp corpus contains manually coded clinical cases. All documents are in Spanish, and CIE10 (the Spanish version of ICD10-CM and ICD10-PCS) is the coding terminology. The CodiEsp corpus has been randomly sampled into three subsets: the train set (500 clinical cases) and the development and test sets (250 clinical cases each). The test set is released together with the background set (2,751 clinical cases). CodiEsp participants must submit predictions for the test and background sets, but they will only be evaluated on the test set.
## Structure
Three folders: train, dev and test. Each one of them contains the files for the train, development and test corpora, respectively.
+ train and dev folders have:
+ 3 tab-separated files with the annotation information relevant for each of the 3 sub-tracks of CodiEsp.
+ A subfolder named text_files with the plain text files of the clinical cases.
+ A subfolder named text_files_en with the plain text files machine-translated to English. Due to the translation process, the text files are sentence-split.
+ The test folder has only text_files and text_files_en subfolders with the plain text files.
## Corpus format description
The CodiEsp corpus is distributed in plain text in UTF8 encoding, where each clinical case is stored as a single file whose name is the clinical case identifier. Annotations are released in a tab-separated file. Since the CodiEsp track has 3 sub-tracks, every set of documents (train and test) has 3 tab-separated files associated with it.
For the sub-tracks CodiEsp-D and CodiEsp-P, the file has the following fields:
articleID ICD10-code
Tab-separated files for the sub-track CodiEsp-X contain extra fields that provide the text-reference and its position:
articleID label ICD10-code text-reference reference-position
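For example, the CodiEsp-X annotations can be loaded with pandas. This is a sketch: the file name is illustrative, and the assumption that the files carry no header row should be verified against your copy.
```python
import pandas as pd

codiesp_x = pd.read_csv(
    "trainX.tsv",  # illustrative file name
    sep="\t",
    header=None,   # assumed: no header row
    names=["articleID", "label", "ICD10-code", "text-reference", "reference-position"],
)
print(codiesp_x.head())
```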
## Corpus summary statistics
The final collection of 1,000 clinical cases that make up the corpus contains a total of 16,504 sentences, with an average of 16.5 sentences per clinical case. It contains a total of 396,988 words, with an average of 396.2 words per clinical case.
For more information, visit the track webpage: http://temu.bsc.es/codiesp/ | Biomedical-TeMU/CodiEsp_corpus | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-11T02:19:32+00:00 | {"license": "cc-by-4.0"} | 2022-03-11T02:24:53+00:00 |
b80b8e1442d843ab1f02050ef297b13be4fb4a72 | Mulin/weather-data | [
"license:mit",
"region:us"
] | 2022-03-11T02:48:43+00:00 | {"license": "mit"} | 2022-03-11T06:41:03+00:00 |
|
7e37d9d97bbdc47fbd710913a75c355e878b343e | lstynerl/M1a1d | [
"license:apache-2.0",
"region:us"
] | 2022-03-11T03:32:56+00:00 | {"license": "apache-2.0"} | 2022-03-11T03:32:56+00:00 |
|
38ccb945600346d52580891d6d77f5c2bfaae069 | # PersianNER
Named-Entity Recognition in Persian Language
## ArmanPersoNERCorpus
This is the first manually-annotated Persian named-entity (NE) dataset (ISLRN 399-379-640-828-6). We are releasing it only for academic research use.
The dataset includes 250,015 tokens and 7,682 Persian sentences in total. It is available in 3 folds to be used in turn as training and test sets. Each file contains one token, along with its manually annotated named-entity tag, per line. Each sentence is separated with a newline. The NER tags are in IOB format.
According to the instructions provided to the annotators, NEs are categorized into six classes: person, organization (such as banks, ministries, embassies, teams, nationalities, networks and publishers), location (such as cities, villages, rivers, seas, gulfs, deserts and mountains), facility (such as schools, universities, research centers, airports, railways, bridges, roads, harbors, stations, hospitals, parks, zoos and cinemas), product (such as books, newspapers, TV shows, movies, airplanes, ships, cars, theories, laws, agreements and religions), and event (such as wars, earthquakes, national holidays, festivals and conferences); the remaining tokens are tagged as *other*.
| Khedesh/ArmanNER | [
"region:us"
] | 2022-03-11T08:13:29+00:00 | {} | 2022-03-11T10:42:30+00:00 |
04bb1414d14d63bffc026c6f12d047b7a3232930 | ## Dataset Description
- **Homepage:** https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
- **Paper:** https://arxiv.org/abs/1703.10593
### Dataset Summary
This dataset was obtained from the original CycleGAN Datasets directory available on [Berkeley's website](https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/).
For more details about the dataset you can refer to the [original CycleGAN publication](https://arxiv.org/abs/1703.10593).
### How to use
You can easily load the dataset with the following lines :
```python
from datasets import load_dataset
data_horses = load_dataset("gigant/horse2zebra", name="horse", split="train")
data_zebras = load_dataset("gigant/horse2zebra", name="zebra", split="train")
```
Two splits are available: `"train"` and `"test"`.
### Citation Information
```
@inproceedings{CycleGAN2017,
title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
year={2017}
}
``` | gigant/horse2zebra | [
"task_categories:image-to-image",
"license:cc",
"GAN",
"unpaired-image-to-image-translation",
"arxiv:1703.10593",
"region:us"
] | 2022-03-11T09:59:03+00:00 | {"license": "cc", "task_categories": ["image-to-image"], "task_ids": [], "pretty_name": "Horse2Zebra", "tags": ["GAN", "unpaired-image-to-image-translation"]} | 2022-10-24T16:37:53+00:00 |
f90b0fced2b6b7d1fb3fcdb04cb5b754eafab378 | # GEM Submission
Submission name: Macro
| GEM-submissions/ratishsp__macro__1646998904 | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-11T11:41:45+00:00 | {"benchmark": "gem", "type": "prediction", "submission_name": "Macro", "tags": ["evaluation", "benchmark"]} | 2022-03-11T11:41:47+00:00 |
b8e66595f3f7e20f5c2a6f69be3504d2e97d790b |
# Dataset Card for Zeel/common
| Zeel/common | [
"language:en",
"region:us"
] | 2022-03-12T00:18:08+00:00 | {"language": ["en"], "pretty_name": "common"} | 2022-10-25T09:22:40+00:00 |
ce7b8f1a30bfae5184e554a5bf44b76b9e8fc011 |
# CLUES: Few-Shot Learning Evaluation in Natural Language Understanding
This repo contains the data for the NeurIPS 2021 benchmark [Constrained Language Understanding Evaluation Standard (CLUES)](https://openreview.net/pdf?id=VhIIQBm00VI).
## Leaderboard
We maintain a [Leaderboard](https://github.com/microsoft/CLUES) allowing researchers to submit their results as entries.
### Submission Instructions
- Each submission must be submitted as a pull request modifying the markdown file underlying the leaderboard.
- The submission must attach an accompanying public paper and public source code for reproducing their results on our dataset.
- A submission can be toward any subset of tasks in our benchmark, or toward the aggregate leaderboard.
- For any task targeted by the submission, we require evaluation on (1) 10, 20, *and* 30 shots, and (2) all 5 splits of the corresponding dataset and a report of their mean and standard deviation.
- Each leaderboard will be sorted by the 30-shot mean S1 score (where S1 score is a variant of F1 score defined in our paper).
- The submission should not use data from the 4 other splits during few-shot finetuning of any 1 split, either as extra training set or as validation set for hyperparameter tuning.
- However, we allow external data, labeled or unlabeled, to be used for such purposes.
Each submission using external data must mark the corresponding columns "external labeled" and/or "external unlabeled".
Note, in this context, "external data" refers to data used *after pretraining* (e.g., for task-specific tuning); in particular, methods using existing pretrained models only, without extra data, should not mark either column. For obvious reasons, models cannot be trained on the original labeled datasets from where we sampled the few-shot CLUES data.
- In the table entry, the submission should include a method name and a citation, hyperlinking to their publicly released source code reproducing the results. See the last entry of the table below for an example.
### Abbreviations
- FT = (classic) finetuning
- PT = prompt based tuning
- ICL = in-context learning, in the style of GPT-3
- μ±σ = mean μ and standard deviation σ across our 5 splits. Aggregate standard deviation is calculated using the sum-of-variance formula from individual tasks' standard deviations.
### Benchmarking CLUES for Aggregate 30-shot Evaluation
| Shots (K=30) | external labeled | external unlabeled | Average ▼ | SST-2 | MNLI | CoNLL03 | WikiANN | SQuAD-v2 | ReCoRD |
|-----------------------------------------------------------|-------------|---------------|-----------|-----------|----------|----------|----------|----------|----------|
| **Human** | N | N | 81.4 | 83.7 | 69.4 | 87.4 | 82.6 | 73.5 | 91.9 |
| T5-Large-770M-FT | N | N | 43.1±6.7 | 52.3±2.9 | 36.8±3.8 | 51.2±0.1 | 62.4±0.6 | 43.7±2.7 | 12±3.8 |
| BERT-Large-336M-FT | N | N | 42.1±7.8 | 55.4±2.5 | 33.3±1.4 | 51.3±0 | 62.5±0.6 | 35.3±6.4 | 14.9±3.4 |
| BERT-Base-110M-FT | N | N | 41.5±9.2 | 53.6±5.5 | 35.4±3.2 | 51.3±0 | 62.8±0 | 32.6±5.8 | 13.1±3.3 |
| DeBERTa-Large-400M-FT | N | N | 40.1±17.8 | 47.7±9.0 | 26.7±11 | 48.2±2.9 | 58.3±6.2 | 38.7±7.4 | 21.1±3.6 |
| RoBERTa-Large-355M-FT | N | N | 40.0±10.6 | 53.2±5.6 | 34.0±1.1 | 44.7±2.6 | 48.4±6.7 | 43.5±4.4 | 16±2.8 |
| RoBERTa-Large-355M-PT | N | N | | 90.2±1.8 | 61.6±3.5 | | | | |
| DeBERTa-Large-400M-PT | N | N | | 88.4±3.3 | 62.9±3.1 | | | | |
| BERT-Large-336M-PT | N | N | | 82.7±4.1 | 45.3±2.0 | | | | |
| GPT3-175B-ICL | N | N | | 91.0±1.6 | 33.2±0.2 | | | | |
| BERT-Base-110M-PT | N | N | | 79.4±5.6 | 42.5±3.2 | | | | |
| [LiST (Wang et al.)](https://github.com/microsoft/LiST) | N | Y | | 91.3 ±0.7 | 67.9±3.0 | | | | |
| [Example (lastname et al.)](link2code) | Y/N | Y/N | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 |
### Individual Task Performance over Multiple Shots
#### SST-2
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|----------------------------------------|------------------|--------------------|-----------|-----------|----------|------|
| GPT-3 (175B) ICL | N | N | 85.9±3.7 | 92.0±0.7 | 91.0±1.6 | - |
| RoBERTa-Large PT | N | N | 88.8±3.9 | 89.0±1.1 | 90.2±1.8 | 93.8 |
| DeBERTa-Large PT | N | N | 83.4±5.3 | 87.8±3.5 | 88.4±3.3 | 91.9 |
| **Human** | N | N | 79.8 | 83 | 83.7 | - |
| BERT-Large PT | N | N | 63.2±11.3 | 78.2±9.9 | 82.7±4.1 | 91 |
| BERT-Base PT | N | N | 63.9±10.0 | 76.7±6.6 | 79.4±5.6 | 91.9 |
| BERT-Large FT | N | N | 46.3±5.5 | 55.5±3.4 | 55.4±2.5 | 99.1 |
| BERT-Base FT | N | N | 46.2±5.6 | 54.0±2.8 | 53.6±5.5 | 98.1 |
| RoBERTa-Large FT | N | N | 38.4±21.7 | 52.3±5.6 | 53.2±5.6 | 98.6 |
| T5-Large FT | N | N | 51.2±1.8 | 53.4±3.2 | 52.3±2.9 | 97.6 |
| DeBERTa-Large FT | N | N | 43.0±11.9 | 40.8±22.6 | 47.7±9.0 | 100 |
| [Example (lastname et al.)](link2code) | Y/N | Y/N | 0±0 | 0±0 | 0±0 | - |
#### MNLI
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|---------------------------------------------------------|------------------|--------------------|-----------|-----------|-----------|------|
| **Human** | N | Y | 78.1 | 78.6 | 69.4 | - |
| [LiST (wang et al.)](https://github.com/microsoft/LiST) | N | N | 60.5±8.3 | 67.2±4.5 | 67.9±3.0 | - |
| DeBERTa-Large PT | N | N | 44.5±8.2 | 60.7±5.3 | 62.9±3.1 | 88.1 |
| RoBERTa-Large PT | N | N | 57.7±3.6 | 58.6±2.9 | 61.6±3.5 | 87.1 |
| BERT-Large PT | N | N | 41.7±1.0 | 43.7±2.1 | 45.3±2.0 | 81.9 |
| BERT-Base PT | N | N | 40.4±1.8 | 42.1±4.4 | 42.5±3.2 | 81 |
| T5-Large FT | N | N | 39.8±3.3 | 37.9±4.3 | 36.8±3.8 | 85.9 |
| BERT-Base FT | N | N | 37.0±5.2 | 35.2±2.7 | 35.4±3.2 | 81.6 |
| RoBERTa-Large FT | N | N | 34.3±2.8 | 33.4±0.9 | 34.0±1.1 | 85.5 |
| BERT-Large FT | N | N | 33.7±0.4 | 28.2±14.8 | 33.3±1.4 | 80.9 |
| GPT-3 (175B) ICL | N | N | 33.5±0.7 | 33.1±0.3 | 33.2±0.2 | - |
| DeBERTa-Large FT | N | N | 27.4±14.1 | 33.6±2.5 | 26.7±11.0 | 87.6 |
#### CoNLL03
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 87.7 | 89.7 | 87.4 | - |
| BERT-Base FT | N | N | 51.3±0 | 51.3±0 | 51.3±0 | - |
| BERT-Large FT | N | N | 51.3±0 | 51.3±0 | 51.3±0 | 89.3 |
| T5-Large FT | N | N | 46.3±6.9 | 50.0±0.7 | 51.2±0.1 | 92.2 |
| DeBERTa-Large FT | N | N | 50.1±1.2 | 47.8±2.5 | 48.2±2.9 | 93.6 |
| RoBERTa-Large FT | N | N | 50.8±0.5 | 44.6±5.1 | 44.7±2.6 | 93.2 |
#### WikiANN
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 81.4 | 83.5 | 82.6 | - |
| BERT-Base FT | N | N | 62.8±0 | 62.8±0 | 62.8±0 | 88.8 |
| BERT-Large FT | N | N | 62.8±0 | 62.6±0.4 | 62.5±0.6 | 91 |
| T5-Large FT | N | N | 61.7±0.7 | 62.1±0.2 | 62.4±0.6 | 87.4 |
| DeBERTa-Large FT | N | N | 58.5±3.3 | 57.9±5.8 | 58.3±6.2 | 91.1 |
| RoBERTa-Large FT | N | N | 58.5±8.8 | 56.9±3.4 | 48.4±6.7 | 91.2 |
#### SQuAD v2
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|-----------|----------|------|
| **Human** | N | N | 71.9 | 76.4 | 73.5 | - |
| T5-Large FT | N | N | 43.6±3.5 | 28.7±13.0 | 43.7±2.7 | 87.2 |
| RoBERTa-Large FT | N | N | 38.1±7.2 | 40.1±6.4 | 43.5±4.4 | 89.4 |
| DeBERTa-Large FT | N | N | 41.4±7.3 | 44.4±4.5 | 38.7±7.4 | 90 |
| BERT-Large FT | N | N | 42.3±5.6 | 35.8±9.7 | 35.3±6.4 | 81.8 |
| BERT-Base FT | N | N | 46.0±2.4 | 34.9±9.0 | 32.6±5.8 | 76.3 |
#### ReCoRD
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 94.1 | 94.2 | 91.9 | - |
| DeBERTa-Large FT | N | N | 15.7±5.0 | 16.8±5.7 | 21.1±3.6 | 80.7 |
| RoBERTa-Large FT | N | N | 12.0±1.9 | 9.9±6.2 | 16.0±2.8 | 80.3 |
| BERT-Large FT | N | N | 9.9±5.2 | 11.8±4.9 | 14.9±3.4 | 66 |
| BERT-Base FT | N | N | 10.3±1.8 | 11.7±2.4 | 13.1±3.3 | 54.4 |
| T5-Large FT | N | N | 11.9±2.7 | 11.7±1.5 | 12.0±3.8 | 77.3 |
## How do I cite CLUES?
```
@article{cluesteam2021,
title={Few-Shot Learning Evaluation in Natural Language Understanding},
author={Mukherjee, Subhabrata and Liu, Xiaodong and Zheng, Guoqing and Hosseini, Saghar and Cheng, Hao and Yang, Greg and Meek, Christopher and Awadallah, Ahmed Hassan and Gao, Jianfeng},
booktitle = {NeurIPS 2021},
year = {2021},
month = {December},
url = {https://www.microsoft.com/en-us/research/publication/clues-few-shot-learning-evaluation-in-natural-language-understanding/},
}
```
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies.
| microsoft/CLUES | [
"license:mit",
"region:us"
] | 2022-03-12T01:26:23+00:00 | {"license": "mit"} | 2022-03-25T22:05:58+00:00 |
3c70f2fe25f7c73d2460f77a4c3f8b1aa8a6e819 |
# Review Hotel in Indonesia
### Dataset Summary
Data about reviews of hotels in Indonesia
### Languages
Indonesia
## Dataset Structure
### Data Fields
- review_id : unique identification code of each review
- review_text : the main review of text
- category : label for each review, positive (1) or negative (0)
| rakkaalhazimi/hotel-review | [
"license:gpl-3.0",
"region:us"
] | 2022-03-12T05:52:57+00:00 | {"license": "gpl-3.0"} | 2022-03-12T07:23:47+00:00 |
67f9dbf9e17ada0dcdc47e05ad9b37ed01f8e82f |
## How to use the data sets
This dataset contains more than 16,000 unique pairs of protein sequences and ligand SMILES with experimentally determined
binding affinities and protein-ligand contacts (ligand atom/SMILES token vs. Calpha within 5 Angstrom). These
are represented by a list that contains the positions of non-zero elements of the flattened, sparse
sequence x smiles tokens (2048x512) matrix. The first and last entries in both dimensions
are padded to zero; they correspond to [CLS] and [SEP].
It can be used for fine-tuning a language model.
The data comes solely from PDBbind-cn.
Contacts are calculated at four cut-off distances: 5, 8, 11 and 15 Å.
### Use the already preprocessed data
Load a test/train split using
```
from datasets import load_dataset
train = load_dataset("jglaser/protein_ligand_contacts",split='train[:90%]')
validation = load_dataset("jglaser/protein_ligand_contacts",split='train[90%:]')
```
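Each example stores its contact map as the flat indices of the non-zero entries of the 2048x512 (sequence x SMILES tokens) matrix, so a dense map can be rebuilt with NumPy. The field name `contacts_5A` below is an assumption — inspect `train.column_names` for the actual keys.
```
import numpy as np

example = train[0]
flat_idx = example["contacts_5A"]  # assumed field name; check train.column_names
contact_map = np.zeros(2048 * 512, dtype=np.int8)
contact_map[flat_idx] = 1
contact_map = contact_map.reshape(2048, 512)  # rows: residues, cols: SMILES tokens
```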
### Pre-process yourself
To perform the preprocessing manually, download the data sets from PDBbind-cn.
Register for an account at <https://www.pdbbind.org.cn/>, confirm the validation
email, then log in and download
- the Index files (1)
- the general protein-ligand complexes (2)
- the refined protein-ligand complexes (3)
Extract those files in `pdbbind/data`
Run the script `pdbbind.py` in a compute job on an MPI-enabled cluster
(e.g., `mpirun -n 64 pdbbind.py`).
Perform the steps in the notebook `pdbbind.ipynb`
| jglaser/protein_ligand_contacts | [
"molecules",
"chemistry",
"SMILES",
"region:us"
] | 2022-03-12T07:09:53+00:00 | {"tags": ["molecules", "chemistry", "SMILES"]} | 2022-03-15T21:17:32+00:00 |
eeaa09638c5722e13fffd2daeaba4c2bec824d41 | # Dataset Card for Genecorpus-30M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Species](#species)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
<!---
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
--->
## Dataset Description
<!--- **Paper:**
--->
- **Point of Contact:** christina.theodoris@gladstone.ucsf.edu
### Dataset Summary
We assembled a large-scale pretraining corpus, Genecorpus-30M, comprised of ~30 million human single cell transcriptomes from a broad range of tissues from publicly available data. This corpus was used for pretraining [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.
See [our manuscript](https://rdcu.be/ddrx0) for details.
### Supported Tasks
This corpus was used for pretraining [Geneformer](https://rdcu.be/ddrx0) and is compatible with pretraining or fine-tuning Geneformer or similar models.
### Species
Homo sapiens
## Dataset Structure
### Data Instances
Genecorpus-30M is provided as tokenized data in the Huggingface Datasets structure, which is based on the Apache Arrow format. Each example within the dataset is composed of the rank value encoding for a single cell within the corpus. Rank value encodings provide a nonparametric representation of each single cell’s transcriptome, ranking genes by their expression within that cell normalized by their expression across the entire Genecorpus-30M. This method takes advantage of the many observations of each gene’s expression across Genecorpus-30M to prioritize genes that distinguish cell state. Specifically, this method will deprioritize ubiquitously highly-expressed housekeeping genes by normalizing them to a lower rank. Conversely, genes such as transcription factors that may be lowly expressed when they are expressed but highly distinguish cell state will move to a higher rank within the encoding. Furthermore, this rank-based approach may be more robust against technical artifacts that may systematically bias the absolute transcript counts value while the overall relative ranking of genes within each cell remains more stable.
To accomplish this, we first calculated the nonzero median value of expression of each detected gene across all cells from the entire Genecorpus-30M. We aggregated the transcript count distribution for each gene, normalizing the gene transcript counts in each cell by the total transcript count of that cell to account for varying sequencing depth. We then normalized the genes in each single cell transcriptome by that gene’s nonzero median value of expression across Genecorpus-30M and ordered the genes by the rank of their normalized expression in that specific cell. Of note, we opted to use the nonzero median value of expression rather than include zeros in the distribution so as not to weight the value by tissue representation within Genecorpus-30M, assuming that a representative range of transcript values would be observed within the cells in which each gene was detected.
The rank value encodings for each single cell transcriptome were then tokenized based on a total vocabulary of 25,424 protein-coding or miRNA genes detected within Genecorpus-30M. The token dictionary mapping each token ID to special tokens (pad and mask) or Ensembl IDs for each gene is included within the repository as a pickle file (token_dictionary.pkl).
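For illustration only, a minimal sketch of this encoding step is shown below; the per-cell `gene_ids`/`counts` arrays and the corpus-wide `nonzero_medians` mapping are assumed inputs, and this is not the exact code used to build the corpus:
```python
import pickle
import numpy as np

# Illustrative sketch only: names and inputs are assumptions, not the exact
# pipeline used to build Genecorpus-30M.
with open("token_dictionary.pkl", "rb") as f:
    gene_token_dict = pickle.load(f)  # maps Ensembl IDs (plus pad/mask tokens) to token IDs

def rank_value_encode(gene_ids, counts, nonzero_medians):
    """Rank one cell's genes by expression normalized to corpus-wide nonzero medians."""
    counts = np.asarray(counts, dtype=float)
    counts = counts / counts.sum()  # correct for sequencing depth within the cell
    normed = counts / np.array([nonzero_medians[g] for g in gene_ids])
    order = np.argsort(-normed)  # highest normalized expression ranked first
    return [gene_token_dict[gene_ids[i]] for i in order]
```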
### Data Fields
- `input_ids`: rank value encoding for an example cell
- `lengths`: length of rank value encoding for that example cell
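As a usage sketch (assuming a local copy of the tokenized corpus; the directory name below is a placeholder), these fields can be inspected with the Hugging Face `datasets` library:
```python
from datasets import load_from_disk

# Sketch: inspect one example cell from a local copy of the corpus
# (the directory name is a placeholder).
ds = load_from_disk("genecorpus_30M_2048.dataset")
cell = ds[0]
print(cell["lengths"])         # length of this cell's rank value encoding
print(cell["input_ids"][:10])  # highest-ranked gene token IDs for this cell
```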
### Data Splits
The dataset does not contain any predefined splits.
## Dataset Creation
### Curation Rationale
Mapping the gene regulatory networks that drive disease progression enables screening for molecules that correct the network by normalizing core regulatory elements, rather than targeting peripheral downstream effectors that may not be disease modifying. However, mapping the gene network architecture requires large amounts of transcriptomic data to learn the connections between genes, which impedes network-correcting drug discovery in settings with limited data, including rare diseases and diseases affecting clinically inaccessible tissues. Although data remains limited in these settings, recent advances in sequencing technologies have driven a rapid expansion in the amount of transcriptomic data available from human tissues more broadly. Furthermore, single cell technologies have facilitated the observation of transcriptomic states without averaging genes’ expression across multiple cells, potentially providing more precise data for inference of network interactions, especially in diseases driven by dysregulation of multiple cell types. Recently, the concept of transfer learning has revolutionized fields such as natural language understanding and computer vision by leveraging deep learning models pretrained on large-scale general datasets that can then be fine-tuned towards a vast array of downstream tasks with limited task-specific data that would be insufficient to yield meaningful predictions when used in isolation. We therefore assembled Genecorpus-30M to allow the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a pretrained transformer model that enables context-aware predictions in settings with limited data in network biology.
### Source Data
#### Initial Data Collection and Normalization
Source data included 29.9 million (29,900,531) human single cell transcriptomes from a broad range of tissues from 561 publicly available datasets from original studies cited in the Methods of Theodoris et al., Nature 2023. Datasets were filtered to retain cells with total read counts within three standard deviations of the mean within that dataset and mitochondrial reads within three standard deviations of the mean within that dataset. Ensembl-annotated protein-coding and miRNA genes were used for downstream analysis. Cells with fewer than seven detected Ensembl-annotated protein-coding or miRNA genes were excluded, as the 15% masking used for the pretraining learning objective would not reliably mask a gene in cells with fewer detected genes. Ultimately, 27.4 million (27,406,217) cells passed the defined quality filters. Cells were then represented as rank value encodings as discussed above in [Data Instances](#data-instances).
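A rough sketch of these quality filters (assuming per-cell arrays of total counts, mitochondrial read counts, and numbers of detected genes; not the original processing code, which applied the filters within each source dataset) could look like:
```python
import numpy as np

def qc_mask(total_counts, mito_counts, n_genes_detected):
    """Boolean mask of cells passing the quality filters described above."""
    total_counts = np.asarray(total_counts, dtype=float)
    mito_counts = np.asarray(mito_counts, dtype=float)
    ok_total = np.abs(total_counts - total_counts.mean()) <= 3 * total_counts.std()
    ok_mito = np.abs(mito_counts - mito_counts.mean()) <= 3 * mito_counts.std()
    ok_genes = np.asarray(n_genes_detected) >= 7  # so 15% masking reliably hits a gene
    return ok_total & ok_mito & ok_genes
```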
#### Who are the source data producers?
Publicly available datasets containing raw counts were collected from National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO), NCBI Sequence Read Archive (SRA), Human Cell Atlas, European Molecular Biology Laboratory-European Bioinformatics Institute (EMBL-EBI) Single Cell Expression Atlas, Broad Institute Single Cell Portal, Brotman Baty Institute (BBI)-Allen Single Cell Atlases, Tumor Immune Single-cell Hub (TISCH) (excluding malignant cells), Panglao Database, 10x Genomics, University of California, Santa Cruz Cell Browser, European Genome-phenome Archive, Synapse, Riken, Zenodo, National Institutes of Health (NIH) Figshare Archive, NCBI dbGap, Refine.bio, China National GeneBank Sequence Archive, Mendeley Data, and individual communication with authors of the original studies as cited in the Methods of Theodoris et al., Nature 2023.
### Annotations
#### Annotation process
Genecorpus-30M does not contain annotations.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
There is no personal or sensitive information included in the dataset. The dataset is composed of rank value encodings, so there are no traceable sequencing reads included.
## Considerations for Using the Data
### Social Impact of Dataset
Genecorpus-30M enabled the large-scale pretraining of [Geneformer](https://huggingface.co/ctheodoris/Geneformer), a foundation model that enables context-aware predictions in settings with limited data in network biology. Within our publication, we demonstrated that during pretraining, Geneformer gained a fundamental understanding of network dynamics, encoding network hierarchy in the model’s attention weights in a completely self-supervised manner. Fine-tuning Geneformer towards a diverse panel of downstream tasks relevant to chromatin and network dynamics using limited task-specific data demonstrated that Geneformer consistently boosted predictive accuracy. Applied to disease modeling with limited patient data, Geneformer identified candidate therapeutic targets for cardiomyopathy. Overall, Geneformer represents a pretrained foundation model from which fine-tuning towards a broad range of downstream applications can be pursued to accelerate discovery of key network regulators and candidate therapeutic targets.
### Discussion of Biases
We excluded cells with high mutational burdens (e.g. malignant cells and immortalized cell lines) that could lead to substantial network rewiring without companion genome sequencing to facilitate interpretation. We only included droplet-based sequencing platforms to assure expression value unit comparability. Although we assembled the dataset to represent as diverse a set of human tissues and cell types as possible, particular tissues and cell types are not represented due to unavailability of public data at the time of dataset assembly. In our manuscript, we demonstrated that pretraining with larger and more diverse corpora consistently improved Geneformer’s predictive power, consistent with observations that large-scale pretraining allows training of deeper models that ultimately have greater predictive potential in fields including NLU, computer vision, and mathematical problem-solving. Additionally, exposure to hundreds of experimental datasets during pretraining also appeared to promote robustness to batch-dependent technical artifacts and individual variability that commonly impact single cell analyses in biology. These findings suggest that as the amount of publicly available transcriptomic data continues to expand, future models pretrained on even larger-scale corpora may open opportunities to achieve meaningful predictions in even more elusive tasks with increasingly limited task-specific data.
### Other Known Limitations
Genecorpus-30M was intended to be used for self-supervised pretraining. To achieve the best possible predictions in downstream tasks, Geneformer should be fine-tuned with labeled datasets relevant to the task at hand.
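As an illustrative sketch only (assuming Geneformer loads as a standard BERT classifier through `transformers`, with a toy placeholder standing in for a real labeled dataset of rank value encodings), fine-tuning might look like:
```python
from datasets import Dataset
from transformers import BertForSequenceClassification, Trainer, TrainingArguments

# Toy placeholder: a real labeled dataset of rank value encodings is required.
labeled_ds = Dataset.from_dict({"input_ids": [[5, 9, 2], [7, 1, 4]], "label": [0, 1]})

model = BertForSequenceClassification.from_pretrained(
    "ctheodoris/Geneformer", num_labels=2  # num_labels depends on the task
)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="geneformer_ft", num_train_epochs=1),
    train_dataset=labeled_ds,
)
trainer.train()
```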
## Additional Information
### Dataset Curators
Christina Theodoris, MD, PhD
### Citation Information
Theodoris CV*, Xiao L, Chopra A, Chaffin MD, Al Sayed ZR, Hill MC, Mantineo H, Brydon EM, Zeng Z, Liu XS, Ellinor PT*. Transfer learning enables predictions in network biology. Nature. 2023 May 31; Epub ahead of print.
(*co-corresponding authors)
<!--- ### Licensing Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
---> | ctheodoris/Genecorpus-30M | [
"license:apache-2.0",
"region:us"
] | 2022-03-12T21:21:46+00:00 | {"license": "apache-2.0"} | 2023-11-11T06:42:26+00:00 |
9d24e08b068f24f80d9b3679e3806fe1c1be8fb3 | # Catalonian independence tweet dataset
This dataset is a port of the official [`catalonia_independence` dataset](https://huggingface.co/datasets/catalonia_independence) on the Hub. It has just the Catalan language version. | SetFit/catalonia_independence_ca | [
"region:us"
] | 2022-03-13T02:43:15+00:00 | {} | 2022-03-13T09:10:29+00:00 |
4d0ae2a3df2769cd4eff981ae8184b9fd72b0798 | # Catalonian independence tweet dataset
This dataset is a port of the official [`catalonia_independence` dataset](https://huggingface.co/datasets/catalonia_independence) on the Hub. It has just the Spanish language version. | SetFit/catalonia_independence_es | [
"region:us"
] | 2022-03-13T02:44:02+00:00 | {} | 2022-03-13T09:11:31+00:00 |
7fa32cf76b45dceb224903152c34dfa13718dfb2 | # XGLUE NC
This dataset is a port of the official [`xglue` dataset](https://huggingface.co/datasets/xglue) on the Hub. It has just the news category classification section. It has been reduced to just the 3 columns (plus a text label) that are relevant to the SetFit task. The validation and test splits are available in English, Spanish, French, Russian, and German. | SetFit/xglue_nc | [
"region:us"
] | 2022-03-13T02:44:23+00:00 | {} | 2022-03-14T03:27:58+00:00 |
bb25d49f17c86f7affb193c18e0511afcd51b933 | # Amazon reviews multi (German)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It has just the German language version. It has been reduced to just the 3 columns (plus a 4th, `label_text`) that are relevant to the SetFit task. | SetFit/amazon_reviews_multi_de | [
"region:us"
] | 2022-03-13T02:45:18+00:00 | {} | 2022-03-23T15:34:53+00:00 |
16015418b488c9186fce74b058877ea939ca934d | # Amazon reviews multi (Spanish)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It has just the Spanish language version. It has been reduced to just the 3 columns (plus a 4th, `label_text`) that are relevant to the SetFit task. | SetFit/amazon_reviews_multi_es | [
"region:us"
] | 2022-03-13T02:45:47+00:00 | {} | 2022-03-23T15:43:09+00:00 |
77676678b2e9e03265aae02823ba2f77b531d11a | # Amazon reviews multi (Japanese)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It has just the Japanese language version. It has been reduced to just the 3 columns (plus a 4th, `label_text`) that are relevant to the SetFit task. | SetFit/amazon_reviews_multi_ja | [
"region:us"
] | 2022-03-13T02:46:28+00:00 | {} | 2022-03-23T15:40:06+00:00 |
184ac90d5511a7f6801cba99688892f440ece660 | # Amazon reviews multi (Chinese)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It has just the Chinese language version. It has been reduced to just the 3 columns (plus a 4th, `label_text`) that are relevant to the SetFit task. | SetFit/amazon_reviews_multi_zh | [
"region:us"
] | 2022-03-13T02:46:40+00:00 | {} | 2022-03-23T15:30:49+00:00 |
3a43b31171a667fb0bb7a298e143fd022266f78b | # Amazon reviews multi (French)
This dataset is a port of the official [`amazon_reviews_multi` dataset](https://huggingface.co/datasets/amazon_reviews_multi) on the Hub. It has just the French language version. It has been reduced to just the 3 columns (plus a 4th, `label_text`) that are relevant to the SetFit task. | SetFit/amazon_reviews_multi_fr | [
"region:us"
] | 2022-03-13T02:48:20+00:00 | {} | 2022-03-23T15:45:44+00:00 |
7c9a79666d13e6d27ee74279fccdca11decbfb5d | # Toy dataset
This is a small portion of the full dataset, used for testing and formatting purposes.
| multiIR/toy_data | [
"region:us"
] | 2022-03-13T04:08:34+00:00 | {} | 2022-03-14T10:33:27+00:00 |
f9dd0d78228c6840ae9d97ffb7b8d6dfbbbc8634 |
The `post-data-by-subreddit.tar` file contains 5000 gzipped json files - one for each of the top 5000 subreddits (as roughly measured by subscriber count and comment activity). Each of those json files (e.g. `askreddit.json`) contains an array of the data for the top 1000 posts of all time.
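For example, the archive can be read in place without extracting it fully (a sketch; it assumes each tar member is a gzip-compressed JSON array of post objects):
```python
import gzip
import json
import tarfile

# Sketch: stream each subreddit's posts out of the tar, assuming every member
# is a gzip-compressed JSON array of post objects.
with tarfile.open("post-data-by-subreddit.tar") as tar:
    for member in tar.getmembers():
        raw = tar.extractfile(member).read()
        posts = json.loads(gzip.decompress(raw))
        print(member.name, len(posts))
```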
Notes:
* I stopped crawling a subreddit's top-posts list if I reached a batch that had a post with a score less than 5, so some subreddits won't have the full 1000 posts.
* No post comments are included; only the posts themselves.
* See the example file `askreddit.json` in this repo if you want to see what you're getting before downloading all the data.
* The included subreddits are listed in `top-5k-subreddits.json`.
* NSFW subreddits have been included in the crawl, so you might have to filter them out depending on your use case.
* The Deno scraping/crawling script is included as `crawl.js`, and can be started with `deno run --allow-net --allow-read=. --allow-write=. crawl.js` once you've [installed Deno](https://deno.land/manual/getting_started/installation) and have downloaded `top-5k-subreddits.json` into the same folder as `crawl.js`. | rocca/top-reddit-posts | [
"license:mit",
"region:us"
] | 2022-03-13T05:06:55+00:00 | {"license": "mit"} | 2022-03-23T05:16:33+00:00 |