| Column | Type | Range / values |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | length 58 to 61 |
| html_url | string | length 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | length 1 to 290 |
| state | string | 2 distinct values |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | length 3 to 26 |
| labels | list | length 0 to 4 |
| body | string | length 0 to 228k |
| is_pull_request | bool | 2 classes |
756,349,001
https://api.github.com/repos/huggingface/datasets/issues/1060
https://github.com/huggingface/datasets/pull/1060
1,060
Fix squad V2 metric script
closed
2
2020-12-03T16:23:32
2020-12-22T15:02:20
2020-12-22T15:02:19
sgugger
[]
The current squad v2 metric doesn't work with the squad (v1 or v2) datasets. The script is copied from `squad_evaluate` in transformers that requires the labels (with multiple answers) to be like this: ``` references = [{'id': 'a', 'answers': [ {'text': 'Denver Broncos', 'answer_start': 177}, {'text': 'Denver Broncos', 'answer_start': 177} ]}] ``` while the dataset had references like this: ``` references = [{'id': 'a', 'answers': {'text': ['Denver Broncos' 'Denver Broncos'], 'answer_start': [177, 177]} }] ``` Using one or the other format fails with the current squad v2 metric: ``` from datasets import load_metric metric = load_metric("squad_v2") predictions = [{'id': 'a', 'prediction_text': 'Denver Broncos', 'no_answer_probability': 0.0}] references = [{'id': 'a', 'answers': [ {'text': 'Denver Broncos', 'answer_start': 177}, {'text': 'Denver Broncos', 'answer_start': 177} ]}] metric.compute(predictions=predictions, references=references) ``` fails as well as ``` from datasets import load_metric metric = load_metric("squad_v2") predictions = [{'id': 'a', 'prediction_text': 'Denver Broncos', 'no_answer_probability': 0.0}] references = [{'id': 'a', 'answers': {'text': ['Denver Broncos' 'Denver Broncos'], 'answer_start': [177, 177]} }] metric.compute(predictions=predictions, references=references) ``` This is because arrow reformats the references behind the scene. With this PR (tested locally), both the snippets up there work and return proper results.
true
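As a hedged illustration of the format mismatch described in #1060 above, the sketch below converts SQuAD-style references (a list of answer dicts) into the columnar answers layout that the `datasets` SQuAD datasets use. The helper name is hypothetical and not part of the library.

```python
# Hypothetical helper (not part of `datasets`): convert list-of-dict answers into the
# columnar {'text': [...], 'answer_start': [...]} layout described in the PR above.
def to_columnar_answers(references):
    converted = []
    for ref in references:
        answers = ref["answers"]
        converted.append({
            "id": ref["id"],
            "answers": {
                "text": [a["text"] for a in answers],
                "answer_start": [a["answer_start"] for a in answers],
            },
        })
    return converted

references = [{"id": "a", "answers": [
    {"text": "Denver Broncos", "answer_start": 177},
    {"text": "Denver Broncos", "answer_start": 177},
]}]
print(to_columnar_answers(references))
# [{'id': 'a', 'answers': {'text': ['Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177]}}]
```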
756,348,623
https://api.github.com/repos/huggingface/datasets/issues/1059
https://github.com/huggingface/datasets/pull/1059
1,059
Add TLC
closed
3
2020-12-03T16:23:06
2020-12-04T11:15:33
2020-12-04T11:15:33
chameleonTK
[]
Added TLC dataset
true
756,332,704
https://api.github.com/repos/huggingface/datasets/issues/1058
https://github.com/huggingface/datasets/pull/1058
1,058
added paws-x dataset
closed
0
2020-12-03T16:06:01
2020-12-04T13:46:05
2020-12-04T13:46:05
bhavitvyamalik
[]
Added paws-x dataset. Updating README and tags in the dataset card in a while
true
756,331,419
https://api.github.com/repos/huggingface/datasets/issues/1057
https://github.com/huggingface/datasets/pull/1057
1,057
Adding TamilMixSentiment
closed
1
2020-12-03T16:04:25
2020-12-04T10:09:34
2020-12-04T10:09:12
jamespaultg
[]
true
756,309,828
https://api.github.com/repos/huggingface/datasets/issues/1056
https://github.com/huggingface/datasets/pull/1056
1,056
Add deal_or_no_dialog
closed
0
2020-12-03T15:38:07
2020-12-03T18:13:45
2020-12-03T18:13:45
moussaKam
[]
Add deal_or_no_dialog Dataset github: https://github.com/facebookresearch/end-to-end-negotiator Paper: [Deal or No Deal? End-to-End Learning for Negotiation Dialogues](https://arxiv.org/abs/1706.05125)
true
756,298,372
https://api.github.com/repos/huggingface/datasets/issues/1055
https://github.com/huggingface/datasets/pull/1055
1,055
Add hebrew-sentiment
closed
4
2020-12-03T15:24:31
2022-02-21T15:26:05
2020-12-04T11:24:16
elronbandel
[]
hebrew-sentiment dataset is ready! (including tests, tags etc)
true
756,265,688
https://api.github.com/repos/huggingface/datasets/issues/1054
https://github.com/huggingface/datasets/pull/1054
1,054
Add dataset - SemEval 2014 - Task 1
closed
1
2020-12-03T14:52:59
2020-12-04T00:52:44
2020-12-04T00:52:44
ashmeet13
[]
Adding the dataset of SemEval 2014 Task 1 Found the dataset under the shared Google Sheet > Recurring Task Datasets Task Homepage - https://alt.qcri.org/semeval2014/task1 Thank you!
true
756,176,061
https://api.github.com/repos/huggingface/datasets/issues/1053
https://github.com/huggingface/datasets/pull/1053
1,053
Fix dataset URL and file names, and add column name in "Social Bias Frames" dataset
closed
1
2020-12-03T13:03:05
2020-12-03T13:42:26
2020-12-03T13:42:26
otakumesi
[]
# Why I did When I use the "social_bias_frames" dataset in this library, I got 404 errors. So, I fixed this error and some other problems I ran into while using the dataset. # What I did * Modify the dataset URL * Modify the dataset file names * Add a "dataSource" column Thank you!
true
756,171,798
https://api.github.com/repos/huggingface/datasets/issues/1052
https://github.com/huggingface/datasets/pull/1052
1,052
add sharc dataset
closed
0
2020-12-03T12:57:23
2020-12-03T16:44:21
2020-12-03T14:09:54
patil-suraj
[]
This PR adds the ShARC dataset. More info: https://sharc-data.github.io/index.html
true
756,169,049
https://api.github.com/repos/huggingface/datasets/issues/1051
https://github.com/huggingface/datasets/pull/1051
1,051
Add Facebook SimpleQuestionV2
closed
1
2020-12-03T12:53:20
2020-12-03T17:31:59
2020-12-03T17:31:58
abhishekkrthakur
[]
Add simple questions v2: https://research.fb.com/downloads/babi/
true
756,166,728
https://api.github.com/repos/huggingface/datasets/issues/1050
https://github.com/huggingface/datasets/pull/1050
1,050
Add GoEmotions
closed
1
2020-12-03T12:49:53
2020-12-03T17:37:45
2020-12-03T17:30:08
joeddav
[]
Adds the GoEmotions dataset, a nice emotion classification dataset with 27 (multi-)label annotations on reddit comments. Includes both a large raw version and a narrowed version with predefined train/test/val splits, which I've included as separate configs with the latter as a default. - Webpage/repo: https://github.com/google-research/google-research/tree/master/goemotions - Paper: https://arxiv.org/abs/2005.00547
true
756,157,602
https://api.github.com/repos/huggingface/datasets/issues/1049
https://github.com/huggingface/datasets/pull/1049
1,049
Add siswati ner corpus
closed
0
2020-12-03T12:36:00
2020-12-03T17:27:02
2020-12-03T17:26:55
yvonnegitau
[]
true
756,133,072
https://api.github.com/repos/huggingface/datasets/issues/1048
https://github.com/huggingface/datasets/pull/1048
1,048
Adding NCHLT dataset
closed
1
2020-12-03T11:59:25
2020-12-04T13:29:57
2020-12-04T13:29:57
Narsil
[]
https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype_0=database&filtertype_1=title&filter_relational_operator_1=contains&filter_relational_operator_0=equals&filter_1=&filter_0=Monolingual+Text+Corpora%3A+Annotated&filtertype=project&filter_relational_operator=equals&filter=NCHLT+Text+II
true
756,127,490
https://api.github.com/repos/huggingface/datasets/issues/1047
https://github.com/huggingface/datasets/pull/1047
1,047
Add KorNLU
closed
5
2020-12-03T11:50:54
2020-12-03T17:17:07
2020-12-03T17:16:09
sumanthd17
[]
Added Korean NLU datasets. The link to the dataset can be found [here](https://github.com/kakaobrain/KorNLUDatasets) and the paper can be found [here](https://arxiv.org/abs/2004.03289) **Note**: The MNLI tsv file is broken, so this code currently excludes the file. Please suggest other alternative if any @lhoestq - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
true
756,122,709
https://api.github.com/repos/huggingface/datasets/issues/1046
https://github.com/huggingface/datasets/issues/1046
1,046
Dataset.map() turns tensors into lists?
closed
2
2020-12-03T11:43:46
2022-10-05T12:12:41
2022-10-05T12:12:41
tombosc
[]
I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists! ```import datasets import torch from datasets import load_dataset print("version datasets", datasets.__version__) dataset = load_dataset("snli", split='train[0:50]') def tokenizer_fn(example): # actually uses a tokenizer which does something like: return {'input_ids': torch.tensor([[0, 1, 2]])} print("First item in dataset:\n", dataset[0]) tokenized = tokenizer_fn(dataset[0]) print("Tokenized hyp:\n", tokenized) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) ``` The output is: ``` version datasets 1.1.3 Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c) First item in dataset: {'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1} Tokenized hyp: {'input_ids': tensor([[0, 1, 2]])} Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow Tokenized using map: {'input_ids': [[0, 1, 2]]} <class 'torch.Tensor'> <class 'list'> ``` Or am I doing something wrong?
false
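For context on #1046 above: `Dataset.map()` writes its results to Arrow, which is why tensors come back as lists. A minimal sketch of the usual workaround, assuming the `snli` dataset and a toy tokenizer function, is to set the output format to torch after mapping.

```python
from datasets import load_dataset

dataset = load_dataset("snli", split="train[0:50]")

def tokenizer_fn(example):
    # Arrow stores plain lists regardless of whether the function returns tensors.
    return {"input_ids": [0, 1, 2]}

dataset_tok = dataset.map(tokenizer_fn, remove_columns=["label", "premise", "hypothesis"])
dataset_tok.set_format(type="torch", columns=["input_ids"])
print(type(dataset_tok[0]["input_ids"]))  # <class 'torch.Tensor'>
```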
756,120,760
https://api.github.com/repos/huggingface/datasets/issues/1045
https://github.com/huggingface/datasets/pull/1045
1,045
Add xitsonga ner corpus
closed
1
2020-12-03T11:40:48
2020-12-03T17:20:03
2020-12-03T17:19:32
yvonnegitau
[]
true
756,111,647
https://api.github.com/repos/huggingface/datasets/issues/1044
https://github.com/huggingface/datasets/pull/1044
1,044
Add AMTTL Chinese Word Segmentation Dataset
closed
0
2020-12-03T11:27:52
2020-12-03T17:13:14
2020-12-03T17:13:13
JetRunner
[]
true
756,100,717
https://api.github.com/repos/huggingface/datasets/issues/1043
https://github.com/huggingface/datasets/pull/1043
1,043
Add TSAC: Tunisian Sentiment Analysis Corpus
closed
0
2020-12-03T11:12:35
2020-12-03T13:35:05
2020-12-03T13:32:24
abhishekkrthakur
[]
github: https://github.com/fbougares/TSAC paper: https://www.aclweb.org/anthology/W17-1307/
true
756,097,583
https://api.github.com/repos/huggingface/datasets/issues/1042
https://github.com/huggingface/datasets/pull/1042
1,042
Add Big Patent dataset
closed
2
2020-12-03T11:07:59
2020-12-04T04:38:26
2020-12-04T04:38:26
mattbui
[]
- More info on the dataset: https://evasharma.github.io/bigpatent/ - There's another raw version of the dataset available from tfds. However, they're quite large so I don't have the resources to fully test all the configs for that version yet. We'll try to add it later. - ~Currently, there are no dummy data for this dataset yet as I'm facing some problems with generating them. I'm trying to add them later.~
true
756,055,102
https://api.github.com/repos/huggingface/datasets/issues/1041
https://github.com/huggingface/datasets/pull/1041
1,041
Add SuperGLUE metric
closed
0
2020-12-03T10:11:34
2021-02-23T19:02:59
2021-02-23T18:02:12
calpt
[]
Adds a new metric for the SuperGLUE benchmark (similar to the GLUE benchmark metric).
true
756,050,387
https://api.github.com/repos/huggingface/datasets/issues/1040
https://github.com/huggingface/datasets/pull/1040
1,040
Add UN Universal Declaration of Human Rights (UDHR)
closed
0
2020-12-03T10:04:58
2020-12-03T19:20:15
2020-12-03T19:20:11
joeddav
[]
Universal declaration of human rights with translations in 464 languages and dialects. - UN page: https://www.ohchr.org/EN/UDHR/Pages/UDHRIndex.aspx - Raw data source: https://unicode.org/udhr/index.html Each instance of the dataset corresponds to one translation of the document. Since there's only one instance per language (and because there are 500 languages so the dummy data would be messy), I opted to just include them all under the same single config. I wasn't able to find any kind of license so I just copied the copyright notice. I was pretty careful generating the language tags so they _should_ all be correct & consistent BCP-47 codes per the docs.
true
756,000,478
https://api.github.com/repos/huggingface/datasets/issues/1039
https://github.com/huggingface/datasets/pull/1039
1,039
Update ADD NEW DATASET
closed
0
2020-12-03T08:58:32
2020-12-03T09:18:28
2020-12-03T09:18:10
jplu
[]
This PR adds a couple of detail on cloning/rebasing the repo.
true
755,987,997
https://api.github.com/repos/huggingface/datasets/issues/1038
https://github.com/huggingface/datasets/pull/1038
1,038
add med_hop
closed
0
2020-12-03T08:40:27
2020-12-03T16:53:13
2020-12-03T16:52:23
patil-suraj
[]
This PR adds the MedHop dataset from the QAngaroo multi hop reading comprehension datasets More info: http://qangaroo.cs.ucl.ac.uk/index.html
true
755,975,586
https://api.github.com/repos/huggingface/datasets/issues/1037
https://github.com/huggingface/datasets/pull/1037
1,037
Fix docs indentation issues
closed
2
2020-12-03T08:21:34
2020-12-22T16:01:15
2020-12-22T16:01:15
albertvillanova
[]
Replace tabs with spaces.
true
755,953,294
https://api.github.com/repos/huggingface/datasets/issues/1036
https://github.com/huggingface/datasets/pull/1036
1,036
Add PerSenT
closed
2
2020-12-03T07:43:58
2020-12-14T13:40:43
2020-12-14T13:40:43
jeromeku
[]
Added [Person's SentimenT](https://stonybrooknlp.github.io/PerSenT/) dataset.
true
755,947,097
https://api.github.com/repos/huggingface/datasets/issues/1035
https://github.com/huggingface/datasets/pull/1035
1,035
add wiki_hop
closed
1
2020-12-03T07:32:26
2020-12-03T16:43:40
2020-12-03T16:41:12
patil-suraj
[]
This PR adds the WikiHop dataset from the QAngaroo multi hop reading comprehension datasets More info: http://qangaroo.cs.ucl.ac.uk/index.html
true
755,936,327
https://api.github.com/repos/huggingface/datasets/issues/1034
https://github.com/huggingface/datasets/pull/1034
1,034
add scb_mt_enth_2020
closed
0
2020-12-03T07:13:49
2020-12-03T16:57:23
2020-12-03T16:57:23
cstorm125
[]
## scb-mt-en-th-2020: A Large English-Thai Parallel Corpus The primary objective of our work is to build a large-scale English-Thai dataset for machine translation. We construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources, namely news, Wikipedia articles, SMS messages, task-based dialogs, web-crawled data and government documents. Methodology for gathering data, building parallel texts and removing noisy sentence pairs are presented in a reproducible manner. We train machine translation models based on this dataset. Our models' performance are comparable to that of Google Translation API (as of May 2020) for Thai-English and outperform Google when the Open Parallel Corpus (OPUS) is included in the training data for both Thai-English and English-Thai translation. The dataset, pre-trained models, and source code to reproduce our work are available for public use.
true
755,921,927
https://api.github.com/repos/huggingface/datasets/issues/1033
https://github.com/huggingface/datasets/pull/1033
1,033
Add support for ".txm" format
closed
5
2020-12-03T06:52:08
2021-02-21T19:47:11
2021-02-21T19:47:11
albertvillanova
[]
In dummy data generation, add support for XML-like ".txm" file format. Also support filenames with additional compression extension: ".txm.gz".
true
755,858,785
https://api.github.com/repos/huggingface/datasets/issues/1032
https://github.com/huggingface/datasets/pull/1032
1,032
IIT B English to Hindi machine translation dataset
closed
5
2020-12-03T05:18:45
2021-01-10T08:44:51
2021-01-10T08:44:15
spatil6
[]
Adding IIT Bombay English-Hindi Corpus dataset more info : http://www.cfilt.iitb.ac.in/iitb_parallel/
true
755,844,004
https://api.github.com/repos/huggingface/datasets/issues/1031
https://github.com/huggingface/datasets/pull/1031
1,031
add crows_pairs
closed
2
2020-12-03T05:05:11
2020-12-03T18:29:52
2020-12-03T18:29:39
patil-suraj
[]
This PR adds CrowS-Pairs datasets. More info: https://github.com/nyu-mll/crows-pairs/ https://arxiv.org/pdf/2010.00133.pdf
true
755,777,438
https://api.github.com/repos/huggingface/datasets/issues/1030
https://github.com/huggingface/datasets/pull/1030
1,030
allegro_reviews dataset
closed
0
2020-12-03T03:11:39
2020-12-04T10:56:29
2020-12-03T16:34:47
abecadel
[]
- **Name:** *allegro_reviews* - **Description:** *Allegro Reviews is a sentiment analysis dataset, consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl - a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale from one (negative review) to five (positive review).* - **Data:** *https://github.com/allegro/klejbenchmark-allegroreviews* - **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for the Polish language understanding.*
true
755,767,616
https://api.github.com/repos/huggingface/datasets/issues/1029
https://github.com/huggingface/datasets/pull/1029
1,029
Add PEC
closed
5
2020-12-03T02:46:08
2020-12-04T10:58:19
2020-12-03T16:15:06
zhongpeixiang
[]
A persona-based empathetic conversation dataset.
true
755,712,854
https://api.github.com/repos/huggingface/datasets/issues/1028
https://github.com/huggingface/datasets/pull/1028
1,028
Add ASSET dataset for text simplification evaluation
closed
1
2020-12-03T00:28:29
2020-12-17T10:03:06
2020-12-03T16:34:37
yjernite
[]
Adding the ASSET dataset from https://github.com/facebookresearch/asset One config for the simplification data, one for the human ratings of quality. The README.md borrows from that written by @juand-r
true
755,695,420
https://api.github.com/repos/huggingface/datasets/issues/1027
https://github.com/huggingface/datasets/issues/1027
1,027
Hi
closed
0
2020-12-02T23:47:14
2020-12-03T16:42:41
2020-12-03T16:42:41
suemori87
[]
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
755,689,195
https://api.github.com/repos/huggingface/datasets/issues/1026
https://github.com/huggingface/datasets/issues/1026
1,026
Lío o
closed
0
2020-12-02T23:32:25
2020-12-03T16:42:47
2020-12-03T16:42:47
ghost
[]
````l````````` ``` O ``` ````` Ño ``` ```` ```
false
755,673,371
https://api.github.com/repos/huggingface/datasets/issues/1025
https://github.com/huggingface/datasets/pull/1025
1,025
Add Sesotho Ner
closed
4
2020-12-02T23:00:15
2020-12-16T16:27:03
2020-12-16T16:27:02
yvonnegitau
[]
true
755,664,113
https://api.github.com/repos/huggingface/datasets/issues/1024
https://github.com/huggingface/datasets/pull/1024
1,024
Add ZEST: ZEroShot learning from Task descriptions
closed
1
2020-12-02T22:41:20
2020-12-03T19:21:00
2020-12-03T16:09:15
joeddav
[]
Adds the ZEST dataset on zero-shot learning from task descriptions from AI2. - Webpage: https://allenai.org/data/zest - Paper: https://arxiv.org/abs/2011.08115 The nature of this dataset made the supported task tags tricky if you wouldn't mind giving any feedback @yjernite. Also let me know if you think we should have a `other-task-generalization` or something like that...
true
755,655,752
https://api.github.com/repos/huggingface/datasets/issues/1023
https://github.com/huggingface/datasets/pull/1023
1,023
Add Schema Guided Dialogue dataset
closed
0
2020-12-02T22:26:01
2020-12-03T01:18:01
2020-12-03T01:18:01
yjernite
[]
This PR adds the Schema Guided Dialogue dataset created for the DSTC8 challenge - https://github.com/google-research-datasets/dstc8-schema-guided-dialogue A bit simpler than MultiWOZ, the only tricky thing is the sequence of dictionaries that had to be linearized. There is a config for the data proper, and a config for the schemas.
true
755,651,377
https://api.github.com/repos/huggingface/datasets/issues/1022
https://github.com/huggingface/datasets/pull/1022
1,022
add MRQA
closed
1
2020-12-02T22:17:56
2020-12-04T00:34:26
2020-12-04T00:34:25
VictorSanh
[]
MRQA (shared task 2019): out-of-distribution generalization, framed as extractive question answering. The dataset is the concatenation (of subsets) of existing QA datasets processed to match the SQuAD format.
true
755,644,559
https://api.github.com/repos/huggingface/datasets/issues/1021
https://github.com/huggingface/datasets/pull/1021
1,021
Add Gutenberg time references dataset
closed
1
2020-12-02T22:05:26
2020-12-03T10:33:39
2020-12-03T10:33:38
TevenLeScao
[]
This PR adds the gutenberg_time dataset: https://arxiv.org/abs/2011.04124
true
755,601,450
https://api.github.com/repos/huggingface/datasets/issues/1020
https://github.com/huggingface/datasets/pull/1020
1,020
Add Setswana NER
closed
0
2020-12-02T20:52:07
2020-12-03T14:56:14
2020-12-03T14:56:14
yvonnegitau
[]
true
755,582,090
https://api.github.com/repos/huggingface/datasets/issues/1019
https://github.com/huggingface/datasets/pull/1019
1,019
Add caWaC dataset
closed
0
2020-12-02T20:18:55
2020-12-03T14:47:09
2020-12-03T14:47:09
albertvillanova
[]
Add dataset.
true
755,570,882
https://api.github.com/repos/huggingface/datasets/issues/1018
https://github.com/huggingface/datasets/pull/1018
1,018
Add Sepedi NER
closed
1
2020-12-02T20:01:05
2020-12-03T21:47:03
2020-12-03T21:46:38
yvonnegitau
[]
This is a new branch created for this dataset
true
755,558,175
https://api.github.com/repos/huggingface/datasets/issues/1017
https://github.com/huggingface/datasets/pull/1017
1,017
Specify file encoding
closed
1
2020-12-02T19:40:45
2020-12-03T00:44:25
2020-12-03T00:44:25
albertvillanova
[]
If not specified, Python uses system default, which for Windows is not "utf-8".
true
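A minimal illustration of the fix in #1017 above: pass the encoding explicitly so file reads behave the same on Windows and Linux (the filename is illustrative).

```python
# Write and read a file with an explicit encoding instead of the platform default.
with open("dummy_data.txt", "w", encoding="utf-8") as f:
    f.write("café")

with open("dummy_data.txt", encoding="utf-8") as f:
    print(f.read())  # 'café' on any platform
```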
755,521,862
https://api.github.com/repos/huggingface/datasets/issues/1016
https://github.com/huggingface/datasets/pull/1016
1,016
Add CLINC150 dataset
closed
0
2020-12-02T18:44:30
2020-12-03T10:32:04
2020-12-03T10:32:04
sumanthd17
[]
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
true
755,508,841
https://api.github.com/repos/huggingface/datasets/issues/1015
https://github.com/huggingface/datasets/pull/1015
1,015
add hard dataset
closed
1
2020-12-02T18:27:36
2020-12-03T15:03:54
2020-12-03T15:03:54
zaidalyafeai
[]
Hotel Reviews in Arabic language.
true
755,505,851
https://api.github.com/repos/huggingface/datasets/issues/1014
https://github.com/huggingface/datasets/pull/1014
1,014
Add SciTLDR Dataset (Take 2)
closed
6
2020-12-02T18:22:50
2020-12-02T18:55:10
2020-12-02T18:37:58
bharatr21
[]
Adds the SciTLDR Dataset by AI2. Added the `README.md` card with tags to the best of my knowledge. Multi-target summaries or TLDRs of Scientific Documents. Continued from #986
true
755,493,075
https://api.github.com/repos/huggingface/datasets/issues/1013
https://github.com/huggingface/datasets/pull/1013
1,013
Adding CS restaurants dataset
closed
0
2020-12-02T18:02:30
2020-12-02T18:25:20
2020-12-02T18:25:19
TevenLeScao
[]
This PR adds the CS restaurants dataset; this is a re-opening of a previous PR with a chaotic commit history.
true
755,485,658
https://api.github.com/repos/huggingface/datasets/issues/1012
https://github.com/huggingface/datasets/pull/1012
1,012
Adding Evidence Inference Data:
closed
0
2020-12-02T17:51:35
2020-12-03T15:04:46
2020-12-03T15:04:46
Narsil
[]
http://evidence-inference.ebm-nlp.com/download/ https://arxiv.org/pdf/2005.04177.pdf
true
755,463,726
https://api.github.com/repos/huggingface/datasets/issues/1011
https://github.com/huggingface/datasets/pull/1011
1,011
Add Bilingual Corpus of Arabic-English Parallel Tweets
closed
6
2020-12-02T17:20:02
2020-12-04T14:45:10
2020-12-04T14:44:33
sumanthd17
[]
Added Bilingual Corpus of Arabic-English Parallel Tweets. The link to the dataset can be found [here](https://alt.qcri.org/wp-content/uploads/2020/08/Bilingual-Corpus-of-Arabic-English-Parallel-Tweets.zip) and the paper can be found [here](https://www.aclweb.org/anthology/2020.bucc-1.3.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
true
755,432,143
https://api.github.com/repos/huggingface/datasets/issues/1010
https://github.com/huggingface/datasets/pull/1010
1,010
Add NoReC: Norwegian Review Corpus
closed
0
2020-12-02T16:38:29
2021-02-18T14:47:29
2021-02-18T14:47:28
abhishekkrthakur
[]
true
755,384,433
https://api.github.com/repos/huggingface/datasets/issues/1009
https://github.com/huggingface/datasets/pull/1009
1,009
Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset.
closed
0
2020-12-02T15:40:36
2020-12-03T13:16:30
2020-12-03T13:16:29
Narsil
[]
https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679
true
755,372,798
https://api.github.com/repos/huggingface/datasets/issues/1008
https://github.com/huggingface/datasets/pull/1008
1,008
Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset. https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679
closed
1
2020-12-02T15:28:05
2020-12-02T15:40:55
2020-12-02T15:40:55
Narsil
[]
null
true
755,364,078
https://api.github.com/repos/huggingface/datasets/issues/1007
https://github.com/huggingface/datasets/pull/1007
1,007
Include license file in source distribution
closed
0
2020-12-02T15:17:43
2020-12-02T17:58:05
2020-12-02T17:58:05
synapticarbors
[]
It would be helpful to include the license file in the source distribution.
true
755,362,766
https://api.github.com/repos/huggingface/datasets/issues/1006
https://github.com/huggingface/datasets/pull/1006
1,006
add yahoo_answers_topics
closed
1
2020-12-02T15:16:13
2020-12-03T16:44:38
2020-12-02T18:01:32
patil-suraj
[]
This PR adds yahoo answers topic classification dataset. More info: https://github.com/LC-John/Yahoo-Answers-Topic-Classification-Dataset cc @joeddav, @yjernite
true
755,337,255
https://api.github.com/repos/huggingface/datasets/issues/1005
https://github.com/huggingface/datasets/pull/1005
1,005
Adding Autshumato South african langages:
closed
0
2020-12-02T14:47:33
2020-12-03T13:13:30
2020-12-03T13:13:30
Narsil
[]
https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype=database&filter_relational_operator=equals&filter=Multilingual+Text+Corpora%3A+Aligned
true
755,325,368
https://api.github.com/repos/huggingface/datasets/issues/1004
https://github.com/huggingface/datasets/issues/1004
1,004
how large datasets are handled under the hood
closed
3
2020-12-02T14:32:40
2022-10-05T12:13:29
2022-10-05T12:13:29
rabeehkarimimahabadi
[]
Hi, I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how the datasets are handled under the hood? Do you bring everything into memory in the map-style case, or is there some sharding under the hood so that data is only brought into memory when necessary? Thanks
false
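Regarding #1004 above: loaded datasets are backed by memory-mapped Arrow files rather than held in RAM, and `Dataset.shard()` slices them without copying. A small sketch, with the dataset name and shard count chosen purely for illustration:

```python
from datasets import load_dataset

dataset = load_dataset("snli", split="train")  # backed by an on-disk Arrow file
print(dataset.cache_files)                     # paths of the memory-mapped files

# Split into shards (e.g. one per worker) without materializing the data in memory.
shard_0 = dataset.shard(num_shards=4, index=0)
print(len(dataset), len(shard_0))
```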
755,310,318
https://api.github.com/repos/huggingface/datasets/issues/1003
https://github.com/huggingface/datasets/pull/1003
1,003
Add multi_x_science_sum
closed
0
2020-12-02T14:14:01
2020-12-02T17:39:05
2020-12-02T17:39:05
moussaKam
[]
Add Multi-XScience Dataset. github repo: https://github.com/yaolu/Multi-XScience paper: [Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235)
true
755,309,758
https://api.github.com/repos/huggingface/datasets/issues/1002
https://github.com/huggingface/datasets/pull/1002
1,002
Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
closed
2
2020-12-02T14:13:17
2020-12-07T16:58:03
2020-12-03T13:14:33
Narsil
[]
null
true
755,309,071
https://api.github.com/repos/huggingface/datasets/issues/1001
https://github.com/huggingface/datasets/pull/1001
1,001
Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
closed
1
2020-12-02T14:12:30
2020-12-02T14:13:12
2020-12-02T14:13:12
Narsil
[]
null
true
755,292,066
https://api.github.com/repos/huggingface/datasets/issues/1000
https://github.com/huggingface/datasets/pull/1000
1,000
UM005: Urdu <> English Translation Dataset
closed
0
2020-12-02T13:51:35
2020-12-04T15:34:30
2020-12-04T15:34:29
abhishekkrthakur
[]
Adds Urdu-English dataset for machine translation: http://ufal.ms.mff.cuni.cz/umc/005-en-ur/
true
755,246,786
https://api.github.com/repos/huggingface/datasets/issues/999
https://github.com/huggingface/datasets/pull/999
999
add generated_reviews_enth
closed
0
2020-12-02T12:50:43
2020-12-03T11:17:28
2020-12-03T11:17:28
cstorm125
[]
`generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) are English product reviews generated by [CTRL](https://arxiv.org/abs/1909.05858), translated by Google Translate API and annotated as accepted or rejected (`correct`) based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality esitmation (binary label), machine translation, and sentiment analysis.
true
755,235,356
https://api.github.com/repos/huggingface/datasets/issues/998
https://github.com/huggingface/datasets/pull/998
998
adding yahoo_answers_qa
closed
0
2020-12-02T12:33:54
2020-12-02T13:45:40
2020-12-02T13:26:06
patil-suraj
[]
Adding Yahoo Answers QA dataset. More info: https://ciir.cs.umass.edu/downloads/nfL6/
true
755,185,517
https://api.github.com/repos/huggingface/datasets/issues/997
https://github.com/huggingface/datasets/pull/997
997
Microsoft CodeXGlue
closed
4
2020-12-02T11:21:18
2021-06-08T13:42:25
2021-06-08T13:42:24
madlag
[]
Datasets from https://github.com/microsoft/CodeXGLUE This contains 13 datasets: code_x_glue_cc_clone_detection_big_clone_bench code_x_glue_cc_clone_detection_poj_104 code_x_glue_cc_cloze_testing_all code_x_glue_cc_cloze_testing_maxmin code_x_glue_cc_code_completion_line code_x_glue_cc_code_completion_token code_x_glue_cc_code_refinement code_x_glue_cc_code_to_code_trans code_x_glue_cc_defect_detection code_x_glue_ct_code_to_text code_x_glue_tc_nl_code_search_adv code_x_glue_tc_text_to_code code_x_glue_tt_text_to_text
true
755,176,084
https://api.github.com/repos/huggingface/datasets/issues/996
https://github.com/huggingface/datasets/issues/996
996
NotADirectoryError while loading the CNN/Dailymail dataset
closed
12
2020-12-02T11:07:56
2022-02-17T14:13:39
2022-02-17T14:13:39
arc-bu
[]
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
false
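For #996 above, this error usually means the cached download of the CNN/DailyMail stories is incomplete. A hedged sketch of the common fix, assuming a recent `datasets` version that accepts the download mode as a string:

```python
from datasets import load_dataset

# Re-fetch the data instead of reusing the broken cache under
# ~/.cache/huggingface/datasets/downloads.
train = load_dataset(
    "cnn_dailymail",
    "3.0.0",
    split="train",
    download_mode="force_redownload",
)
```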
755,175,199
https://api.github.com/repos/huggingface/datasets/issues/995
https://github.com/huggingface/datasets/pull/995
995
added dataset circa
closed
1
2020-12-02T11:06:39
2020-12-04T10:58:16
2020-12-03T09:39:37
bhavitvyamalik
[]
Dataset Circa added. Only README.md and dataset card left
true
755,146,834
https://api.github.com/repos/huggingface/datasets/issues/994
https://github.com/huggingface/datasets/pull/994
994
Add Sepedi ner corpus
closed
2
2020-12-02T10:30:07
2020-12-03T10:19:14
2020-12-02T18:20:08
yvonnegitau
[]
true
755,135,768
https://api.github.com/repos/huggingface/datasets/issues/993
https://github.com/huggingface/datasets/issues/993
993
Problem downloading amazon_reviews_multi
closed
2
2020-12-02T10:15:57
2022-10-05T12:21:34
2022-10-05T12:21:34
hfawaz
[]
Thanks for adding the dataset. After trying to load the dataset, I am getting the following error: `ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json ` I used the following code to load the dataset: `load_dataset( dataset_name, "all_languages", cache_dir=".data" )` I am using version 1.1.3 of `datasets`. Note that I can perform a successful `wget https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json`
false
755,124,963
https://api.github.com/repos/huggingface/datasets/issues/992
https://github.com/huggingface/datasets/pull/992
992
Add CAIL 2018 dataset
closed
0
2020-12-02T10:01:40
2020-12-02T16:49:02
2020-12-02T16:49:01
JetRunner
[]
true
755,117,902
https://api.github.com/repos/huggingface/datasets/issues/991
https://github.com/huggingface/datasets/pull/991
991
Adding farsi_news dataset (https://github.com/sci2lab/Farsi-datasets)
closed
0
2020-12-02T09:52:19
2020-12-03T11:01:26
2020-12-03T11:01:26
Narsil
[]
null
true
755,097,798
https://api.github.com/repos/huggingface/datasets/issues/990
https://github.com/huggingface/datasets/pull/990
990
Add E2E NLG
closed
0
2020-12-02T09:25:12
2020-12-03T13:08:05
2020-12-03T13:08:04
lhoestq
[]
Adding the E2E NLG dataset. More info here : http://www.macs.hw.ac.uk/InteractionLab/E2E/ ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template and at least fill the tags - [x] Both tests for the real data and the dummy data pass.
true
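The checklist in #990 above refers to the standard dataset-script methods (the builder method is `_info`, singular). Below is a minimal skeleton; the URL, class name, and field names are placeholders rather than the real E2E NLG layout.

```python
import json
import datasets

_URL = "https://example.com/data.zip"  # placeholder download URL

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            description="Skeleton of a dataset loading script.",
            features=datasets.Features({
                "meaning_representation": datasets.Value("string"),
                "human_reference": datasets.Value("string"),
            }),
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": f"{data_dir}/train.jsonl"},
            ),
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                row = json.loads(line)
                yield idx, {
                    "meaning_representation": row["mr"],
                    "human_reference": row["ref"],
                }
```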
755,079,394
https://api.github.com/repos/huggingface/datasets/issues/989
https://github.com/huggingface/datasets/pull/989
989
Fix SV -> NO
closed
0
2020-12-02T08:59:59
2020-12-02T09:18:21
2020-12-02T09:18:14
jplu
[]
This PR fixes the small typo as seen in #956
true
755,069,159
https://api.github.com/repos/huggingface/datasets/issues/988
https://github.com/huggingface/datasets/issues/988
988
making sure datasets are not loaded in memory and distributed training of them
closed
2
2020-12-02T08:45:15
2022-10-05T13:00:42
2022-10-05T13:00:42
rabeehk
[]
Hi, I am dealing with large-scale datasets which I need to train on distributedly. I used the shard function to divide the dataset across the cores, without any sampler, but this does not work for distributed training and does not become any faster than a single TPU core. 1) How can I make sure the data is not loaded into memory? 2) In the case of distributed training with iterable datasets, which measures need to be taken? Is it all just sharding the data? I was wondering if there is a possibility for me to discuss distributed training with iterable datasets using the datasets library with someone. Thanks
false
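Regarding #988 above: a map-style `datasets` Dataset stays memory-mapped on disk, so one common pattern for distributed training is a torch `DistributedSampler` on top of it. A sketch, with the dataset, columns, and world size chosen for illustration (rank and replica count would normally come from `torch.distributed`):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

dataset = load_dataset("snli", split="train")        # memory-mapped, not loaded into RAM
dataset.set_format(type="torch", columns=["label"])  # illustrative column selection

sampler = DistributedSampler(dataset, num_replicas=8, rank=0)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for batch in loader:
    break  # each rank iterates over its own 1/8 of the examples
```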
755,059,469
https://api.github.com/repos/huggingface/datasets/issues/987
https://github.com/huggingface/datasets/pull/987
987
Add OPUS DOGC dataset
closed
1
2020-12-02T08:30:32
2020-12-04T13:27:41
2020-12-04T13:27:41
albertvillanova
[]
true
755,047,470
https://api.github.com/repos/huggingface/datasets/issues/986
https://github.com/huggingface/datasets/pull/986
986
Add SciTLDR Dataset
closed
5
2020-12-02T08:11:16
2020-12-02T18:37:22
2020-12-02T18:02:59
bharatr21
[]
Adds the SciTLDR Dataset by AI2 Added README card with tags to the best of my knowledge Multi-target summaries or TLDRs of Scientific Documents
true
755,020,564
https://api.github.com/repos/huggingface/datasets/issues/985
https://github.com/huggingface/datasets/pull/985
985
Add GAP dataset
closed
3
2020-12-02T07:25:11
2022-10-06T14:11:52
2020-12-02T16:16:32
VictorSanh
[]
GAP dataset Gender bias coreference resolution
true
755,009,916
https://api.github.com/repos/huggingface/datasets/issues/984
https://github.com/huggingface/datasets/pull/984
984
committing Whoa file
closed
2
2020-12-02T07:07:46
2020-12-02T16:15:29
2020-12-02T15:40:58
StulosDunamos
[]
true
754,966,620
https://api.github.com/repos/huggingface/datasets/issues/983
https://github.com/huggingface/datasets/pull/983
983
add mc taco
closed
0
2020-12-02T05:54:55
2020-12-02T15:37:47
2020-12-02T15:37:46
VictorSanh
[]
MC-TACO Temporal commonsense knowledge
true
754,946,337
https://api.github.com/repos/huggingface/datasets/issues/982
https://github.com/huggingface/datasets/pull/982
982
add prachathai67k take2
closed
0
2020-12-02T05:12:01
2020-12-02T10:18:11
2020-12-02T10:18:11
cstorm125
[]
I decided it will be faster to create a new pull request instead of fixing the rebase issues. continuing from https://github.com/huggingface/datasets/pull/954
true
754,937,612
https://api.github.com/repos/huggingface/datasets/issues/981
https://github.com/huggingface/datasets/pull/981
981
add wisesight_sentiment take2
closed
0
2020-12-02T04:50:59
2020-12-02T10:37:13
2020-12-02T10:37:13
cstorm125
[]
Take 2 since last time the rebase issues were taking me too much time to fix as opposed to just open a new one.
true
754,899,301
https://api.github.com/repos/huggingface/datasets/issues/980
https://github.com/huggingface/datasets/pull/980
980
Wongnai - Thai reviews dataset
closed
2
2020-12-02T03:20:08
2020-12-02T15:34:41
2020-12-02T15:30:05
mapmeld
[]
40,000 reviews, previously released on GitHub ( https://github.com/wongnai/wongnai-corpus ) with an LGPL license, and on a closed Kaggle competition ( https://www.kaggle.com/c/wongnai-challenge-review-rating-prediction/ )
true
754,893,337
https://api.github.com/repos/huggingface/datasets/issues/979
https://github.com/huggingface/datasets/pull/979
979
[WIP] Add multi woz
closed
0
2020-12-02T03:05:42
2020-12-02T16:07:16
2020-12-02T16:07:16
yjernite
[]
This PR adds version 2.2 of the Multi-domain Wizard of Oz dataset: https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2 It was a pretty big chunk of work to figure out the structure, so I still have to add the description to the README.md. On the plus side, the structure is broadly similar to that of the Google Schema Guided Dialogue [dataset](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue), so I will take care of that one next.
true
754,854,478
https://api.github.com/repos/huggingface/datasets/issues/978
https://github.com/huggingface/datasets/pull/978
978
Add code refinement
closed
5
2020-12-02T01:29:58
2020-12-07T01:52:58
2020-12-07T01:52:58
reshinthadithyan
[]
### OVERVIEW Millions of open-source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs Code refinement aims to automatically fix bugs in the code, which can contribute to reducing the cost of bug-fixes for developers. Given a piece of Java code with bugs, the task is to remove the bugs to output the refined code.
true
754,839,594
https://api.github.com/repos/huggingface/datasets/issues/977
https://github.com/huggingface/datasets/pull/977
977
Add ROPES dataset
closed
0
2020-12-02T00:52:10
2020-12-02T10:58:36
2020-12-02T10:58:35
VictorSanh
[]
ROPES dataset Reasoning over paragraph effects in situations - testing a system's ability to apply knowledge from a passage of text to a new situation. The task is framed into a reading comprehension task following squad-style extractive qa. One thing to note: labels of the test set are hidden (leaderboard submission) so I encoded that as an empty list (ropes.py:L125)
true
754,826,146
https://api.github.com/repos/huggingface/datasets/issues/976
https://github.com/huggingface/datasets/pull/976
976
Arabic pos dialect
closed
2
2020-12-02T00:21:13
2020-12-09T17:30:32
2020-12-09T17:30:32
mcmillanmajora
[]
A README.md and loading script for the Arabic POS Dialect dataset. The README is missing the sections on personal information, biases, and limitations, as it would probably be better for those to be filled by someone who can read the contents of the dataset and is familiar with Arabic NLP.
true
754,823,701
https://api.github.com/repos/huggingface/datasets/issues/975
https://github.com/huggingface/datasets/pull/975
975
add MeTooMA dataset
closed
0
2020-12-02T00:15:55
2020-12-02T10:58:56
2020-12-02T10:58:55
akash418
[]
This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guidelines. Paper: https://ojs.aaai.org/index.php/ICWSM/article/view/7292 Dataset Link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU --- annotations_creators: - expert-generated language_creators: - found languages: - en multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification - text-retrieval task_ids: - multi-class-classification - multi-label-classification --- # Dataset Card for #MeTooMA dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU - **Paper:** https://ojs.aaai.org//index.php/ICWSM/article/view/7292 - **Point of Contact:** https://github.com/midas-research/MeTooMA ### Dataset Summary - The dataset consists of tweets belonging to #MeToo movement on Twitter, labeled into different categories. - This dataset includes more data points and has more labels than any of the previous datasets that contain social media posts about sexual abuse disclosures. Please refer to the Related Datasets of the publication for detailed information about this. - Due to Twitter's development policies, the authors provide only the tweet IDs and corresponding labels, other data can be fetched via Twitter API. - The data has been labeled by experts, with the majority taken into the account for deciding the final label. - The authors provide these labels for each of the tweets. - Relevance - Directed Hate - Generalized Hate - Sarcasm - Allegation - Justification - Refutation - Support - Oppose - The definitions for each task/label are in the main publication. - Please refer to the accompanying paper https://aaai.org/ojs/index.php/ICWSM/article/view/7292 for statistical analysis on the textual data extracted from this dataset. - The language of all the tweets in this dataset is English - Time period: October 2018 - December 2018 - Suggested Use Cases of this dataset: - Evaluating usage of linguistic acts such as hate-speech and sarcasm in the context of public sexual abuse disclosures. - Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations. 
- Identifying how influential people were portrayed on the public platform in the events of mass social movements. - Polarization analysis based on graph simulations of social nodes of users involved in the #MeToo movement. ### Supported Tasks and Leaderboards Multi-Label and Multi-Class Classification ### Languages English ## Dataset Structure - The dataset is structured into CSV format with TweetID and accompanying labels. - Train and Test sets are split into respective files. ### Data Instances Tweet ID and the appropriate labels ### Data Fields Tweet ID and appropriate labels (binary label applicable for a data point) and multiple labels for each Tweet ID ### Data Splits - Train: 7979 - Test: 1996 ## Dataset Creation ### Curation Rationale - Twitter was the major source of all the public disclosures of sexual abuse incidents during the #MeToo movement. - People expressed their opinions over issues that were previously missing from the social media space. - This provides an option to study the linguistic behaviors of social media users in an informal setting, therefore the authors decide to curate this annotated dataset. - The authors expect this dataset would be of great interest and use to both computational and socio-linguists. - For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests in social media. ### Source Data - Source of all the data points in this dataset is a Twitter social media platform. #### Initial Data Collection and Normalization - All the tweets are mined from Twitter with initial search parameters identified using keywords from the #MeToo movement. - Redundant keywords were removed based on manual inspection. - Public streaming APIs of Twitter was used for querying with the selected keywords. - Based on text de-duplication and cosine similarity score, the set of tweets were pruned. - Non-English tweets were removed. - The final set was labeled by experts with the majority label taken into the account for deciding the final label. - Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292 #### Who are the source language producers? Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292 ### Annotations #### Annotation process - The authors chose against crowdsourcing for labeling this dataset due to its highly sensitive nature. - The annotators are domain experts having degrees in advanced clinical psychology and gender studies. - They were provided a guidelines document with instructions about each task and its definitions, labels, and examples. - They studied the document, worked on a few examples to get used to this annotation task. - They also provided feedback for improving the class definitions. - The annotation process is not mutually exclusive, implying that the presence of one label does not mean the absence of the other one. #### Who are the annotators? - The annotators are domain experts having a degree in clinical psychology and gender studies. - Please refer to the accompanying paper for a detailed annotation process. 
### Personal and Sensitive Information - Considering Twitter's policy for distribution of data, only Tweet ID and applicable labels are shared for public use. - It is highly encouraged to use this dataset for scientific purposes only. - This dataset collection completely follows the Twitter mandated guidelines for distribution and usage. ## Considerations for Using the Data ### Social Impact of Dataset - The authors of this dataset do not intend to conduct a population-centric analysis of the #MeToo movement on Twitter. - The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention, these should be used to assist already existing human intervention tools and therapies. - Enough care has been taken to ensure that this work comes off as trying to target a specific person for their the personal stance of issues pertaining to the #MeToo movement. - The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner. - Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset and the social impact of this work. ### Discussion of Biases - The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of the community affected by sexual abuse. - Any work undertaken on this dataset should aim to minimize the bias against minority groups which might amplify in cases of a sudden outburst of public reactions over sensitive social media discussions. ### Other Known Limitations - Considering privacy concerns, social media practitioners should be aware of making automated interventions to aid the victims of sexual abuse as some people might not prefer to disclose their notions. - Concerned social media users might also repeal their social information if they found out that their information is being used for computational purposes, hence it is important to seek subtle individual consent before trying to profile authors involved in online discussions to uphold personal privacy. ## Additional Information Please refer to this link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU ### Dataset Curators - If you use the corpus in a product or application, then please credit the authors and [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi] (http://midas.iiitd.edu.in) appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - If interested in the commercial use of the corpus, send an email to midas@iiitd.ac.in. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your social media data. - if interested in a collaborative research project. 
### Licensing Information [More Information Needed] ### Citation Information Please cite the following publication if you make use of the dataset: https://ojs.aaai.org/index.php/ICWSM/article/view/7292 ``` @article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https://aaai.org/ojs/index.php/ICWSM/article/view/7292}, abstractNote={&lt;p&gt;In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.&lt;/p&#38;gt;}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} } ```
true
754,811,185
https://api.github.com/repos/huggingface/datasets/issues/974
https://github.com/huggingface/datasets/pull/974
974
Add MeTooMA Dataset
closed
0
2020-12-01T23:44:01
2020-12-01T23:57:58
2020-12-01T23:57:58
akash418
[]
true
754,807,963
https://api.github.com/repos/huggingface/datasets/issues/973
https://github.com/huggingface/datasets/pull/973
973
Adding The Microsoft Terminology Collection dataset.
closed
9
2020-12-01T23:36:23
2020-12-04T15:25:44
2020-12-04T15:12:46
leoxzhao
[]
true
754,787,314
https://api.github.com/repos/huggingface/datasets/issues/972
https://github.com/huggingface/datasets/pull/972
972
Add Children's Book Test (CBT) dataset
closed
2
2020-12-01T22:53:26
2021-03-19T11:30:03
2021-03-19T11:30:03
thomwolf
[]
Add the Children's Book Test (CBT) from Facebook (Hill et al. 2016). Sentence completion given a few sentences as context from a children's book.
true
754,784,041
https://api.github.com/repos/huggingface/datasets/issues/971
https://github.com/huggingface/datasets/pull/971
971
add piqa
closed
0
2020-12-01T22:47:04
2020-12-02T09:58:02
2020-12-02T09:58:01
VictorSanh
[]
Physical Interaction: Question Answering (commonsense) https://yonatanbisk.com/piqa/
true
754,697,489
https://api.github.com/repos/huggingface/datasets/issues/970
https://github.com/huggingface/datasets/pull/970
970
Add SWAG
closed
0
2020-12-01T20:21:05
2020-12-02T09:55:16
2020-12-02T09:55:15
VictorSanh
[]
Commonsense NLI -> https://rowanzellers.com/swag/
true
754,681,940
https://api.github.com/repos/huggingface/datasets/issues/969
https://github.com/huggingface/datasets/pull/969
969
Add wiki auto dataset
closed
0
2020-12-01T19:58:11
2020-12-02T16:19:14
2020-12-02T16:19:14
yjernite
[]
This PR adds the WikiAuto sentence simplification dataset https://github.com/chaojiang06/wiki-auto This is also a prospective GEM task, hence the README.md
true
754,659,015
https://api.github.com/repos/huggingface/datasets/issues/968
https://github.com/huggingface/datasets/pull/968
968
ADD Afrikaans NER
closed
1
2020-12-01T19:23:03
2020-12-02T09:41:28
2020-12-02T09:41:28
yvonnegitau
[]
Afrikaans NER corpus
true
754,578,988
https://api.github.com/repos/huggingface/datasets/issues/967
https://github.com/huggingface/datasets/pull/967
967
Add CS Restaurants dataset
closed
4
2020-12-01T17:17:37
2020-12-02T17:57:44
2020-12-02T17:57:25
TevenLeScao
[]
This PR adds the Czech restaurants dataset for Czech NLG.
true
754,558,686
https://api.github.com/repos/huggingface/datasets/issues/966
https://github.com/huggingface/datasets/pull/966
966
Add CLINC150 Dataset
closed
2
2020-12-01T16:50:13
2020-12-02T18:45:43
2020-12-02T18:45:30
sumanthd17
[]
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
true
754,553,169
https://api.github.com/repos/huggingface/datasets/issues/965
https://github.com/huggingface/datasets/pull/965
965
Add CLINC150 Dataset
closed
0
2020-12-01T16:43:00
2020-12-01T16:51:16
2020-12-01T16:49:15
sumanthd17
[]
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
true
754,474,660
https://api.github.com/repos/huggingface/datasets/issues/964
https://github.com/huggingface/datasets/pull/964
964
Adding the WebNLG dataset
closed
1
2020-12-01T15:05:23
2020-12-02T17:34:05
2020-12-02T17:34:05
yjernite
[]
This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration. More information can be found [here](https://webnlg-challenge.loria.fr/) Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB even keeping only one example per file).
true
754,451,234
https://api.github.com/repos/huggingface/datasets/issues/963
https://github.com/huggingface/datasets/pull/963
963
add CODAH dataset
closed
0
2020-12-01T14:37:05
2020-12-02T13:45:58
2020-12-02T13:21:25
patil-suraj
[]
Adding CODAH dataset. More info: https://github.com/Websail-NU/CODAH
true
754,441,428
https://api.github.com/repos/huggingface/datasets/issues/962
https://github.com/huggingface/datasets/pull/962
962
Add Danish Political Comments Dataset
closed
0
2020-12-01T14:28:32
2020-12-03T10:31:55
2020-12-03T10:31:54
abhishekkrthakur
[]
true
754,434,398
https://api.github.com/repos/huggingface/datasets/issues/961
https://github.com/huggingface/datasets/issues/961
961
sample multiple datasets
closed
6
2020-12-01T14:20:02
2024-06-17T08:23:20
2023-07-20T14:08:57
rabeehk
[]
Hi, I am dealing with multiple datasets. I need a dataloader over them with the condition that, in each batch, the samples come from only one of the datasets. My main question is: - I need a way to sample the datasets with some weights, let's say 2x dataset1 and 1x dataset2; could you point me to how I can do it? Sub-questions: - I want to concatenate the sampled datasets and define one dataloader on the result, but I need a way to make sure each batch comes from a single dataset; could you assist me with how to do this? - I use iterable-type datasets, but I still need a method of shuffling, since skipping it causes accuracy issues. Thanks for the help.
false
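For #961 above, later versions of `datasets` provide `interleave_datasets()`, which samples from several datasets with given probabilities (roughly the 2x/1x weighting asked for); it does not by itself guarantee that every batch comes from a single dataset, which would still need a custom sampler. A sketch using two splits of the same dataset so the features match:

```python
from datasets import load_dataset, interleave_datasets

dataset1 = load_dataset("snli", split="train")
dataset2 = load_dataset("snli", split="validation")

# Sample dataset1 twice as often as dataset2.
mixed = interleave_datasets([dataset1, dataset2], probabilities=[2 / 3, 1 / 3], seed=42)
print(mixed[0])
```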