Column schema:

id:              int64, 599M – 3.29B
url:             string, length 58 – 61
html_url:        string, length 46 – 51
number:          int64, 1 – 7.72k
title:           string, length 1 – 290
state:           string, 2 values
comments:        int64, 0 – 70
created_at:      timestamp[s], 2020-04-14 10:18:02 – 2025-08-05 09:28:51
updated_at:      timestamp[s], 2020-04-27 16:04:17 – 2025-08-05 11:39:56
closed_at:       timestamp[s], 2020-04-14 12:01:40 – 2025-08-01 05:15:45
user_login:      string, length 3 – 26
labels:          list, length 0 – 4
body:            string, length 0 – 228k
is_pull_request: bool, 2 classes
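The rows below are GitHub issues and pull requests from huggingface/datasets flattened one field per line. As a minimal sketch of how a dump with this schema could be consumed with the `datasets` library (the repository id is a hypothetical placeholder, not named anywhere in this dump):

```python
from datasets import load_dataset

# Hypothetical repo id standing in for wherever this dump is published.
ds = load_dataset("some-org/github-issues-dump", split="train")

# Filter to open, non-PR issues and inspect the columns listed in the schema above.
open_issues = ds.filter(lambda row: row["state"] == "open" and not row["is_pull_request"])
print(open_issues.column_names)  # id, url, html_url, number, title, state, ...
print(open_issues[0]["title"], open_issues[0]["created_at"])
```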
760,101,728
https://api.github.com/repos/huggingface/datasets/issues/1361
https://github.com/huggingface/datasets/pull/1361
1,361
adding bprec
closed
2
2020-12-09T08:02:45
2020-12-16T17:04:44
2020-12-16T17:04:44
kldarek
[]
Brand-Product Relation Extraction Corpora in Polish
true
760,088,419
https://api.github.com/repos/huggingface/datasets/issues/1360
https://github.com/huggingface/datasets/pull/1360
1,360
add wisesight1000
closed
0
2020-12-09T07:41:30
2020-12-10T14:28:41
2020-12-10T14:28:41
cstorm125
[]
`wisesight1000` contains Thai social media texts randomly drawn from the full `wisesight-sentiment`, tokenized by human annotators. It includes 250 samples from each of the labels `neg` (negative), `neu` (neutral), `pos` (positive), and `q` (question). Some texts are removed because they look like spam. Because these samples are representative of real-world content, we believe having these annotated samples will allow the community to robustly evaluate tokenization algorithms.
true
760,055,969
https://api.github.com/repos/huggingface/datasets/issues/1359
https://github.com/huggingface/datasets/pull/1359
1,359
Add JNLPBA
closed
0
2020-12-09T06:48:51
2020-12-10T14:24:36
2020-12-10T14:24:36
edugp
[]
true
760,031,131
https://api.github.com/repos/huggingface/datasets/issues/1358
https://github.com/huggingface/datasets/pull/1358
1,358
Add spider dataset
closed
0
2020-12-09T06:06:18
2020-12-10T15:12:31
2020-12-10T15:12:31
olinguyen
[]
This PR adds the Spider dataset, a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases. Dataset website: https://yale-lily.github.io/spider Paper link: https://www.aclweb.org/anthology/D18-1425/
true
760,023,525
https://api.github.com/repos/huggingface/datasets/issues/1357
https://github.com/huggingface/datasets/pull/1357
1,357
Youtube caption corrections
closed
10
2020-12-09T05:52:34
2020-12-15T18:12:56
2020-12-15T18:12:56
2dot71mily
[]
This PR adds a new dataset of YouTube captions, errors and corrections. This dataset was created in just the last week, inspired by this sprint!
true
759,994,457
https://api.github.com/repos/huggingface/datasets/issues/1356
https://github.com/huggingface/datasets/pull/1356
1,356
Add StackOverflow StackSample dataset
closed
5
2020-12-09T04:59:51
2020-12-21T14:48:21
2020-12-21T14:48:21
ncoop57
[]
This PR adds the StackOverflow StackSample dataset from Kaggle: https://www.kaggle.com/stackoverflow/stacksample Ran through all of the steps. However, since my dataset requires manually downloading the data, I was unable to run the pytest on the real dataset (the dummy data pytest passed).
true
759,994,208
https://api.github.com/repos/huggingface/datasets/issues/1355
https://github.com/huggingface/datasets/pull/1355
1,355
Addition of py_ast dataset
closed
0
2020-12-09T04:59:17
2020-12-09T16:19:49
2020-12-09T16:19:48
reshinthadithyan
[]
@lhoestq as discussed in PR #1195
true
759,987,763
https://api.github.com/repos/huggingface/datasets/issues/1354
https://github.com/huggingface/datasets/pull/1354
1,354
Add TweetQA dataset
closed
0
2020-12-09T04:44:01
2020-12-10T15:10:30
2020-12-10T15:10:30
anaerobeth
[]
This PR adds the TweetQA dataset, the first dataset for QA on social media data by leveraging news media and crowdsourcing. Paper: https://arxiv.org/abs/1907.06292 Repository: https://tweetqa.github.io/
true
759,980,004
https://api.github.com/repos/huggingface/datasets/issues/1353
https://github.com/huggingface/datasets/pull/1353
1,353
New instruction for how to generate dataset_infos.json
closed
0
2020-12-09T04:24:40
2020-12-10T13:45:15
2020-12-10T13:45:15
ncoop57
[]
Add additional instructions for how to generate dataset_infos.json for manual download datasets. Information courtesy of `Taimur Ibrahim` from the slack channel
true
759,978,543
https://api.github.com/repos/huggingface/datasets/issues/1352
https://github.com/huggingface/datasets/pull/1352
1,352
change url for prachathai67k to internet archive
closed
0
2020-12-09T04:20:37
2020-12-10T13:42:17
2020-12-10T13:42:17
cstorm125
[]
`prachathai67k` is currently downloaded from git-lfs of PyThaiNLP github. Since the size is quite large (~250MB), I moved the URL to archive.org in order to prevent rate limit issues.
true
759,902,770
https://api.github.com/repos/huggingface/datasets/issues/1351
https://github.com/huggingface/datasets/pull/1351
1,351
added craigslist_bargains
closed
0
2020-12-09T01:02:31
2020-12-10T14:14:34
2020-12-10T14:14:34
ZacharySBrown
[]
`craigslist_bargains` data set from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/) (Cleaned up version of #1278)
true
759,879,789
https://api.github.com/repos/huggingface/datasets/issues/1350
https://github.com/huggingface/datasets/pull/1350
1,350
add LeNER-Br dataset
closed
4
2020-12-09T00:06:38
2020-12-10T14:11:33
2020-12-10T14:11:33
jonatasgrosman
[]
Adding the LeNER-Br dataset, a Portuguese language dataset for named entity recognition
true
759,870,664
https://api.github.com/repos/huggingface/datasets/issues/1349
https://github.com/huggingface/datasets/pull/1349
1,349
initial commit for MultiReQA
closed
2
2020-12-08T23:44:34
2020-12-09T16:46:37
2020-12-09T16:46:37
Karthik-Bhaskar
[]
Added MultiReQA, which is a dataset containing the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA.
true
759,869,849
https://api.github.com/repos/huggingface/datasets/issues/1348
https://github.com/huggingface/datasets/pull/1348
1,348
add Yoruba NER dataset
closed
4
2020-12-08T23:42:35
2020-12-10T14:30:25
2020-12-10T14:09:43
dadelani
[]
Added Yoruba GV dataset based on this paper
true
759,845,231
https://api.github.com/repos/huggingface/datasets/issues/1347
https://github.com/huggingface/datasets/pull/1347
1,347
Add spanish billion words corpus
closed
4
2020-12-08T22:51:38
2020-12-11T11:26:39
2020-12-11T11:15:28
mariagrandury
[]
Add an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.
true
759,844,137
https://api.github.com/repos/huggingface/datasets/issues/1346
https://github.com/huggingface/datasets/pull/1346
1,346
Add MultiBooked dataset
closed
1
2020-12-08T22:49:36
2020-12-15T17:02:09
2020-12-15T17:02:09
albertvillanova
[]
Add dataset.
true
759,835,486
https://api.github.com/repos/huggingface/datasets/issues/1345
https://github.com/huggingface/datasets/pull/1345
1,345
First commit of NarrativeQA Dataset
closed
0
2020-12-08T22:31:59
2021-01-25T15:31:52
2020-12-09T09:29:52
rsanjaykamath
[]
Added NarrativeQA dataset and included a manual downloading option to download scripts from the original scripts provided by the authors.
true
759,831,925
https://api.github.com/repos/huggingface/datasets/issues/1344
https://github.com/huggingface/datasets/pull/1344
1,344
Add hausa ner corpus
closed
0
2020-12-08T22:25:04
2020-12-08T23:11:55
2020-12-08T23:11:55
dadelani
[]
Added Hausa VOA NER data
true
759,809,999
https://api.github.com/repos/huggingface/datasets/issues/1343
https://github.com/huggingface/datasets/pull/1343
1,343
Add LiveQA
closed
0
2020-12-08T21:52:36
2020-12-14T09:40:28
2020-12-14T09:40:28
j-chim
[]
This PR adds LiveQA, the Chinese real-time/timeline-based QA task by [Liu et al., 2020](https://arxiv.org/pdf/2010.00526.pdf).
true
759,794,121
https://api.github.com/repos/huggingface/datasets/issues/1342
https://github.com/huggingface/datasets/pull/1342
1,342
[yaml] Fix metadata according to pre-specified scheme
closed
0
2020-12-08T21:26:34
2020-12-09T15:37:27
2020-12-09T15:37:26
julien-c
[]
@lhoestq @yjernite
true
759,784,557
https://api.github.com/repos/huggingface/datasets/issues/1341
https://github.com/huggingface/datasets/pull/1341
1,341
added references to only data card creator to all guides
closed
0
2020-12-08T21:11:11
2020-12-08T21:36:12
2020-12-08T21:36:11
yjernite
[]
We can now use the wonderful online form for dataset cards created by @evrardts
true
759,765,408
https://api.github.com/repos/huggingface/datasets/issues/1340
https://github.com/huggingface/datasets/pull/1340
1,340
:fist: ¡Viva la Independencia!
closed
1
2020-12-08T20:43:43
2020-12-14T10:36:01
2020-12-14T10:36:01
lewtun
[]
Adds the Catalonia Independence Corpus for stance-detection of Tweets. Ready for review!
true
759,744,088
https://api.github.com/repos/huggingface/datasets/issues/1339
https://github.com/huggingface/datasets/pull/1339
1,339
hate_speech_18 initial commit
closed
2
2020-12-08T20:10:08
2020-12-12T16:17:32
2020-12-12T16:17:32
czabo
[]
true
759,725,770
https://api.github.com/repos/huggingface/datasets/issues/1338
https://github.com/huggingface/datasets/pull/1338
1,338
Add GigaFren Dataset
closed
1
2020-12-08T19:42:04
2020-12-14T10:03:47
2020-12-14T10:03:46
abhishekkrthakur
[]
true
759,710,482
https://api.github.com/repos/huggingface/datasets/issues/1337
https://github.com/huggingface/datasets/pull/1337
1,337
Add spanish billion words
closed
1
2020-12-08T19:18:02
2020-12-08T22:59:38
2020-12-08T21:15:27
mariagrandury
[]
Add an unannotated corpus of the Spanish language of nearly 1.5 billion words, compiled from different resources from the web. The dataset needs 10 GB (download: 1.89 GiB, generated: 8.34 GiB, post-processed: Unknown size, total: 10.22 GiB). The test using dummy data passes, but my laptop isn't able to run it on the real data (I left it running for over 8 hours and it didn't finish).
true
759,706,932
https://api.github.com/repos/huggingface/datasets/issues/1336
https://github.com/huggingface/datasets/pull/1336
1,336
Add dataset Yoruba BBC Topic Classification
closed
0
2020-12-08T19:12:18
2020-12-10T11:27:41
2020-12-10T11:27:41
michael-aloys
[]
Added new dataset Yoruba BBC Topic Classification. Contains the loading script as well as the dataset card including YAML tags.
true
759,705,835
https://api.github.com/repos/huggingface/datasets/issues/1335
https://github.com/huggingface/datasets/pull/1335
1,335
Added Bianet dataset
closed
1
2020-12-08T19:10:32
2020-12-14T10:00:56
2020-12-14T10:00:56
param087
[]
Hi :hugs:, This is a PR for [Bianet: A parallel news corpus in Turkish, Kurdish and English; Source](http://opus.nlpl.eu/Bianet.php) dataset
true
759,699,993
https://api.github.com/repos/huggingface/datasets/issues/1334
https://github.com/huggingface/datasets/pull/1334
1,334
Add QED Amara Dataset
closed
0
2020-12-08T19:01:13
2020-12-10T11:17:25
2020-12-10T11:15:57
abhishekkrthakur
[]
true
759,687,836
https://api.github.com/repos/huggingface/datasets/issues/1333
https://github.com/huggingface/datasets/pull/1333
1,333
Add Tanzil Dataset
closed
0
2020-12-08T18:45:15
2020-12-10T11:17:56
2020-12-10T11:14:43
abhishekkrthakur
[]
true
759,679,135
https://api.github.com/repos/huggingface/datasets/issues/1332
https://github.com/huggingface/datasets/pull/1332
1,332
Add Open Subtitles Dataset
closed
0
2020-12-08T18:31:45
2020-12-10T11:17:38
2020-12-10T11:13:18
abhishekkrthakur
[]
true
759,677,189
https://api.github.com/repos/huggingface/datasets/issues/1331
https://github.com/huggingface/datasets/pull/1331
1,331
First version of the new dataset hausa_voa_topics
closed
0
2020-12-08T18:28:52
2020-12-10T11:09:53
2020-12-10T11:09:53
michael-aloys
[]
Contains loading script as well as dataset card including YAML tags.
true
759,657,324
https://api.github.com/repos/huggingface/datasets/issues/1330
https://github.com/huggingface/datasets/pull/1330
1,330
added un_ga dataset
closed
2
2020-12-08T17:58:38
2020-12-14T17:52:34
2020-12-14T17:52:34
param087
[]
Hi :hugs:, This is a PR for [United nations general assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset
true
759,654,174
https://api.github.com/repos/huggingface/datasets/issues/1329
https://github.com/huggingface/datasets/pull/1329
1,329
Add yoruba ner corpus
closed
0
2020-12-08T17:54:00
2020-12-08T23:11:12
2020-12-08T23:11:12
dadelani
[]
true
759,634,907
https://api.github.com/repos/huggingface/datasets/issues/1328
https://github.com/huggingface/datasets/pull/1328
1,328
Added the NewsPH Raw dataset and corresponding dataset card
closed
0
2020-12-08T17:25:45
2020-12-10T11:04:34
2020-12-10T11:04:34
jcblaisecruz02
[]
This PR adds the original NewsPH dataset which is used to autogenerate the NewsPH-NLI dataset. Reopened a new PR as the previous one had problems. Paper: https://arxiv.org/abs/2010.11574 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
true
759,629,321
https://api.github.com/repos/huggingface/datasets/issues/1327
https://github.com/huggingface/datasets/pull/1327
1,327
Add msr_genomics_kbcomp dataset
closed
0
2020-12-08T17:18:20
2020-12-08T18:18:32
2020-12-08T18:18:06
manandey
[]
true
759,611,784
https://api.github.com/repos/huggingface/datasets/issues/1326
https://github.com/huggingface/datasets/pull/1326
1,326
TEP: Tehran English-Persian parallel corpus
closed
0
2020-12-08T16:56:53
2020-12-19T14:55:03
2020-12-10T11:25:17
spatil6
[]
TEP: Tehran English-Persian parallel corpus. More info: http://opus.nlpl.eu/TEP.php
true
759,595,556
https://api.github.com/repos/huggingface/datasets/issues/1325
https://github.com/huggingface/datasets/pull/1325
1,325
Add humicroedit dataset
closed
2
2020-12-08T16:35:46
2020-12-17T17:59:09
2020-12-17T17:59:09
saradhix
[]
Pull request for adding humicroedit dataset
true
759,587,864
https://api.github.com/repos/huggingface/datasets/issues/1324
https://github.com/huggingface/datasets/issues/1324
1,324
❓ Sharing ElasticSearch indexed dataset
open
3
2020-12-08T16:25:58
2020-12-22T07:50:56
null
pietrolesci
[ "dataset request" ]
Hi there,

First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing.

**Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200. I added an elasticsearch index and I was wondering
- how can I know where it has been saved?
- how can I share the indexed dataset with others?

I tried to dig into the docs, but could not find anything about that. Thank you very much for your help.

Best, Pietro

Edit: apologies for the wrong label
false
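The question in #1324 above is about where an added Elasticsearch index lives and how to share it. A minimal sketch of the indexing workflow it refers to, assuming a local Elasticsearch instance at localhost:9200 as in the issue (the dataset and column are placeholders for illustration):

```python
from datasets import load_dataset

# Placeholder dataset/column; the issue does not name the dataset being indexed.
ds = load_dataset("crime_and_punish", split="train")

# The index is created inside the Elasticsearch cluster (here localhost:9200),
# not inside the dataset's Arrow files, so sharing the dataset files alone
# does not carry the index with it.
ds.add_elasticsearch_index("line", host="localhost", port="9200", es_index_name="my_index")

# Query the index for nearest examples.
scores, examples = ds.get_nearest_examples("line", "the knife", k=5)
print(examples["line"][:2])
```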
759,581,919
https://api.github.com/repos/huggingface/datasets/issues/1323
https://github.com/huggingface/datasets/pull/1323
1,323
Add CC-News dataset of English language articles
closed
5
2020-12-08T16:18:15
2021-02-01T16:55:49
2021-02-01T16:55:49
vblagoje
[]
Adds the [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/) dataset. It contains 708,241 English-language news articles. Although each article has a language field, these tags are not reliable. I've used the Spacy language detection [pipeline](https://spacy.io/universe/project/spacy-langdetect) to confirm that the article language is indeed English. The prepared dataset is temporarily hosted on my private Google Storage [bucket](https://storage.googleapis.com/hf_datasets/cc_news.tar.gz). We can move it to HF storage and update this PR before merging.
true
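The PR body for #1323 above mentions confirming article language with the spacy-langdetect pipeline. As a rough sketch of that kind of per-article filter, using the `langdetect` library that the referenced spaCy pipeline wraps (the helper name and sample texts are illustrative, not from the PR):

```python
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make langdetect deterministic across runs

def is_english(article_text: str) -> bool:
    # Rough filter in the spirit of the language check described in the PR.
    try:
        return detect(article_text) == "en"
    except Exception:
        # Very short or non-linguistic text can raise a detection error.
        return False

articles = ["Breaking news: markets rallied today.", "Noticias de última hora."]
print([is_english(a) for a in articles])  # likely [True, False]
```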
759,576,003
https://api.github.com/repos/huggingface/datasets/issues/1322
https://github.com/huggingface/datasets/pull/1322
1,322
add indonlu benchmark datasets
closed
0
2020-12-08T16:10:58
2020-12-13T02:11:27
2020-12-13T01:54:28
yasirabd
[]
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU.
true
759,573,610
https://api.github.com/repos/huggingface/datasets/issues/1321
https://github.com/huggingface/datasets/pull/1321
1,321
added dutch_social
closed
4
2020-12-08T16:07:54
2020-12-16T10:14:17
2020-12-16T10:14:17
skyprince999
[]
The Dutch social media tweets dataset, which has a total of more than 210k tweets in the Dutch language. These tweets have been machine-annotated with sentiment scores (`label` feature) and with `industry` and `hisco_codes`. It can be used for sentiment analysis, multi-label classification and entity tagging.
true
759,566,148
https://api.github.com/repos/huggingface/datasets/issues/1320
https://github.com/huggingface/datasets/pull/1320
1,320
Added the WikiText-TL39 dataset and corresponding card
closed
0
2020-12-08T16:00:26
2020-12-10T11:24:53
2020-12-10T11:24:53
jcblaisecruz02
[]
This PR adds the WikiText-TL-39 Filipino Language Modeling dataset. Restarted a new pull request since there were problems with the earlier one. Paper: https://arxiv.org/abs/1907.00409 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
true
759,565,923
https://api.github.com/repos/huggingface/datasets/issues/1319
https://github.com/huggingface/datasets/pull/1319
1,319
adding wili-2018 language identification dataset
closed
4
2020-12-08T16:00:09
2020-12-14T21:20:32
2020-12-14T21:20:32
Shubhambindal2017
[]
true
759,565,629
https://api.github.com/repos/huggingface/datasets/issues/1318
https://github.com/huggingface/datasets/pull/1318
1,318
ethos first commit
closed
3
2020-12-08T15:59:47
2020-12-10T14:45:57
2020-12-10T14:45:57
iamollas
[]
Ethos passed all the tests except for this one: `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<your-dataset-name>`, which fails with this error: `E OSError: Cannot find data file. E Original error: E [Errno 2] No such file or directory:`
true
759,553,495
https://api.github.com/repos/huggingface/datasets/issues/1317
https://github.com/huggingface/datasets/pull/1317
1,317
add 10k German News Article Dataset
closed
2
2020-12-08T15:44:25
2021-09-17T16:55:51
2020-12-16T16:50:43
stevhliu
[]
true
759,549,601
https://api.github.com/repos/huggingface/datasets/issues/1316
https://github.com/huggingface/datasets/pull/1316
1,316
Allow GitHub releases as dataset source
closed
0
2020-12-08T15:39:35
2020-12-10T10:12:00
2020-12-10T10:12:00
benjaminvdb
[]
# Summary

Providing a GitHub release URL to `DownloadManager.download()` currently throws a `ConnectionError: Couldn't reach [DOWNLOAD_URL]`. This PR fixes this problem by adding an exception for GitHub releases in `datasets.utils.file_utils.get_from_cache()`.

# Reproduce

```
import datasets
url = 'http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz'
result = datasets.utils.file_utils.get_from_cache(url)
# Returns: ConnectionError: Couldn't reach http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz
```

# Cause

GitHub releases return an HTTP 403 status, indicating that the request is being redirected (to AWS S3, in this case). `get_from_cache()` checks whether the status is 200 (OK) or whether the URL falls under one of two exceptions (Google Drive or Firebase); otherwise the mentioned error is thrown.

# Solution

Just like the exceptions for Google Drive and Firebase, add a condition for GitHub release URLs that return HTTP status 403. If this is the case, continue normally.
true
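PR #1316 above explains why `get_from_cache()` rejected GitHub release URLs: only HTTP 200 plus two special cases (Google Drive, Firebase) were accepted, while release assets answer with a redirect to object storage. A small standalone sketch of that idea, not the library's actual implementation (the helper names and regex are hypothetical):

```python
import re

import requests

GITHUB_RELEASE_RE = re.compile(r"^https?://github\.com/.+/releases/download/.+$")

def looks_reachable(url: str) -> bool:
    # Hypothetical stand-in for the check described in the PR: accept 200,
    # and additionally accept GitHub release URLs whose first response is a
    # redirect to object storage rather than a plain 200.
    response = requests.head(url, allow_redirects=False)
    if response.status_code == 200:
        return True
    return bool(GITHUB_RELEASE_RE.match(url))

print(looks_reachable("http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz"))
```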
759,548,706
https://api.github.com/repos/huggingface/datasets/issues/1315
https://github.com/huggingface/datasets/pull/1315
1,315
add yelp_review_full
closed
0
2020-12-08T15:38:27
2020-12-09T15:55:49
2020-12-09T15:55:49
hfawaz
[]
This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353. I included the dataset card.
true
759,541,937
https://api.github.com/repos/huggingface/datasets/issues/1314
https://github.com/huggingface/datasets/pull/1314
1,314
Add snips built in intents 2016 12
closed
3
2020-12-08T15:30:19
2020-12-14T09:59:07
2020-12-14T09:59:07
bduvenhage
[]
This PR proposes to add the Snips.ai built in intents dataset. The first configuration added is for the intent labels only, but the dataset includes entity slots that may in future be added as alternate configurations.
true
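The PR body for #1314 above mentions shipping an intent-labels-only configuration first and possibly adding entity-slot configurations later. A minimal sketch of how alternate configurations are usually declared in a `datasets` loading script (class, config names, and features here are illustrative, not the actual snips script):

```python
import datasets

class SnipsBuiltInIntents(datasets.GeneratorBasedBuilder):
    # Illustrative configurations: one with intent labels only, one that could
    # later expose the entity slots mentioned in the PR.
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="intents_only", description="Utterances with intent labels."),
        datasets.BuilderConfig(name="intents_and_slots", description="Utterances with intent labels and entity slots."),
    ]
    DEFAULT_CONFIG_NAME = "intents_only"

    def _info(self):
        features = {"text": datasets.Value("string"), "intent": datasets.Value("string")}
        if self.config.name == "intents_and_slots":
            features["slots"] = datasets.Sequence(datasets.Value("string"))
        return datasets.DatasetInfo(features=datasets.Features(features))
```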
759,536,512
https://api.github.com/repos/huggingface/datasets/issues/1313
https://github.com/huggingface/datasets/pull/1313
1,313
Add HateSpeech Corpus for Polish
closed
3
2020-12-08T15:23:53
2020-12-16T16:48:45
2020-12-16T16:48:45
kacperlukawski
[]
This PR adds a HateSpeech Corpus for Polish, containing offensive language examples.
- **Homepage:** http://zil.ipipan.waw.pl/HateSpeech
- **Paper:** http://www.qualitativesociologyreview.org/PL/Volume38/PSJ_13_2_Troszynski_Wawer.pdf
true
759,532,626
https://api.github.com/repos/huggingface/datasets/issues/1312
https://github.com/huggingface/datasets/pull/1312
1,312
Jigsaw toxicity pred
closed
0
2020-12-08T15:19:14
2020-12-11T12:11:32
2020-12-11T12:11:32
taihim
[]
Requires manually downloading data from Kaggle.
true
759,514,819
https://api.github.com/repos/huggingface/datasets/issues/1311
https://github.com/huggingface/datasets/pull/1311
1,311
Add OPUS Bible Corpus (102 Languages)
closed
1
2020-12-08T14:57:08
2020-12-09T15:30:57
2020-12-09T15:30:56
abhishekkrthakur
[]
true
759,508,921
https://api.github.com/repos/huggingface/datasets/issues/1310
https://github.com/huggingface/datasets/pull/1310
1,310
Add OffensEval-TR 2020 Dataset
closed
4
2020-12-08T14:49:51
2020-12-12T14:15:42
2020-12-09T16:02:06
yavuzKomecoglu
[]
This PR adds the OffensEval-TR 2020 dataset, a Turkish offensive language corpus by me and @basakbuluz. The corpus consists of randomly sampled tweets annotated in a similar way to [OffensEval](https://sites.google.com/site/offensevalsharedtask/) and [GermEval](https://projects.fzai.h-da.de/iggsa/).
- **Homepage:** [offensive-turkish](https://coltekin.github.io/offensive-turkish/)
- **Paper:** [A Corpus of Turkish Offensive Language on Social Media](https://coltekin.github.io/offensive-turkish/troff.pdf)
- **Point of Contact:** [Çağrı Çöltekin](ccoltekin@sfs.uni-tuebingen.de)
true
759,501,370
https://api.github.com/repos/huggingface/datasets/issues/1309
https://github.com/huggingface/datasets/pull/1309
1,309
Add SAMSum Corpus dataset
closed
5
2020-12-08T14:40:56
2020-12-14T12:32:33
2020-12-14T10:20:55
changjonathanc
[]
Did not spend much time writing the README, might update later. Copied the description and some other content from tensorflow_datasets: https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/samsum.py
true
759,492,953
https://api.github.com/repos/huggingface/datasets/issues/1308
https://github.com/huggingface/datasets/pull/1308
1,308
Add Wiki Lingua Dataset
closed
6
2020-12-08T14:30:13
2020-12-14T10:39:52
2020-12-14T10:39:52
katnoria
[]
Hello, This is my first PR. I have added Wiki Lingua Dataset along with dataset card to the best of my knowledge. There was one hiccup though. I was unable to create dummy data because the data is in pkl format. From the document, I see that: ```At the moment it supports data files in the following format: txt, csv, tsv, jsonl, json, xml```
true
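PR #1308 above notes that dummy data could not be generated because the source is in pkl format, while the tooling supports txt, csv, tsv, jsonl, json, and xml. A small sketch of one way around that, converting a pickled list of records into JSONL (file names are assumptions, not from the PR):

```python
import json
import pickle

# Assumed input: a pickle containing a list of dict records.
with open("wikilingua_sample.pkl", "rb") as f:
    records = pickle.load(f)

# Write a JSONL file that the dummy-data tooling can consume.
with open("dummy_data.jsonl", "w", encoding="utf-8") as out:
    for record in records:
        out.write(json.dumps(record, ensure_ascii=False) + "\n")
```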
759,458,835
https://api.github.com/repos/huggingface/datasets/issues/1307
https://github.com/huggingface/datasets/pull/1307
1,307
adding capes
closed
0
2020-12-08T13:46:13
2020-12-09T15:40:09
2020-12-09T15:27:45
patil-suraj
[]
Adding Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6
true
759,448,427
https://api.github.com/repos/huggingface/datasets/issues/1306
https://github.com/huggingface/datasets/pull/1306
1,306
add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC)
closed
1
2020-12-08T13:31:34
2020-12-10T09:53:54
2020-12-10T09:53:28
aseifert
[]
- **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC)
- **Description:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data
- **Paper:** https://www.aclweb.org/anthology/W19-4406/
- **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is used for benchmarking in this field of NLP.

### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template: fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
true
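The checklist in #1306 above (and in several later PRs) refers to the loading-script template and its `_info()` / `_split_generators()` / `_generate_examples()` methods. A minimal, generic sketch of that skeleton, with placeholder names, features, and URL rather than the actual wi_locness script:

```python
import datasets

_URL = "https://example.com/my_dataset.txt"  # placeholder, not a real data URL

class MyDataset(datasets.GeneratorBasedBuilder):
    """Generic skeleton of the dataset-script template referenced in the checklist."""

    def _info(self):
        return datasets.DatasetInfo(
            description="Placeholder description.",
            features=datasets.Features(
                {"id": datasets.Value("int32"), "text": datasets.Value("string")}
            ),
        )

    def _split_generators(self, dl_manager):
        # download_and_extract caches the file and returns a local path.
        path = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": path}),
        ]

    def _generate_examples(self, filepath):
        # Yield (key, example) pairs matching the features declared in _info.
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"id": idx, "text": line.strip()}
```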
759,446,665
https://api.github.com/repos/huggingface/datasets/issues/1305
https://github.com/huggingface/datasets/pull/1305
1,305
[README] Added Windows command to enable slow tests
closed
0
2020-12-08T13:29:04
2020-12-08T13:56:33
2020-12-08T13:56:32
TevenLeScao
[]
The Windows command to run slow tests has caused issues, so this adds a functional Windows command.
true
759,440,841
https://api.github.com/repos/huggingface/datasets/issues/1304
https://github.com/huggingface/datasets/pull/1304
1,304
adding eitb_parcc
closed
0
2020-12-08T13:20:54
2020-12-09T18:02:54
2020-12-09T18:02:03
patil-suraj
[]
Adding EiTB-ParCC: Parallel Corpus of Comparable News http://opus.nlpl.eu/EiTB-ParCC.php
true
759,440,484
https://api.github.com/repos/huggingface/datasets/issues/1303
https://github.com/huggingface/datasets/pull/1303
1,303
adding opus_openoffice
closed
0
2020-12-08T13:20:21
2020-12-10T09:37:10
2020-12-10T09:37:10
patil-suraj
[]
Adding Opus OpenOffice: http://opus.nlpl.eu/OpenOffice.php 8 languages, 28 bitexts
true
759,435,740
https://api.github.com/repos/huggingface/datasets/issues/1302
https://github.com/huggingface/datasets/pull/1302
1,302
Add Danish NER dataset
closed
0
2020-12-08T13:13:54
2020-12-10T09:35:26
2020-12-10T09:35:26
ophelielacroix
[]
true
759,419,945
https://api.github.com/repos/huggingface/datasets/issues/1301
https://github.com/huggingface/datasets/pull/1301
1,301
arxiv dataset added
closed
2
2020-12-08T12:50:51
2020-12-09T18:05:16
2020-12-09T18:05:16
tanmoyio
[]
**adding arXiv dataset**: arXiv dataset with metadata of 1.7M+ scholarly papers across STEM. Dataset link: https://www.kaggle.com/Cornell-University/arxiv
true
759,418,122
https://api.github.com/repos/huggingface/datasets/issues/1300
https://github.com/huggingface/datasets/pull/1300
1,300
added dutch_social
closed
1
2020-12-08T12:47:50
2020-12-08T16:09:05
2020-12-08T16:09:05
skyprince999
[]
WIP, as some tests did not pass. 👎🏼
true
759,414,566
https://api.github.com/repos/huggingface/datasets/issues/1299
https://github.com/huggingface/datasets/issues/1299
1,299
can't load "german_legal_entity_recognition" dataset
closed
3
2020-12-08T12:42:01
2020-12-16T16:03:13
2020-12-16T16:03:13
nataly-obr
[]
FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py
false
759,412,451
https://api.github.com/repos/huggingface/datasets/issues/1298
https://github.com/huggingface/datasets/pull/1298
1,298
Add OPUS Ted Talks 2013
closed
1
2020-12-08T12:38:38
2020-12-16T16:57:50
2020-12-16T16:57:49
abhishekkrthakur
[]
true
759,404,103
https://api.github.com/repos/huggingface/datasets/issues/1297
https://github.com/huggingface/datasets/pull/1297
1,297
OPUS Ted Talks 2013
closed
0
2020-12-08T12:25:39
2023-09-24T09:51:49
2020-12-08T12:35:50
abhishekkrthakur
[]
true
759,375,292
https://api.github.com/repos/huggingface/datasets/issues/1296
https://github.com/huggingface/datasets/pull/1296
1,296
The Snips Built In Intents 2016 dataset.
closed
2
2020-12-08T11:40:10
2020-12-08T15:27:52
2020-12-08T15:27:52
bduvenhage
[]
This PR proposes to add the Snips.ai built in intents dataset. The first configuration added is for the intent labels only, but the dataset includes entity slots that may in future be added as alternate configurations.
true
759,375,251
https://api.github.com/repos/huggingface/datasets/issues/1295
https://github.com/huggingface/datasets/pull/1295
1,295
add hrenwac_para
closed
0
2020-12-08T11:40:06
2020-12-11T17:42:20
2020-12-11T17:42:20
IvanZidov
[]
true
759,365,246
https://api.github.com/repos/huggingface/datasets/issues/1294
https://github.com/huggingface/datasets/pull/1294
1,294
adding opus_euconst
closed
0
2020-12-08T11:24:16
2020-12-08T18:44:20
2020-12-08T18:41:23
patil-suraj
[]
Adding EUconst, a parallel corpus collected from the European Constitution. 21 languages, 210 bitexts
true
759,360,113
https://api.github.com/repos/huggingface/datasets/issues/1293
https://github.com/huggingface/datasets/pull/1293
1,293
add hrenwac_para
closed
0
2020-12-08T11:16:41
2020-12-08T11:34:47
2020-12-08T11:34:38
ivan-zidov
[]
true
759,354,627
https://api.github.com/repos/huggingface/datasets/issues/1292
https://github.com/huggingface/datasets/pull/1292
1,292
arXiv dataset added
closed
0
2020-12-08T11:08:28
2020-12-08T14:02:13
2020-12-08T14:02:13
tanmoyio
[]
true
759,352,810
https://api.github.com/repos/huggingface/datasets/issues/1291
https://github.com/huggingface/datasets/pull/1291
1,291
adding pubmed_qa dataset
closed
0
2020-12-08T11:05:44
2020-12-09T08:54:50
2020-12-09T08:54:50
tuner007
[]
PubMed QA dataset: PQA-L(abeled) 1k, PQA-U(nlabeled) 61.2k, PQA-A(rtificially labeled) 211.3k
true
759,339,989
https://api.github.com/repos/huggingface/datasets/issues/1290
https://github.com/huggingface/datasets/issues/1290
1,290
imdb dataset cannot be downloaded
closed
3
2020-12-08T10:47:36
2020-12-24T17:38:09
2020-12-24T17:38:09
rabeehk
[]
Hi, please find the error below when getting the imdb train split. Thanks.

```python
>>> datasets.load_dataset("imdb", split="train")
```

errors:

```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3...
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=7486451, num_examples=5628, dataset_name='imdb')}]
```
false
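The `NonMatchingSplitsSizesError` in #1290 above reports far fewer recorded examples than expected, which usually points at a truncated or corrupted download. A hedged workaround sketch (a common remedy, not a confirmed fix for that specific report) is to force a fresh download instead of reusing the cache:

```python
import datasets

# Re-download instead of reusing a possibly truncated cached archive; the split
# size verification that raised NonMatchingSplitsSizesError then runs again
# against freshly downloaded data.
ds = datasets.load_dataset(
    "imdb",
    split="train",
    download_mode="force_redownload",
)
print(ds)
```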
759,333,684
https://api.github.com/repos/huggingface/datasets/issues/1289
https://github.com/huggingface/datasets/pull/1289
1,289
Jigsaw toxicity classification dataset added
closed
0
2020-12-08T10:38:51
2020-12-08T15:17:48
2020-12-08T15:17:48
taihim
[]
The dataset requires manually downloading data from Kaggle.
true
759,309,457
https://api.github.com/repos/huggingface/datasets/issues/1288
https://github.com/huggingface/datasets/pull/1288
1,288
Add CodeSearchNet corpus dataset
closed
1
2020-12-08T10:07:50
2020-12-09T17:05:28
2020-12-09T17:05:28
SBrandeis
[]
This PR adds the CodeSearchNet corpus proxy dataset for semantic code search: https://github.com/github/CodeSearchNet

I have had a few issues, mentioned below. Would appreciate some help on how to solve them.

## Issues generating dataset card

Is there something wrong with my declaration of the dataset features?

```
features=datasets.Features(
    {
        "repository_name": datasets.Value("string"),
        "func_path_in_repository": datasets.Value("string"),
        "func_name": datasets.Value("string"),
        "whole_func_string": datasets.Value("string"),
        "language": datasets.Value("string"),
        "func_code_string": datasets.Value("string"),
        "func_code_tokens": datasets.Sequence(datasets.Value("string")),
        "func_documentation_string": datasets.Value("string"),
        "func_documentation_tokens": datasets.Sequence(datasets.Value("string")),
        "split_name": datasets.Value("string"),
        "func_code_url": datasets.Value("string"),
        # TODO - add licensing info in the examples
    }
),
```

When running the streamlit app for tagging the dataset on my machine, I get the following error:

![image](https://user-images.githubusercontent.com/33657802/101469132-9ed12c80-3944-11eb-94ff-2d9c1d0ea080.png)

## Issues with dummy data

Due to the unusual structure of the data, I have been unable to generate dummy data automatically. I tried to generate it manually, but pytests fail when using the manually-generated dummy data! Pytests work fine when using the real data.

```
========================== test session starts ==========================
platform linux -- Python 3.7.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
plugins: xdist-2.1.0, forked-1.3.0
collected 1 item

tests/test_dataset_common.py F                                     [100%]

=============================== FAILURES ================================
_____ LocalDatasetTest.test_load_dataset_all_configs_code_search_net ____

self = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_code_search_net>, dataset_name = 'code_search_net'

    @slow
    def test_load_dataset_all_configs(self, dataset_name):
        configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)
>       self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)

tests/test_dataset_common.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_dataset_common.py:198: in check_load_dataset
    self.parent.assertTrue(len(dataset[split]) > 0)
E   AssertionError: False is not true
------------------------- Captured stdout call --------------------------
Downloading and preparing dataset code_search_net/all (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to /tmp/tmppx78sj24/code_search_net/all/1.0.0...
Dataset code_search_net downloaded and prepared to /tmp/tmppx78sj24/code_search_net/all/1.0.0. Subsequent calls will reuse this data.
------------------------- Captured stderr call --------------------------
... (irrelevant info - Deprecation warnings)
======================== short test summary info ========================
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_code_search_net - AssertionError: False is not true
==================== 1 failed, 4 warnings in 3.00s ======================
```

## Note: Data structure in S3

The data is stored on S3, and organized by programming languages. It is stored in the following repository structure:

```
.
├── <language_name>  # e.g. python
│   └── final
│       └── jsonl
│           ├── test
│           │   └── <language_name>_test_0.jsonl.gz
│           ├── train
│           │   ├── <language_name>_train_0.jsonl.gz
│           │   ├── <language_name>_train_1.jsonl.gz
│           │   ├── ...
│           │   └── <language_name>_train_n.jsonl.gz
│           └── valid
│               └── <language_name>_valid_0.jsonl.gz
├── <language_name>_dedupe_definitions_v2.pkl
└── <language_name>_licenses.pkl
```
true
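The S3 layout in #1288 above stores each split as `<language_name>_<split>_<n>.jsonl.gz` shards. A small sketch of reading one such shard outside the loading script; the raw field names (`repo`, `func_name`, `code`, `docstring`, `language`) are assumptions about the CodeSearchNet JSONL records, not taken from this PR:

```python
import gzip
import json

def iter_shard(path):
    # e.g. path = "python/final/jsonl/train/python_train_0.jsonl.gz"
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            # Assumed raw keys, mapped onto the features declared in the PR.
            yield {
                "repository_name": record.get("repo", ""),
                "func_name": record.get("func_name", ""),
                "language": record.get("language", ""),
                "func_code_string": record.get("code", ""),
                "func_documentation_string": record.get("docstring", ""),
            }

# Usage sketch (assumes the shard exists locally):
for example in iter_shard("python/final/jsonl/train/python_train_0.jsonl.gz"):
    print(example["func_name"])
    break
```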
759,300,992
https://api.github.com/repos/huggingface/datasets/issues/1287
https://github.com/huggingface/datasets/issues/1287
1,287
'iwslt2017-ro-nl', cannot be downloaded
closed
4
2020-12-08T09:56:55
2022-06-13T10:41:33
2022-06-13T10:41:33
rabeehk
[ "dataset bug" ]
Hi, I am trying `>>> datasets.load_dataset("iwslt2017", 'iwslt2017-ro-nl', split="train")` and getting this error. Thank you for your help.

```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset iwsl_t217/iwslt2017-ro-nl (download: 314.07 MiB, generated: 39.92 MiB, post-processed: Unknown size, total: 354.00 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/iwsl_t217/iwslt2017-ro-nl/1.0.0/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd...
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/iwslt2017/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd/iwslt2017.py", line 118, in _split_generators
    dl_dir = dl_manager.download_and_extract(MULTI_URL)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download
    num_proc=download_config.num_proc,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 216, in map_nested
    return function(data_struct)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
    use_etag=download_config.use_etag,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz
```
false
759,291,509
https://api.github.com/repos/huggingface/datasets/issues/1286
https://github.com/huggingface/datasets/issues/1286
1,286
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
closed
6
2020-12-08T09:44:15
2020-12-12T19:36:22
2020-12-12T16:22:36
rabeehk
[]
Hi, I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of the huggingface repo. Thanks for your help.

```
{'epoch': 20.0}
100%|██████████████████████████████████████████████| 20/20 [00:16<00:00, 1.22it/s]
12/08/2020 10:41:19 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/experiment/joint/finetune/lr-2e-5
12/08/2020 10:41:24 - INFO - __main__ - {'wmt16-en-ro': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1998), 'qnli': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 5462), 'scitail': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1303)}
12/08/2020 10:41:24 - INFO - __main__ - *** Evaluate ***
12/08/2020 10:41:24 - INFO - seq2seq.utils.utils - using task specific params for wmt16-en-ro: {'max_length': 300, 'num_beams': 4}
12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation *****
12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Num examples = 1998
12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Batch size = 64
100%|██████████████████████████████████████████████| 32/32 [00:37<00:00, 1.19s/it]
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  CHECK failed: (index) >= (0):
Aborted
```
false
759,278,758
https://api.github.com/repos/huggingface/datasets/issues/1285
https://github.com/huggingface/datasets/issues/1285
1,285
boolq does not work
closed
3
2020-12-08T09:28:47
2020-12-08T09:47:10
2020-12-08T09:47:10
rabeehk
[]
Hi, I am getting this error when trying to load boolq. Thanks for your help.

```
ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock
Traceback (most recent call last):
  File "finetune_t5_trainer.py", line 274, in <module>
    main()
  File "finetune_t5_trainer.py", line 147, in main
    for task in data_args.tasks]
  File "finetune_t5_trainer.py", line 147, in <listcomp>
    for task in data_args.tasks]
  File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 58, in get_dataset
    dataset = self.load_dataset(split=split)
  File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 54, in load_dataset
    return datasets.load_dataset(self.task.name, split=split)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
    downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom
    custom_download(url, path)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2
    compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)
tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists
```
false
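The traceback in #1285 above ends in an `AlreadyExistsError` because `tf.io.gfile.copy` refuses to overwrite an existing destination when passed directly to `dl_manager.download_custom`. A small sketch of a tolerant wrapper (a workaround idea, not the boolq script's actual fix; the function name is illustrative):

```python
import tensorflow as tf

def copy_overwriting(src, dst):
    # Drop-in replacement for passing tf.io.gfile.copy directly to
    # dl_manager.download_custom: overwrite an existing destination instead
    # of raising AlreadyExistsError on a second run.
    tf.io.gfile.copy(src, dst, overwrite=True)

# downloaded_files = dl_manager.download_custom(urls_to_download, copy_overwriting)
```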
759,269,920
https://api.github.com/repos/huggingface/datasets/issues/1284
https://github.com/huggingface/datasets/pull/1284
1,284
Update coqa dataset url
closed
0
2020-12-08T09:16:38
2020-12-08T18:19:09
2020-12-08T18:19:09
ojasaar
[]
`datasets.stanford.edu` is invalid.
true
759,251,457
https://api.github.com/repos/huggingface/datasets/issues/1283
https://github.com/huggingface/datasets/pull/1283
1,283
Add dutch book review dataset
closed
1
2020-12-08T08:50:48
2020-12-09T20:21:58
2020-12-09T17:25:25
benjaminvdb
[]
- Name: Dutch Book Review Dataset (DBRD)
- Description: The DBRD (pronounced dee-bird) dataset contains over 110k book reviews along with associated binary sentiment polarity labels and is intended as a benchmark for sentiment classification in Dutch.
- Paper: https://arxiv.org/abs/1910.00896
- Data: https://github.com/benjaminvdb/DBRD
- Motivation: A large (real-life) dataset of Dutch book reviews and sentiment polarity (positive/negative), based on the associated rating.

Checks
- [x] Create the dataset script /datasets/dbrd/dbrd.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _info(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template: fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
true
759,208,335
https://api.github.com/repos/huggingface/datasets/issues/1282
https://github.com/huggingface/datasets/pull/1282
1,282
add thaiqa_squad
closed
0
2020-12-08T08:14:38
2020-12-08T18:36:18
2020-12-08T18:36:18
cstorm125
[]
The example format is a little different from SQuAD since `thaiqa` always has one answer per question, so I added a check that converts answers to lists if they are not already one, to future-proof additional questions that might have multiple answers. `thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format by [PyThaiNLP](https://github.com/PyThaiNLP/).
true
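PR #1282 above mentions a check that converts single answers to lists to keep the SQuAD-style schema future-proof. A tiny sketch of that kind of normalization (the function name and sample values are illustrative):

```python
def as_answer_list(answer):
    # thaiqa has exactly one answer per question; SQuAD-format consumers expect
    # a list, so wrap single answers while passing existing lists through.
    return answer if isinstance(answer, list) else [answer]

print(as_answer_list({"text": "example answer", "answer_start": 42}))
print(as_answer_list([{"text": "a", "answer_start": 0}]))
```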
759,203,317
https://api.github.com/repos/huggingface/datasets/issues/1281
https://github.com/huggingface/datasets/pull/1281
1,281
adding hybrid_qa
closed
0
2020-12-08T08:10:19
2020-12-08T18:09:28
2020-12-08T18:07:00
patil-suraj
[]
Adding HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data https://github.com/wenhuchen/HybridQA
true
759,151,028
https://api.github.com/repos/huggingface/datasets/issues/1280
https://github.com/huggingface/datasets/pull/1280
1,280
disaster response messages dataset
closed
2
2020-12-08T07:27:16
2020-12-09T16:21:57
2020-12-09T16:21:57
darshan-gandhi
[]
true
759,108,726
https://api.github.com/repos/huggingface/datasets/issues/1279
https://github.com/huggingface/datasets/pull/1279
1,279
added para_pat
closed
2
2020-12-08T06:28:47
2020-12-14T13:41:17
2020-12-14T13:41:17
bhavitvyamalik
[]
Dataset link: https://figshare.com/articles/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632 (currently working on the README.md)
true
758,988,465
https://api.github.com/repos/huggingface/datasets/issues/1278
https://github.com/huggingface/datasets/pull/1278
1,278
Craigslist bargains
closed
2
2020-12-08T01:45:55
2020-12-09T00:46:15
2020-12-09T00:46:15
ZacharySBrown
[]
`craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/)
true
758,965,936
https://api.github.com/repos/huggingface/datasets/issues/1276
https://github.com/huggingface/datasets/pull/1276
1,276
add One Million Posts Corpus
closed
1
2020-12-08T00:50:08
2020-12-11T18:28:18
2020-12-11T18:28:18
aseifert
[]
- **Name:** One Million Posts Corpus
- **Description:** The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language).
- **Paper:** https://dl.acm.org/doi/10.1145/3077136.3080711
- **Data:** https://github.com/OFAI/million-post-corpus
- **Motivation:** Big German (real-life) dataset containing different annotations around forum moderation with expert annotations.

### Checkbox
- [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [X] Fill the `_DESCRIPTION` and `_CITATION` variables
- [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [X] Generate the metadata file `dataset_infos.json` for all configurations
- [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [X] Add the dataset card `README.md` using the template: fill the tags and the various paragraphs
- [X] Both tests for the real data and the dummy data pass.
true
758,958,066
https://api.github.com/repos/huggingface/datasets/issues/1275
https://github.com/huggingface/datasets/pull/1275
1,275
Yoruba GV NER added
closed
1
2020-12-08T00:31:38
2020-12-08T23:25:28
2020-12-08T23:25:28
dadelani
[]
I just added Yoruba GV NER dataset from this paper https://www.aclweb.org/anthology/2020.lrec-1.335/
true
758,943,174
https://api.github.com/repos/huggingface/datasets/issues/1274
https://github.com/huggingface/datasets/pull/1274
1,274
oclar-dataset
closed
1
2020-12-07T23:56:45
2020-12-09T15:36:08
2020-12-09T15:36:08
alaameloh
[]
The Opinion Corpus for Lebanese Arabic Reviews (OCLAR) can be used for Arabic sentiment classification on reviews of hotels, restaurants, shops, and others: [homepage](http://archive.ics.uci.edu/ml/datasets/Opinion+Corpus+for+Lebanese+Arabic+Reviews+%28OCLAR%29#)
true
758,935,768
https://api.github.com/repos/huggingface/datasets/issues/1273
https://github.com/huggingface/datasets/pull/1273
1,273
Created wiki_movies dataset.
closed
5
2020-12-07T23:38:54
2020-12-14T13:56:49
2020-12-14T13:56:49
aclifton314
[]
First PR (ever). Hopefully this movies dataset is useful to others!
true
758,924,960
https://api.github.com/repos/huggingface/datasets/issues/1272
https://github.com/huggingface/datasets/pull/1272
1,272
Psc
closed
0
2020-12-07T23:19:36
2020-12-07T23:48:05
2020-12-07T23:47:48
abecadel
[]
true
758,924,203
https://api.github.com/repos/huggingface/datasets/issues/1271
https://github.com/huggingface/datasets/pull/1271
1,271
SMS Spam Dataset
closed
0
2020-12-07T23:18:06
2020-12-08T17:42:19
2020-12-08T17:42:19
czabo
[]
Hi :) I added this [SMS Spam Dataset](http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection)
true
758,917,216
https://api.github.com/repos/huggingface/datasets/issues/1270
https://github.com/huggingface/datasets/pull/1270
1,270
add DFKI SmartData Corpus
closed
0
2020-12-07T23:03:48
2020-12-08T17:41:23
2020-12-08T17:41:23
aseifert
[]
- **Name:** DFKI SmartData Corpus
- **Description:** DFKI SmartData Corpus is a dataset of 2598 German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types.
- **Paper:** https://www.dfki.de/fileadmin/user_upload/import/9427_lrec_smartdata_corpus.pdf
- **Data:** https://github.com/DFKI-NLP/smartdata-corpus
- **Motivation:** Contains fine-grained NER labels for German.

### Checkbox
- [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [X] Fill the `_DESCRIPTION` and `_CITATION` variables
- [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [X] Generate the metadata file `dataset_infos.json` for all configurations
- [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [X] Add the dataset card `README.md` using the template: fill the tags and the various paragraphs
- [X] Both tests for the real data and the dummy data pass.
true
758,886,174
https://api.github.com/repos/huggingface/datasets/issues/1269
https://github.com/huggingface/datasets/pull/1269
1,269
Adding OneStopEnglish corpus dataset
closed
1
2020-12-07T22:05:11
2020-12-09T18:43:38
2020-12-09T15:33:53
purvimisal
[]
This PR adds the OneStopEnglish Corpus containing texts classified into reading levels (elementary, intermediate, advanced) for automatic readability assessment and text simplification. Link to the paper: https://www.aclweb.org/anthology/W18-0535.pdf
true
758,871,252
https://api.github.com/repos/huggingface/datasets/issues/1268
https://github.com/huggingface/datasets/pull/1268
1,268
new pr for Turkish NER
closed
3
2020-12-07T21:40:26
2020-12-09T13:45:05
2020-12-09T13:45:05
merveenoyan
[]
true
758,826,568
https://api.github.com/repos/huggingface/datasets/issues/1267
https://github.com/huggingface/datasets/pull/1267
1,267
Has part
closed
1
2020-12-07T20:32:03
2020-12-11T18:25:42
2020-12-11T18:25:42
jeromeku
[]
true
758,704,178
https://api.github.com/repos/huggingface/datasets/issues/1266
https://github.com/huggingface/datasets/pull/1266
1,266
removing unzipped hansards dummy data
closed
0
2020-12-07T17:31:16
2020-12-07T17:32:29
2020-12-07T17:32:29
yjernite
[]
which were added by mistake
true
758,687,223
https://api.github.com/repos/huggingface/datasets/issues/1265
https://github.com/huggingface/datasets/pull/1265
1,265
Add CovidQA dataset
closed
3
2020-12-07T17:06:51
2020-12-08T17:02:26
2020-12-08T17:02:26
olinguyen
[]
This PR adds CovidQA, a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle’s COVID-19 Open Research Dataset Challenge. Link to the paper: https://arxiv.org/pdf/2004.11339.pdf Link to the homepage: https://covidqa.ai
true
758,686,474
https://api.github.com/repos/huggingface/datasets/issues/1264
https://github.com/huggingface/datasets/pull/1264
1,264
enriched webnlg dataset rebase
closed
1
2020-12-07T17:05:45
2020-12-09T17:00:29
2020-12-09T17:00:27
TevenLeScao
[]
Rebase of #1206 !
true
758,663,787
https://api.github.com/repos/huggingface/datasets/issues/1263
https://github.com/huggingface/datasets/pull/1263
1,263
Added kannada news headlines classification dataset.
closed
1
2020-12-07T16:35:37
2020-12-10T14:30:55
2020-12-09T18:01:31
vrindaprabhu
[]
Manual download of a Kaggle dataset. Mostly followed the same process as ms_terms.
true
758,637,124
https://api.github.com/repos/huggingface/datasets/issues/1262
https://github.com/huggingface/datasets/pull/1262
1,262
Adding msr_genomics_kbcomp dataset
closed
0
2020-12-07T16:01:30
2020-12-08T18:08:55
2020-12-08T18:08:47
manandey
[]
true
758,626,112
https://api.github.com/repos/huggingface/datasets/issues/1261
https://github.com/huggingface/datasets/pull/1261
1,261
Add Google Sentence Compression dataset
closed
0
2020-12-07T15:47:43
2020-12-08T17:01:59
2020-12-08T17:01:59
mattbui
[]
For more information: https://www.aclweb.org/anthology/D13-1155.pdf
true