Column schema (dtype and observed range):

| column | dtype | min / shortest | max / longest |
|---|---|---|---|
| id | int64 | 599M | 3.29B |
| url | string | 58 chars | 61 chars |
| html_url | string | 46 chars | 51 chars |
| number | int64 | 1 | 7.72k |
| title | string | 1 char | 290 chars |
| state | string | 2 values (open, closed) | |
| comments | int64 | 0 | 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login | string | 3 chars | 26 chars |
| labels | list | 0 items | 4 items |
| body | string | 0 chars | 228k chars |
| is_pull_request | bool | 2 values (true, false) | |

In every record below, `url` is `https://api.github.com/repos/huggingface/datasets/issues/{number}` and `html_url` is `https://github.com/huggingface/datasets/pull/{number}` (or `/issues/{number}` for plain issues), so each record lists only the remaining fields; empty bodies and empty label lists are omitted.
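The records are easiest to explore programmatically. A minimal sketch, assuming the rows have been exported to a local JSON Lines file (the file name is hypothetical):

```python
from datasets import load_dataset

# Hypothetical local export of the records below.
ds = load_dataset("json", data_files="datasets_issues.jsonl", split="train")

# For example, keep only plain issues, dropping pull requests:
issues = ds.filter(lambda row: not row["is_pull_request"])
print(len(issues), issues[0]["title"])
```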
- #1561 Lama (PR by huu4ontocord, closed, 6 comments, id 765,831,436)
  created 2020-12-14T03:27:10 · updated 2020-12-28T09:51:47 · closed 2020-12-28T09:51:47
  This is the LAMA dataset for probing facts and common sense from language models. See https://github.com/facebookresearch/LAMA for more details.
- #1560 Adding the BrWaC dataset (PR by jonatasgrosman, closed, 0 comments, id 765,814,964)
  created 2020-12-14T03:03:56 · updated 2020-12-18T15:56:56 · closed 2020-12-18T15:56:55
  Adding the BrWaC dataset, a large corpus of Portuguese language texts.
- #1559 adding dataset card information to CONTRIBUTING.md (PR by yjernite, closed, 0 comments, id 765,714,183)
  created 2020-12-14T00:08:43 · updated 2020-12-14T17:55:03 · closed 2020-12-14T17:55:03
  Added a documentation line and link to the full sprint guide in the "How to add a dataset" section, a section on how to contribute to the dataset card of an existing dataset, and a thank-you note at the end :hugs:
- #1558 Adding Igbo NER data (PR by purvimisal, closed, 3 comments, id 765,707,907)
  created 2020-12-13T23:52:11 · updated 2020-12-21T14:38:20 · closed 2020-12-21T14:38:20
  This PR adds the Igbo NER dataset. Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_ner
- #1557 HindEncorp again committed (PR by rahul-art, closed, 7 comments, id 765,693,927)
  created 2020-12-13T23:09:02 · updated 2020-12-15T10:37:05 · closed 2020-12-15T10:37:04
- #1556 add bswac (PR by IvanZidov, closed, 1 comment, id 765,689,730)
  created 2020-12-13T22:55:35 · updated 2020-12-18T15:14:28 · closed 2020-12-18T15:14:27
- #1555 Added Opus TedTalks (PR by rkc007, closed, 2 comments, id 765,681,607)
  created 2020-12-13T22:29:33 · updated 2020-12-18T09:44:43 · closed 2020-12-18T09:44:43
  Dataset: http://opus.nlpl.eu/TedTalks.php
- #1554 Opus CAPES added (PR by rkc007, closed, 3 comments, id 765,675,148)
  created 2020-12-13T22:11:34 · updated 2020-12-18T09:54:57 · closed 2020-12-18T08:46:59
  Dataset: http://opus.nlpl.eu/CAPES.php
- #1553 added air_dialogue (PR by skyprince999, closed, 0 comments, id 765,670,083)
  created 2020-12-13T21:59:02 · updated 2020-12-23T11:20:40 · closed 2020-12-23T11:20:39
  UPDATE2 (3797ce5): updated for multi-configs. UPDATE (7018082): manually created the dummy datasets; all tests cleared locally; pushed to origin/master. DRAFT VERSION (57fdb20, no longer a draft): uploaded the air_dialogue database. dummy_data creation was failing locally, since the original downloaded file has some nested folders. Pushing it since the tests with real data were cleared; will re-check and update by manually creating some dummy data.
- #1552 Added OPUS ParaCrawl (PR by rkc007, closed, 6 comments, id 765,664,411)
  created 2020-12-13T21:44:29 · updated 2020-12-21T09:50:26 · closed 2020-12-21T09:50:25
  Dataset: http://opus.nlpl.eu/ParaCrawl.php
- #1551 Monero (PR by iliemihai, closed, 3 comments, id 765,621,879, labels: "dataset contribution")
  created 2020-12-13T19:56:48 · updated 2022-10-03T09:38:35 · closed 2022-10-03T09:38:35
  Biomedical Romanian dataset :)
- #1550 Add offensive language dravidian dataset (PR by jamespaultg, closed, 1 comment, id 765,620,925)
  created 2020-12-13T19:54:19 · updated 2020-12-18T15:52:49 · closed 2020-12-18T14:25:30
- #1549 Generics kb new branch (PR by bpatidar, closed, 0 comments, id 765,612,905)
  created 2020-12-13T19:33:10 · updated 2020-12-21T13:55:09 · closed 2020-12-21T13:55:09
  The dataset needs manual downloads, so dummy data has been created as well, but pytest on both real and dummy data is failing. I have completed the README, tags and other required things; I need to create the metadata JSON once the tests succeed. Opening a PR while working with Yacine Jernite to resolve my pytest issues.
- #1548 Fix `🤗Datasets` - `tfds` differences link + a few aesthetics (PR by VIVelev, closed, 0 comments, id 765,592,336)
  created 2020-12-13T18:48:21 · updated 2020-12-15T12:55:27 · closed 2020-12-15T12:55:27
- #1547 Adding PolEval2019 Machine Translation Task dataset (PR by vrindaprabhu, closed, 6 comments, id 765,562,792)
  created 2020-12-13T17:50:03 · updated 2023-04-03T09:20:23 · closed 2020-12-21T16:13:21
  Facing an error with pytest in training. Dummy data is passing. README has to be updated.
- #1546 Add persian ner dataset (PR by KMFODA, closed, 3 comments, id 765,559,923)
  created 2020-12-13T17:45:48 · updated 2020-12-23T09:53:03 · closed 2020-12-23T09:53:03
  Adding the following dataset: https://github.com/HaniehP/PersianNER
- #1545 add hrwac (PR by IvanZidov, closed, 1 comment, id 765,550,283)
  created 2020-12-13T17:31:54 · updated 2020-12-18T13:35:17 · closed 2020-12-18T13:35:17
- #1544 Added Wiki Summary Dataset (PR by tanmoyio, closed, 18 comments, id 765,514,828)
  created 2020-12-13T16:33:46 · updated 2020-12-18T16:20:06 · closed 2020-12-18T16:17:18
  Wiki Summary: a dataset extracted from Persian Wikipedia in the form of articles and highlights. Link: https://github.com/m3hrdadfi/wiki-summary
- #1543 adding HindEncorp (PR by rahul-art, closed, 3 comments, id 765,476,196)
  created 2020-12-13T15:39:07 · updated 2020-12-13T23:35:53 · closed 2020-12-13T23:35:53
  Adding a Hindi Wikipedia corpus.
- #1542 fix typo readme (PR by clmnt, closed, 0 comments, id 765,439,746)
  created 2020-12-13T14:41:22 · updated 2020-12-13T17:16:41 · closed 2020-12-13T17:16:40
- #1541 connection issue while downloading data (issue by rabeehkarimimahabadi, closed, 2 comments, id 765,430,586)
  created 2020-12-13T14:27:00 · updated 2022-10-05T12:33:29 · closed 2022-10-05T12:33:29
  Hi, I am running my code on Google Cloud and I am getting this error when trying to download the data, resulting in the failure of the run. Could you assist me in solving this? Also, as a temporary solution, could you tell me how I can increase the number of retries and the timeout, to at least let the models run for now? Thanks

  ```
  Traceback (most recent call last):
    File "finetune_t5_trainer.py", line 361, in <module>
      main()
    File "finetune_t5_trainer.py", line 269, in main
      add_prefix=False if training_args.train_adapters else True)
    File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset
      dataset = self.load_dataset(split=split)
    File "/workdir/seq2seq/data/tasks.py", line 306, in load_dataset
      return datasets.load_dataset('glue', 'cola', split=split)
    File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset
      path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
    File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 263, in prepare_module
      head_hf_s3(path, filename=name, dataset=dataset)
    File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3
      return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset))
    File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 403, in http_head
      url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout
    File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 104, in head
      return request('head', url, **kwargs)
    File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 61, in request
      return session.request(method=method, url=url, **kwargs)
    File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 542, in request
      resp = self.send(prep, **send_kwargs)
    File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 655, in send
      r = adapter.send(request, **kwargs)
    File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 504, in send
      raise ConnectTimeout(e, request=request)
  requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))
  ```
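For the retry question in #1541: a hedged sketch of one way to raise download retries, assuming a datasets version whose `DownloadConfig` exposes `max_retries` (the 2020 release in the traceback may predate it):

```python
from datasets import DownloadConfig, load_dataset

# Assumption: a datasets version where DownloadConfig accepts `max_retries`.
download_config = DownloadConfig(max_retries=5)
dataset = load_dataset("glue", "cola", split="train", download_config=download_config)
```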
- #1540 added TTC4900: A Benchmark Data for Turkish Text Categorization dataset (PR by yavuzKomecoglu, closed, 7 comments, id 765,357,702)
  created 2020-12-13T12:43:33 · updated 2020-12-18T10:09:01 · closed 2020-12-18T10:09:01
  This PR adds the TTC4900 dataset, a Turkish text categorization dataset, by me and @basakbuluz. Homepage: https://www.kaggle.com/savasy/ttc4900 · Point of contact: [Savaş Yıldırım](mailto:savasy@gmail.com) / @savasy
- #1539 Added Wiki Asp dataset (PR by katnoria, closed, 3 comments, id 765,338,910)
  created 2020-12-13T12:18:34 · updated 2020-12-22T10:16:01 · closed 2020-12-22T10:16:01
  Hello, I have added the Wiki Asp dataset. Please review the PR.
- #1538 tweets_hate_speech_detection (PR by darshan-gandhi, closed, 3 comments, id 765,139,739)
  created 2020-12-13T07:37:53 · updated 2020-12-21T15:54:28 · closed 2020-12-21T15:54:27
- #1537 added ohsumed (PR by skyprince999, closed, 0 comments, id 765,095,210)
  created 2020-12-13T06:58:23 · updated 2020-12-17T18:28:16 · closed 2020-12-17T18:28:16
  UPDATE2: PR passed all tests; now waiting for review. UPDATE: pushed a new version, fingers crossed that it clears all the tests! :) If it passes all tests then it is no longer a draft. This is a draft version.
- #1536 Add Hippocorpus Dataset (PR by manandey, closed, 2 comments, id 765,043,121)
  created 2020-12-13T06:13:02 · updated 2020-12-15T13:41:17 · closed 2020-12-15T13:40:11
- #1535 Adding Igbo monolingual dataset (PR by purvimisal, closed, 1 comment, id 764,977,542)
  created 2020-12-13T05:16:37 · updated 2020-12-21T14:39:49 · closed 2020-12-21T14:39:49
  This PR adds the Igbo Monolingual dataset. Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_monoling · Paper: https://arxiv.org/abs/2004.00648
- #1534 adding dataset for diplomacy detection (PR by MisbahKhan789, closed, 1 comment, id 764,934,681)
  created 2020-12-13T04:38:43 · updated 2020-12-15T19:52:52 · closed 2020-12-15T19:52:25
- #1533 add id_panl_bppt, a parallel corpus for en-id (PR by cahya-wirawan, closed, 2 comments, id 764,835,913)
  created 2020-12-13T03:11:27 · updated 2020-12-21T10:40:36 · closed 2020-12-21T10:40:36
  Parallel text corpora for English - Indonesian.
- #1532 adding hate-speech-and-offensive-language (PR by MisbahKhan789, closed, 1 comment, id 764,772,184)
  created 2020-12-13T02:16:31 · updated 2020-12-17T18:36:54 · closed 2020-12-17T18:10:05
- #1531 adding hate-speech-and-offensive-language (PR by MisbahKhan789, closed, 0 comments, id 764,752,882)
  created 2020-12-13T01:59:07 · updated 2020-12-13T02:17:02 · closed 2020-12-13T02:17:02
- #1530 add indonlu benchmark datasets (PR by yasirabd, closed, 0 comments, id 764,749,507)
  created 2020-12-13T01:56:09 · updated 2020-12-16T11:11:43 · closed 2020-12-16T11:11:43
  The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU. This is a new clean PR from [#1322](https://github.com/huggingface/datasets/pull/1322).
- #1529 Ro sent (PR by iliemihai, closed, 8 comments, id 764,748,410)
  created 2020-12-13T01:55:02 · updated 2021-03-19T10:32:43 · closed 2021-03-19T10:32:42
  Movie reviews dataset for the Romanian language.
- #1528 initial commit for Common Crawl Domain Names (PR by Karthik-Bhaskar, closed, 1 comment, id 764,724,035)
  created 2020-12-13T01:32:49 · updated 2020-12-18T13:51:38 · closed 2020-12-18T10:22:32
- #1527 Add: Conv AI 2 (messed up original PR) (PR by rkc007, closed, 0 comments, id 764,638,504)
  created 2020-12-13T00:21:14 · updated 2020-12-13T19:14:24 · closed 2020-12-13T19:14:24
  @lhoestq Sorry, I messed up the previous two PRs (https://github.com/huggingface/datasets/pull/1462 and https://github.com/huggingface/datasets/pull/1383), so I created a new one. Everything is fixed in this PR. Can you please review it? Thanks in advance.
- #1526 added Hebrew thisworld corpus (PR by imvladikon, closed, 1 comment, id 764,591,243)
  created 2020-12-12T23:42:52 · updated 2020-12-18T10:47:30 · closed 2020-12-18T10:47:30
  Added a corpus from https://thisworld.online/ (https://github.com/thisworld1/thisworld.online).
- #1525 Adding a second branch for Atomic to fix git errors (PR by huu4ontocord, closed, 0 comments, id 764,530,582)
  created 2020-12-12T22:54:50 · updated 2020-12-28T15:51:11 · closed 2020-12-28T15:51:11
  Adding the Atomic common sense dataset. See https://homes.cs.washington.edu/~msap/atomic/
- #1524 ADD: swahili dataset for language modeling (PR by akshayb7, closed, 0 comments, id 764,521,672)
  created 2020-12-12T22:47:18 · updated 2020-12-17T16:37:16 · closed 2020-12-17T16:37:16
  Add a corpus for Swahili language modelling. All tests passed locally. README updated with all available information.
- #1523 Add eHealth Knowledge Discovery dataset (PR by mariagrandury, closed, 2 comments, id 764,359,524)
  created 2020-12-12T20:44:18 · updated 2020-12-17T17:02:41 · closed 2020-12-17T16:48:56
  This Spanish dataset can be used to mine knowledge from unstructured health texts, in particular for entity recognition and relation extraction.
- #1522 Add semeval 2020 task 11 (PR by ZacharySBrown, closed, 2 comments, id 764,341,594)
  created 2020-12-12T20:32:14 · updated 2020-12-15T16:48:52 · closed 2020-12-15T16:48:52
  Adding the propaganda detection task (task 11) from SemEval 2020.
- #1521 Atomic (PR by huu4ontocord, closed, 1 comment, id 764,320,841)
  created 2020-12-12T20:18:08 · updated 2020-12-12T22:56:48 · closed 2020-12-12T22:56:48
  This is the ATOMIC common sense dataset. More info can be found here: README.md still to be created.
- #1520 ru_reviews dataset adding (PR by darshan-gandhi, closed, 3 comments, id 764,140,938, labels: "dataset contribution")
  created 2020-12-12T18:13:06 · updated 2022-10-03T09:38:42 · closed 2022-10-03T09:38:42
  RuReviews: an automatically annotated sentiment analysis dataset for product reviews in Russian.
- #1519 Initial commit for AQuaMuSe (PR by Karthik-Bhaskar, closed, 3 comments, id 764,107,360)
  created 2020-12-12T17:46:16 · updated 2020-12-18T13:50:42 · closed 2020-12-17T17:03:30
  There is an issue in the generation of dummy data. Tests on real data have passed locally.
- #1518 Add twi text (PR by dadelani, closed, 2 comments, id 764,045,722)
  created 2020-12-12T16:52:02 · updated 2020-12-13T18:53:37 · closed 2020-12-13T18:53:37
  Add Twi texts.
- #1517 Kd conv smangrul (PR by pacman100, closed, 2 comments, id 764,045,214)
  created 2020-12-12T16:51:30 · updated 2020-12-16T14:56:14 · closed 2020-12-16T14:56:14
- #1516 adding wrbsc (PR by kldarek, closed, 2 comments, id 764,032,327)
  created 2020-12-12T16:38:40 · updated 2020-12-18T09:41:33 · closed 2020-12-18T09:41:33
- #1515 Add yoruba text (PR by dadelani, closed, 1 comment, id 764,022,753)
  created 2020-12-12T16:29:30 · updated 2020-12-13T18:37:58 · closed 2020-12-13T18:37:58
  Adding Yoruba text C3.
- #1514 how to get all the options of a property in datasets (issue by rabeehk, closed, 2 comments, id 764,017,148, labels: "question")
  created 2020-12-12T16:24:08 · updated 2022-05-25T16:27:29 · closed 2022-05-25T16:27:29
  Hi, could you tell me how I can get all the unique options of a property of a dataset? For instance, in the case of boolq, if the user wants to know which unique labels it has, is there a way to access the unique labels without fetching all the training-data labels and then forming a set? Thanks
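For the question in #1514, two standard answers exist in the datasets API; a minimal sketch (the super_glue/boolq config is an assumption about where boolq lives):

```python
from datasets import load_dataset

ds = load_dataset("super_glue", "boolq", split="train")

# Option 1: ask the dataset for the distinct values of a column.
print(ds.unique("label"))

# Option 2: for ClassLabel columns, the schema already lists the options.
print(ds.features["label"].names)
```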
- #1513 app_reviews_by_users (PR by darshan-gandhi, closed, 1 comment, id 764,016,850)
  created 2020-12-12T16:23:49 · updated 2020-12-14T20:45:24 · closed 2020-12-14T20:45:24
  Software application user reviews.
- #1512 Add Hippocorpus Dataset (PR by manandey, closed, 0 comments, id 764,010,722)
  created 2020-12-12T16:17:53 · updated 2020-12-13T05:09:08 · closed 2020-12-13T05:08:58
- #1511 poleval cyberbullying (PR by czabo, closed, 1 comment, id 764,006,477)
  created 2020-12-12T16:13:44 · updated 2020-12-17T16:20:59 · closed 2020-12-17T16:19:58
- #1510 Add Dataset for (qa_srl) Question-Answer Driven Semantic Role Labeling (PR by bpatidar, closed, 2 comments, id 763,980,369)
  created 2020-12-12T15:48:11 · updated 2020-12-17T16:06:22 · closed 2020-12-17T16:06:22
  Added tags and README file; added code changes.
- #1509 Added dataset Makhzan (PR by arkhalid, closed, 4 comments, id 763,964,857)
  created 2020-12-12T15:34:07 · updated 2020-12-16T15:04:52 · closed 2020-12-16T15:04:52
  Need help with the dummy data.
- #1508 Fix namedsplit docs (PR by mariosasko, closed, 2 comments, id 763,908,724)
  created 2020-12-12T14:43:38 · updated 2021-03-11T02:18:39 · closed 2020-12-15T12:57:48
  Fixes a broken link and `DatasetInfoMixin.split`'s docstring.
- #1507 Add SelQA Dataset (PR by bharatr21, closed, 3 comments, id 763,857,872)
  created 2020-12-12T13:58:07 · updated 2020-12-16T16:49:23 · closed 2020-12-16T16:49:23
  Add the SelQA dataset, a new benchmark for selection-based question answering tasks. Repo: https://github.com/emorynlp/selqa/ · Paper: https://arxiv.org/pdf/1606.08513.pdf
- #1506 Add nq_open question answering dataset (PR by Nilanshrajput, closed, 6 comments, id 763,846,074)
  created 2020-12-12T13:46:48 · updated 2020-12-17T15:34:50 · closed 2020-12-17T15:34:50
  Added the nq_open open-domain question answering dataset. The NQ-Open task is currently being used to evaluate submissions to the EfficientQA competition, which is part of the NeurIPS 2020 competition track.
- #1505 add ilist dataset (PR by thevasudevgupta, closed, 0 comments, id 763,750,773)
  created 2020-12-12T12:44:12 · updated 2020-12-17T15:43:07 · closed 2020-12-17T15:43:07
  This PR will add the Indo-Aryan Language Identification Shared Task dataset.
- #1504 Add SentiWS dataset for pos-tagging and sentiment-scoring (German) (PR by harshalmittal4, closed, 2 comments, id 763,697,231)
  created 2020-12-12T12:17:53 · updated 2020-12-15T18:32:38 · closed 2020-12-15T18:32:38
- #1503 Adding COVID QA dataset in Chinese and English from UC San Diego (PR by vrindaprabhu, closed, 1 comment, id 763,667,489)
  created 2020-12-12T12:02:48 · updated 2021-02-16T05:29:18 · closed 2020-12-17T15:29:26
- #1502 Add Senti_Lex Dataset (PR by KMFODA, closed, 5 comments, id 763,658,208)
  created 2020-12-12T11:55:29 · updated 2020-12-28T14:01:12 · closed 2020-12-28T14:01:12
  TODO: fix the feature format issue, create the dataset_info.json file, run pytest, make style.
- #1501 Adds XED dataset (PR by harshalmittal4, closed, 1 comment, id 763,517,647)
  created 2020-12-12T09:47:00 · updated 2020-12-14T21:20:59 · closed 2020-12-14T21:20:59
- #1500 adding polsum (PR by kldarek, closed, 1 comment, id 763,479,305)
  created 2020-12-12T09:05:29 · updated 2020-12-18T09:43:43 · closed 2020-12-18T09:43:43
- #1499 update the dataset id_newspapers_2018 (PR by cahya-wirawan, closed, 0 comments, id 763,464,693)
  created 2020-12-12T08:47:12 · updated 2020-12-14T15:28:07 · closed 2020-12-14T15:28:07
  Hi, I need to update the link to the dataset. The link in the previous PR was to a small test dataset. Thanks
- #1498 add stereoset (PR by cstorm125, closed, 0 comments, id 763,303,606)
  created 2020-12-12T05:04:37 · updated 2020-12-18T10:03:53 · closed 2020-12-18T10:03:53
  StereoSet is a dataset that measures stereotype bias in language models. It consists of 17,000 sentences that measure model preferences across gender, race, religion, and profession.
- #1497 adding fake-news-english-5 (PR by MisbahKhan789, closed, 1 comment, id 763,180,824)
  created 2020-12-12T02:13:11 · updated 2020-12-17T20:07:17 · closed 2020-12-17T20:07:17
- #1496 Add Multi-Dimensional Gender Bias classification data (PR by yjernite, closed, 0 comments, id 763,091,663)
  created 2020-12-12T00:17:37 · updated 2020-12-14T21:14:55 · closed 2020-12-14T21:14:55
  https://parl.ai/projects/md_gender/ Mostly has the ABOUT dimension, since the others are in most cases inferred from other datasets. I tried to keep the dummy data small, but one of the configs has 140 splits (> 56KB of data).
- #1495 Opus DGT added (PR by rkc007, closed, 1 comment, id 763,025,562)
  created 2020-12-11T23:05:09 · updated 2020-12-17T14:38:41 · closed 2020-12-17T14:38:41
  Dataset: http://opus.nlpl.eu/DGT.php
- #1494 Added Opus Wikipedia (PR by rkc007, closed, 1 comment, id 762,992,601)
  created 2020-12-11T22:28:03 · updated 2020-12-17T14:38:28 · closed 2020-12-17T14:38:28
  Dataset: http://opus.nlpl.eu/Wikipedia.php
- #1493 Added RONEC dataset. (PR by iliemihai, closed, 4 comments, id 762,979,415)
  created 2020-12-11T22:14:50 · updated 2020-12-21T14:48:56 · closed 2020-12-21T14:48:56
- #1492 OPUS UBUNTU dataset (PR by rkc007, closed, 1 comment, id 762,965,239)
  created 2020-12-11T22:01:37 · updated 2020-12-17T14:38:16 · closed 2020-12-17T14:38:15
  Dataset: http://opus.nlpl.eu/Ubuntu.php
- #1491 added opus GNOME data (PR by rkc007, closed, 1 comment, id 762,920,920)
  created 2020-12-11T21:21:51 · updated 2020-12-17T14:20:23 · closed 2020-12-17T14:20:23
  Dataset: http://opus.nlpl.eu/GNOME.php
- #1490 ADD: opus_rf dataset for translation (PR by akshayb7, closed, 1 comment, id 762,915,346)
  created 2020-12-11T21:16:43 · updated 2020-12-13T19:12:24 · closed 2020-12-13T19:12:24
  Passed all local tests. Hopefully it passes all CircleCI tests too. Tried to keep the commit history clean.
- #1489 Fake news english 4 (PR by MisbahKhan789, closed, 3 comments, id 762,908,763)
  created 2020-12-11T21:10:35 · updated 2020-12-12T19:39:52 · closed 2020-12-12T19:38:09
- #1488 Adding NELL (PR by huu4ontocord, closed, 2 comments, id 762,860,679)
  created 2020-12-11T20:25:25 · updated 2021-01-07T08:37:07 · closed 2020-12-21T14:45:00
  NELL is a knowledge base and knowledge graph, along with the sentences used to create the KB. See http://rtw.ml.cmu.edu/rtw/ for more details.
- #1487 added conv_ai_3 dataset (PR by rkc007, closed, 4 comments, id 762,794,921)
  created 2020-12-11T19:26:26 · updated 2020-12-28T09:38:40 · closed 2020-12-28T09:38:39
  Dataset: https://github.com/aliannejadi/ClariQ/
- #1486 hate speech 18 dataset (PR by czabo, closed, 2 comments, id 762,790,102)
  created 2020-12-11T19:22:14 · updated 2020-12-14T19:43:18 · closed 2020-12-14T19:43:18
  This is again a PR, replacing #1339 because something went wrong there.
- #1485 Re-added wiki_movies dataset due to previous PR having changes from m… (PR by aclifton314, closed, 0 comments, id 762,774,822)
  created 2020-12-11T19:07:48 · updated 2020-12-14T14:08:22 · closed 2020-12-14T14:08:22
  …any other unassociated files.
- #1484 Add peer-read dataset (PR by vinaykudari, closed, 2 comments, id 762,747,096)
  created 2020-12-11T18:43:44 · updated 2020-12-21T09:40:50 · closed 2020-12-21T09:40:50
- #1483 Added Times of India News Headlines Dataset (PR by tanmoyio, closed, 3 comments, id 762,712,337)
  created 2020-12-11T18:12:38 · updated 2020-12-14T18:08:08 · closed 2020-12-14T18:08:08
  Dataset name: Times of India News Headlines. Link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DPQMQH
- #1482 Adding medical database chinese and english (PR by vrindaprabhu, closed, 5 comments, id 762,686,820)
  created 2020-12-11T17:50:39 · updated 2021-02-16T05:28:36 · closed 2020-12-15T18:23:53
  Error in creating the dummy dataset.
- #1481 Fix ADD_NEW_DATASET to avoid rebasing once pushed (PR by albertvillanova, closed, 0 comments, id 762,579,658)
  created 2020-12-11T16:27:49 · updated 2021-01-07T10:10:20 · closed 2021-01-07T10:10:20
- #1480 Adding the Mac-Morpho dataset (PR by jonatasgrosman, closed, 0 comments, id 762,530,805)
  created 2020-12-11T16:01:38 · updated 2020-12-21T10:03:37 · closed 2020-12-21T10:03:37
  Adding the Mac-Morpho dataset, a Portuguese-language dataset for part-of-speech tagging tasks.
- #1479 Add narrativeQA (PR by ghomasHudson, closed, 2 comments, id 762,320,736)
  created 2020-12-11T12:58:31 · updated 2020-12-11T13:33:23 · closed 2020-12-11T13:33:23
  Redo of #1368, #309 and #499. In redoing the dummy data a few times, I ended up adding a load of files to git. Hopefully this should work.
- #1478 Inconsistent argument names. (issue by Fraser-Greenlee, closed, 2 comments, id 762,293,076)
  created 2020-12-11T12:19:38 · updated 2020-12-19T15:03:39 · closed 2020-12-19T15:03:39
  I just find it a wee bit odd that in the transformers library `predictions` are those made by the model (https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_utils.py#L51-L61), while in many datasets metrics they are the ground-truth labels (https://github.com/huggingface/datasets/blob/c3f53792a744ede18d748a1133b6597fdd2d8d18/metrics/accuracy/accuracy.py#L31-L40). Do you think predictions & references should be swapped? I'd be willing to do some refactoring here if you agree.
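For context on #1478, the calling convention under discussion; a minimal sketch using the accuracy metric (`load_metric` was the datasets API at the time; metrics have since moved to the separate evaluate library):

```python
from datasets import load_metric

metric = load_metric("accuracy")

# `predictions` are the model outputs, `references` the ground-truth labels.
result = metric.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(result)  # {'accuracy': 0.75}
```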
- #1477 Jigsaw toxicity pred (PR by taihim, closed, 0 comments, id 762,288,811)
  created 2020-12-11T12:13:20 · updated 2020-12-14T13:19:35 · closed 2020-12-14T13:19:35
  Managed to mess up my original pull request; opening a fresh one incorporating the changes suggested by @lhoestq.
- #1476 Add Spanish Billion Words Corpus (PR by mariagrandury, closed, 0 comments, id 762,256,048)
  created 2020-12-11T11:24:58 · updated 2020-12-17T17:04:08 · closed 2020-12-14T13:14:31
  Add an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources on the web.
- #1475 Fix XML iterparse in opus_dogc dataset (PR by albertvillanova, closed, 0 comments, id 762,187,000)
  created 2020-12-11T10:08:18 · updated 2020-12-17T11:28:47 · closed 2020-12-17T11:28:46
  I forgot to add `elem.clear()` to clear the element from memory.
- #1474 Create JSON dummy data without loading the whole dataset in memory (PR by albertvillanova, open, 0 comments, id 762,083,706)
  created 2020-12-11T08:44:23 · updated 2022-07-06T15:19:47
  See #1442. The statement `json.load()` loads **all the file content in memory**. To avoid this, the file content should be parsed **iteratively**, e.g. by using the `ijson` library. I have refactored the code into a function `_create_json_dummy_data` and added some tests.
- #1473 add srwac (PR by IvanZidov, closed, 2 comments, id 762,055,694)
  created 2020-12-11T08:20:29 · updated 2020-12-17T11:40:59 · closed 2020-12-17T11:40:59
- #1472 add Srwac (PR by IvanZidov, closed, 0 comments, id 762,037,907)
  created 2020-12-11T08:04:57 · updated 2020-12-11T08:08:12 · closed 2020-12-11T08:05:54
- #1471 Adding the HAREM dataset (PR by jonatasgrosman, closed, 5 comments, id 761,842,512)
  created 2020-12-11T03:21:10 · updated 2020-12-22T10:37:33 · closed 2020-12-22T10:37:33
  Adding the HAREM dataset, a Portuguese-language dataset for NER tasks.
- #1470 Add wiki lingua dataset (PR by katnoria, closed, 7 comments, id 761,791,065)
  created 2020-12-11T02:04:18 · updated 2020-12-16T15:27:13 · closed 2020-12-16T15:27:13
  Hello @lhoestq, I am opening a fresh pull request, as advised in my original PR https://github.com/huggingface/datasets/pull/1308. Thanks
- #1469 ADD: Wino_bias dataset (PR by akshayb7, closed, 1 comment, id 761,611,315)
  created 2020-12-10T20:59:45 · updated 2020-12-13T19:13:57 · closed 2020-12-13T19:13:57
  Updated PR to counter the messed-up history of the previous one (https://github.com/huggingface/datasets/pull/1235) due to a rebase. Removed manual downloading of the dataset.
- #1468 add Indonesian newspapers (id_newspapers_2018) (PR by cahya-wirawan, closed, 6 comments, id 761,607,531)
  created 2020-12-10T20:54:12 · updated 2020-12-12T08:50:51 · closed 2020-12-11T17:04:41
  The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers. The size of the uncompressed 500K JSON files (newspapers-json.tgz) is around 2.2GB.
- #1467 adding snow_simplified_japanese_corpus (PR by forest1988, closed, 2 comments, id 761,557,290)
  created 2020-12-10T19:45:03 · updated 2020-12-17T13:22:48 · closed 2020-12-17T11:25:34
  Adding the simplified Japanese corpora "SNOW T15" and "SNOW T23". They contain original Japanese, simplified Japanese, and original English (the original text comes from an en-ja translation corpus), so they can be used not only for Japanese simplification but also for en-ja translation. http://www.jnlp.org/SNOW/T15 · http://www.jnlp.org/SNOW/T23
- #1466 Add Turkish News Category Dataset (270K). Updates were made for review… (PR by basakbuluz, closed, 4 comments, id 761,554,357)
  created 2020-12-10T19:41:12 · updated 2020-12-11T14:27:15 · closed 2020-12-11T14:27:15
  This PR adds the Turkish News Categories Dataset (270K), a text classification dataset, by me and @yavuzKomecoglu: 273601 news items in 17 categories, compiled from print media and news websites between 2010 and 2017 by the [Interpress](https://www.interpress.com/) media monitoring company. Note: resubmitted as a clean version of the previous pull request (#1419). @SBrandeis @lhoestq
- #1465 Add clean menyo20k data (PR by yvonnegitau, closed, 1 comment, id 761,538,931)
  created 2020-12-10T19:22:00 · updated 2020-12-14T10:30:21 · closed 2020-12-14T10:30:21
  New clean PR for menyo20k_mt.
- #1464 Reddit jokes (PR by tanmoyio, closed, 2 comments, id 761,533,566)
  created 2020-12-10T19:15:19 · updated 2020-12-10T20:14:00 · closed 2020-12-10T20:14:00
  196k Reddit jokes dataset. Dataset link: https://raw.githubusercontent.com/taivop/joke-dataset/master/reddit_jokes.json
- #1463 Adding enriched_web_nlg features + handling xml bugs (PR by TevenLeScao, closed, 0 comments, id 761,510,908)
  created 2020-12-10T18:48:19 · updated 2020-12-17T10:44:35 · closed 2020-12-17T10:44:34
  This PR adds features of the enriched_web_nlg dataset that were not yet present (most notably sorted RDF triplet sets) and deals with some XML issues that led to returning no data in cases where surgery could be performed to salvage it.
- #1462 Added conv ai 2 (Again) (PR by rkc007, closed, 6 comments, id 761,489,274)
  created 2020-12-10T18:21:55 · updated 2020-12-13T00:21:32 · closed 2020-12-13T00:21:31
  The original PR: https://github.com/huggingface/datasets/pull/1383. Reason for creating it again: a master rebasing issue; after rebasing the changes, all the previous commits got added to the branch.