| column | dtype | min | max |
| --- | --- | --- | --- |
| id | int64 | 599M | 3.29B |
| url | string | length 58 | length 61 |
| html_url | string | length 46 | length 51 |
| number | int64 | 1 | 7.72k |
| state | string | 2 classes | |
| title | string | length 1 | length 290 |
| comments | int64 | 0 | 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login | string | length 3 | length 26 |
| labels | list | length 0 | length 4 |
| body | string | length 0 | length 228k |
| is_pull_request | bool | 2 classes | |
#1663 [PR, closed] update saving and loading methods for faiss index so to accept path l…
id 775,914,320 · tslott · comments: 1 · labels: [] · created 2020-12-29T14:15:37 · updated 2021-01-18T09:27:23 · closed 2021-01-18T09:27:23 · https://github.com/huggingface/datasets/pull/1663 · API: https://api.github.com/repos/huggingface/datasets/issues/1663
Update saving and loading methods for the faiss index so they accept path-like objects from [pathlib](https://docs.python.org/3/library/pathlib.html). The current code only supports using a string type to save and load a faiss index. This change makes it possible to use a string OR a `Path` from pathlib. The code becomes more intuitive this way, I think.
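For reference, a minimal sketch of the str-or-`Path` coercion pattern this PR describes; `save_faiss_index` is an illustrative stand-in here, not the exact `datasets` method:

```python
import os
from pathlib import Path

def save_faiss_index(index, file: "str | os.PathLike") -> None:
    # os.fspath() accepts both str and pathlib.Path and returns a plain
    # string, which is what faiss's own readers/writers expect.
    file = os.fspath(file)
    print(f"would write index to {file}")  # stand-in for faiss.write_index

save_faiss_index(None, Path("my_index.faiss"))
save_faiss_index(None, "my_index.faiss")
```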
#1662 [issue, closed] Arrow file is too large when saving vector data
id 775,890,154 · weiwangorg · comments: 4 · labels: [] · created 2020-12-29T13:23:12 · updated 2021-01-21T14:12:39 · closed 2021-01-21T14:12:39 · https://github.com/huggingface/datasets/issues/1662 · API: https://api.github.com/repos/huggingface/datasets/issues/1662
I computed the sentence embedding of each sentence of bookcorpus data using bert base and saved them to disk. I used 20M sentences and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the arrow file?
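One hedged sketch of a size reduction, assuming (this is not stated in the thread) that the embeddings were stored at the default float64 precision; casting the column to float32 roughly halves the Arrow file size:

```python
import numpy as np
from datasets import Dataset, Features, Sequence, Value

# Toy stand-in for a dataset with an embedding column (float64 by default).
ds = Dataset.from_dict({"embedding": [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]})

# Re-encode the column as float32 via map(); fp16 (through numpy) would be
# a further precision-for-space trade-off.
ds = ds.map(
    lambda ex: {"embedding": np.asarray(ex["embedding"], dtype=np.float32)},
    features=Features({"embedding": Sequence(Value("float32"))}),
)
print(ds.features)
```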
#1661 [PR, closed] updated dataset cards
id 775,840,801 · Nilanshrajput · comments: 0 · labels: [] · created 2020-12-29T11:20:40 · updated 2020-12-30T17:15:16 · closed 2020-12-30T17:15:16 · https://github.com/huggingface/datasets/pull/1661 · API: https://api.github.com/repos/huggingface/datasets/issues/1661
added dataset instances in the card.
#1660 [PR, closed] add dataset info
id 775,831,423 · harshalmittal4 · comments: 0 · labels: [] · created 2020-12-29T10:58:19 · updated 2020-12-30T17:04:30 · closed 2020-12-30T17:04:30 · https://github.com/huggingface/datasets/pull/1660 · API: https://api.github.com/repos/huggingface/datasets/issues/1660
#1659 [PR, closed] update dataset info
id 775,831,288 · harshalmittal4 · comments: 0 · labels: [] · created 2020-12-29T10:58:01 · updated 2020-12-30T16:55:07 · closed 2020-12-30T16:55:07 · https://github.com/huggingface/datasets/pull/1659 · API: https://api.github.com/repos/huggingface/datasets/issues/1659
#1658 [PR, closed] brwac dataset: add instances and data splits info
id 775,651,085 · jonatasgrosman · comments: 0 · labels: [] · created 2020-12-29T01:24:45 · updated 2020-12-30T16:54:26 · closed 2020-12-30T16:54:26 · https://github.com/huggingface/datasets/pull/1658 · API: https://api.github.com/repos/huggingface/datasets/issues/1658
#1657 [PR, closed] mac_morpho dataset: add data splits info
id 775,647,000 · jonatasgrosman · comments: 0 · labels: [] · created 2020-12-29T01:05:21 · updated 2020-12-30T16:51:24 · closed 2020-12-30T16:51:24 · https://github.com/huggingface/datasets/pull/1657 · API: https://api.github.com/repos/huggingface/datasets/issues/1657
#1656 [PR, closed] assin 2 dataset: add instances and data splits info
id 775,645,356 · jonatasgrosman · comments: 0 · labels: [] · created 2020-12-29T00:57:51 · updated 2020-12-30T16:50:56 · closed 2020-12-30T16:50:56 · https://github.com/huggingface/datasets/pull/1656 · API: https://api.github.com/repos/huggingface/datasets/issues/1656
#1655 [PR, closed] assin dataset: add instances and data splits info
id 775,643,418 · jonatasgrosman · comments: 0 · labels: [] · created 2020-12-29T00:47:56 · updated 2020-12-30T16:50:23 · closed 2020-12-30T16:50:23 · https://github.com/huggingface/datasets/pull/1655 · API: https://api.github.com/repos/huggingface/datasets/issues/1655
#1654 [PR, closed] lener_br dataset: add instances and data splits info
id 775,640,729 · jonatasgrosman · comments: 0 · labels: [] · created 2020-12-29T00:35:12 · updated 2020-12-30T16:49:32 · closed 2020-12-30T16:49:32 · https://github.com/huggingface/datasets/pull/1654 · API: https://api.github.com/repos/huggingface/datasets/issues/1654
#1653 [PR, closed] harem dataset: add data splits info
id 775,632,945 · jonatasgrosman · comments: 0 · labels: [] · created 2020-12-28T23:58:20 · updated 2020-12-30T16:49:03 · closed 2020-12-30T16:49:03 · https://github.com/huggingface/datasets/pull/1653 · API: https://api.github.com/repos/huggingface/datasets/issues/1653
#1652 [PR, closed] Update dataset cards from previous sprint
id 775,571,813 · j-chim · comments: 0 · labels: [] · created 2020-12-28T20:20:47 · updated 2020-12-30T16:48:04 · closed 2020-12-30T16:48:04 · https://github.com/huggingface/datasets/pull/1652 · API: https://api.github.com/repos/huggingface/datasets/issues/1652
This PR updates the dataset cards/readmes for the 4 approved PRs I submitted in the previous sprint.
#1651 [PR, closed] Add twi wordsim353
id 775,554,319 · dadelani · comments: 3 · labels: [] · created 2020-12-28T19:31:55 · updated 2021-01-04T09:39:39 · closed 2021-01-04T09:39:38 · https://github.com/huggingface/datasets/pull/1651 · API: https://api.github.com/repos/huggingface/datasets/issues/1651
Added the citation information to the README file
#1650 [PR, closed] Update README.md
id 775,545,912 · MisbahKhan789 · comments: 0 · labels: [] · created 2020-12-28T19:09:05 · updated 2020-12-29T10:43:14 · closed 2020-12-29T10:43:14 · https://github.com/huggingface/datasets/pull/1650 · API: https://api.github.com/repos/huggingface/datasets/issues/1650
added dataset summary
#1649 [PR, closed] Update README.md
id 775,544,487 · MisbahKhan789 · comments: 0 · labels: [] · created 2020-12-28T19:05:00 · updated 2020-12-29T10:50:58 · closed 2020-12-29T10:43:03 · https://github.com/huggingface/datasets/pull/1649 · API: https://api.github.com/repos/huggingface/datasets/issues/1649
Added information in the dataset card
#1648 [PR, closed] Update README.md
id 775,542,360 · MisbahKhan789 · comments: 0 · labels: [] · created 2020-12-28T18:59:06 · updated 2020-12-29T10:39:14 · closed 2020-12-29T10:39:14 · https://github.com/huggingface/datasets/pull/1648 · API: https://api.github.com/repos/huggingface/datasets/issues/1648
added dataset summary
#1647 [issue, closed] NarrativeQA fails to load with `load_dataset`
id 775,525,799 · eric-mitchell · comments: 3 · labels: [] · created 2020-12-28T18:16:09 · updated 2021-01-05T12:05:08 · closed 2021-01-03T17:58:05 · https://github.com/huggingface/datasets/issues/1647 · API: https://api.github.com/repos/huggingface/datasets/issues/1647
When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with: `FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/narrativeqa/narrativeqa.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/narrativeqa/narrativeqa.py`. Workaround: manually copy the `narrativeqa.py` builder into my local directory with `curl https://raw.githubusercontent.com/huggingface/datasets/master/datasets/narrativeqa/narrativeqa.py -o narrativeqa.py` and load the dataset as `load_dataset('narrativeqa.py')`; then everything works fine. I'm on datasets v1.1.3 using Python 3.6.10.
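A sketch of the other workaround that avoids the local copy, assuming (as other reports in this dump do) that the builder exists on the master branch but not in the installed 1.1.3 release; `script_version` is the 1.x-era `load_dataset` argument:

```python
from datasets import load_dataset

# Assumption: narrativeqa was added after the 1.1.3 release, so pin the
# loading script to the master branch (or simply upgrade `datasets`).
dataset = load_dataset("narrativeqa", script_version="master")
```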
#1646 [PR, closed] Add missing homepage in some dataset cards
id 775,499,344 · lhoestq · comments: 0 · labels: [] · created 2020-12-28T17:09:48 · updated 2021-01-04T14:08:57 · closed 2021-01-04T14:08:56 · https://github.com/huggingface/datasets/pull/1646 · API: https://api.github.com/repos/huggingface/datasets/issues/1646
In some dataset cards the homepage field in the `Dataset Description` section was missing/empty
#1645 [PR, closed] Rename "part-of-speech-tagging" tag in some dataset cards
id 775,473,106 · lhoestq · comments: 0 · labels: [] · created 2020-12-28T16:09:09 · updated 2021-01-07T10:08:14 · closed 2021-01-07T10:08:13 · https://github.com/huggingface/datasets/pull/1645 · API: https://api.github.com/repos/huggingface/datasets/issues/1645
`part-of-speech-tagging` was not part of the tagging taxonomy under `structure-prediction`
#1644 [issue, closed] HoVeR dataset fails to load
id 775,375,880 · urikz · comments: 1 · labels: [] · created 2020-12-28T12:27:07 · updated 2022-10-05T12:40:34 · closed 2022-10-05T12:40:34 · https://github.com/huggingface/datasets/issues/1644 · API: https://api.github.com/repos/huggingface/datasets/issues/1644
Hi! I'm getting an error when trying to load the **HoVeR** dataset. Another one (**SQuAD**) does work for me. I'm using the latest (1.1.3) version of the library. Steps to reproduce the error:

```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("hover")
Traceback (most recent call last):
  File "/Users/urikz/anaconda/envs/mentionmemory/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/Users/urikz/anaconda/envs/mentionmemory/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
    use_etag=download_config.use_etag,
  File "/Users/urikz/anaconda/envs/mentionmemory/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
    raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/hover/hover.py

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/urikz/anaconda/envs/mentionmemory/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/Users/urikz/anaconda/envs/mentionmemory/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
    use_etag=download_config.use_etag,
  File "/Users/urikz/anaconda/envs/mentionmemory/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
    raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/hover/hover.py

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/urikz/anaconda/envs/mentionmemory/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
    path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
  File "/Users/urikz/anaconda/envs/mentionmemory/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module
    combined_path, github_file_path, file_path
FileNotFoundError: Couldn't find file locally at hover/hover.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/hover/hover.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/hover/hover.py
```
#1643 [issue, closed] Dataset social_bias_frames 404
id 775,280,046 · atemate · comments: 1 · labels: [] · created 2020-12-28T08:35:34 · updated 2020-12-28T08:38:07 · closed 2020-12-28T08:38:07 · https://github.com/huggingface/datasets/issues/1643 · API: https://api.github.com/repos/huggingface/datasets/issues/1643
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("social_bias_frames")
...
Downloading and preparing dataset social_bias_frames/default ...
~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
    484             )
    485         elif response is not None and response.status_code == 404:
--> 486             raise FileNotFoundError("Couldn't find file at {}".format(url))
    487         raise ConnectionError("Couldn't reach {}".format(url))
    488
FileNotFoundError: Couldn't find file at https://homes.cs.washington.edu/~msap/social-bias-frames/SocialBiasFrames_v2.tgz
```

[Here](https://homes.cs.washington.edu/~msap/social-bias-frames/) we find the button `Download data` with the correct URL for the data: https://homes.cs.washington.edu/~msap/social-bias-frames/SBIC.v2.tgz
#1642 [PR, closed] Ollie dataset
id 775,159,568 · huu4ontocord · comments: 0 · labels: [] · created 2020-12-28T02:43:37 · updated 2021-01-04T13:35:25 · closed 2021-01-04T13:35:24 · https://github.com/huggingface/datasets/pull/1642 · API: https://api.github.com/repos/huggingface/datasets/issues/1642
This is the dataset used to train the Ollie open information extraction algorithm. It has over 21M sentences. See http://knowitall.github.io/ollie/ for more details.
#1641 [issue, closed] muchocine dataset cannot be downloaded
id 775,110,872 · mrm8488 · comments: 5 · labels: ["wontfix", "dataset bug"] · created 2020-12-27T21:26:28 · updated 2021-08-03T05:07:29 · closed 2021-08-03T05:07:29 · https://github.com/huggingface/datasets/issues/1641 · API: https://api.github.com/repos/huggingface/datasets/issues/1641
```python
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
    267     try:
--> 268         local_path = cached_path(file_path, download_config=download_config)
    269     except FileNotFoundError:

7 frames
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/muchocine/muchocine.py

During handling of the above exception, another exception occurred:

FileNotFoundError                         Traceback (most recent call last)
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/muchocine/muchocine.py

During handling of the above exception, another exception occurred:

FileNotFoundError                         Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
    281             raise FileNotFoundError(
    282                 "Couldn't find file locally at {}, or remotely at {} or {}".format(
--> 283                     combined_path, github_file_path, file_path
    284                 )
    285             )

FileNotFoundError: Couldn't find file locally at muchocine/muchocine.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/muchocine/muchocine.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/muchocine/muchocine.py
```
#1640 [PR, closed] Fix "'BertTokenizerFast' object has no attribute 'max_len'"
id 774,921,836 · mflis · comments: 0 · labels: [] · created 2020-12-26T19:25:41 · updated 2020-12-28T17:26:35 · closed 2020-12-28T17:26:35 · https://github.com/huggingface/datasets/pull/1640 · API: https://api.github.com/repos/huggingface/datasets/issues/1640
Tensorflow 2.3.0 gives:
> FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.

Tensorflow 2.4.0 gives:
> AttributeError: 'BertTokenizerFast' object has no attribute 'max_len'
#1639 [issue, closed] bug with sst2 in glue
id 774,903,472 · ghost · comments: 3 · labels: [] · created 2020-12-26T16:57:23 · updated 2022-10-05T12:40:16 · closed 2022-10-05T12:40:16 · https://github.com/huggingface/datasets/issues/1639 · API: https://api.github.com/repos/huggingface/datasets/issues/1639
Hi, I am getting very low accuracy on SST-2. I investigated this and observed that for this dataset the sentences are tokenized, while the other datasets in GLUE are correct; please see below. Are there any alternatives I could use to get untokenized sentences? I am unfortunately under time pressure to report some results on this dataset. Thank you for your help. @lhoestq

```
>>> a = datasets.load_dataset('glue', 'sst2', split="validation", script_version="master")
Reusing dataset glue (/julia/datasets/glue/sst2/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4)
>>> a[:10]
{'idx': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 'label': [1, 0, 1, 1, 0, 1, 0, 0, 1, 0], 'sentence': ["it 's a charming and often affecting journey . ", 'unflinchingly bleak and desperate ', 'allows us to hope that nolan is poised to embark a major career as a commercial yet inventive filmmaker . ', "the acting , costumes , music , cinematography and sound are all astounding given the production 's austere locales . ", "it 's slow -- very , very slow . ", 'although laced with humor and a few fanciful touches , the film is a refreshingly serious look at young women . ', 'a sometimes tedious film . ', "or doing last year 's taxes with your ex-wife . ", "you do n't have to know about music to appreciate the film 's easygoing blend of comedy and romance . ", "in exactly 89 minutes , most of which passed as slowly as if i 'd been sitting naked on an igloo , formula 51 sank from quirky to jerky to utter turkey . "]}
```
#1638 [PR, closed] Add id_puisi dataset
id 774,869,184 · ilhamfp · comments: 0 · labels: [] · created 2020-12-26T12:41:55 · updated 2020-12-30T16:34:17 · closed 2020-12-30T16:34:17 · https://github.com/huggingface/datasets/pull/1638 · API: https://api.github.com/repos/huggingface/datasets/issues/1638
Puisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi with their titles and authors. :)
#1637 [PR, closed] Added `pn_summary` dataset
id 774,710,014 · m3hrdadfi · comments: 2 · labels: [] · created 2020-12-25T11:01:24 · updated 2021-01-04T13:43:19 · closed 2021-01-04T13:43:19 · https://github.com/huggingface/datasets/pull/1637 · API: https://api.github.com/repos/huggingface/datasets/issues/1637
#1635. You did a great job making the procedure for adding a dataset fluent. I took the chance to add the dataset on my own. Thank you for your awesome work, and I hope this dataset makes researchers happy, specifically those interested in the Persian language (Farsi)!
#1636 [issue, closed] winogrande cannot be downloaded
id 774,574,378 · ghost · comments: 2 · labels: [] · created 2020-12-24T22:28:22 · updated 2022-10-05T12:35:44 · closed 2022-10-05T12:35:44 · https://github.com/huggingface/datasets/issues/1636 · API: https://api.github.com/repos/huggingface/datasets/issues/1636
Hi, I am getting this error when trying to run the code on the cloud. Thank you for any suggestion and help on this @lhoestq

```
  File "./finetune_trainer.py", line 318, in <module>
    main()
  File "./finetune_trainer.py", line 148, in main
    for task in data_args.tasks]
  File "./finetune_trainer.py", line 148, in <listcomp>
    for task in data_args.tasks]
  File "/workdir/seq2seq/data/tasks.py", line 65, in get_dataset
    dataset = self.load_dataset(split=split)
  File "/workdir/seq2seq/data/tasks.py", line 466, in load_dataset
    return datasets.load_dataset('winogrande', 'winogrande_l', split=split)
  File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset
    path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
  File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 267, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 308, in cached_path
    use_etag=download_config.use_etag,
  File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 487, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/winogrande/winogrande.py
yo/0 I1224 14:17:46.419031 31226 main shadow.py:122 > Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 260, in <module>
    main()
  File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 256, in main
    cmd=cmd)
```
#1635 [issue, closed] Persian Abstractive/Extractive Text Summarization
id 774,524,492 · m3hrdadfi · comments: 0 · labels: ["dataset request"] · created 2020-12-24T17:47:12 · updated 2021-01-04T15:11:04 · closed 2021-01-04T15:11:04 · https://github.com/huggingface/datasets/issues/1635 · API: https://api.github.com/repos/huggingface/datasets/issues/1635
Assembling datasets tailored to different tasks and languages is a precious target. It would be great to have this dataset included.

## Adding a Dataset
- **Name:** *pn-summary*
- **Description:** *A well-structured summarization dataset for the Persian language, consisting of 93,207 records. It is prepared for abstractive/extractive tasks (like cnn_dailymail for English). It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification.*
- **Paper:** *https://arxiv.org/abs/2012.11204*
- **Data:** *https://github.com/hooshvare/pn-summary/#download*
- **Motivation:** *It is the first Persian abstractive/extractive text summarization dataset (like cnn_dailymail for English)!*

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
#1634 [issue, closed] Inspecting datasets per category
id 774,487,934 · ghost · comments: 4 · labels: [] · created 2020-12-24T15:26:34 · updated 2022-10-04T14:57:33 · closed 2022-10-04T14:57:33 · https://github.com/huggingface/datasets/issues/1634 · API: https://api.github.com/repos/huggingface/datasets/issues/1634
Hi, is there a way I could get all NLI datasets / all QA datasets, to get some understanding of the available datasets per category? It is hard for me to inspect the datasets one by one on the webpage. Thanks for the suggestions @lhoestq
#1633 [issue, closed] social_i_qa wrong format of labels
id 774,422,603 · ghost · comments: 2 · labels: [] · created 2020-12-24T13:11:54 · updated 2020-12-30T17:18:49 · closed 2020-12-30T17:18:49 · https://github.com/huggingface/datasets/issues/1633 · API: https://api.github.com/repos/huggingface/datasets/issues/1633
Hi, there is an extra "\n" in the labels of the social_i_qa dataset. No big deal, but I was wondering if you could remove it to make it consistent: the label is `'1\n'`, not `'1'`. Thanks.

```
>>> import datasets
>>> from datasets import load_dataset
>>> dataset = load_dataset(
...     'social_i_qa')
cahce dir /julia/cache/datasets
Downloading: 4.72kB [00:00, 3.52MB/s]
cahce dir /julia/cache/datasets
Downloading: 2.19kB [00:00, 1.81MB/s]
Using custom data configuration default
Reusing dataset social_i_qa (/julia/datasets/social_i_qa/default/0.1.0/4a4190cc2d2482d43416c2167c0c5dccdd769d4482e84893614bd069e5c3ba06)
>>> dataset['train'][0]
{'answerA': 'like attending', 'answerB': 'like staying home', 'answerC': 'a good friend to have', 'context': 'Cameron decided to have a barbecue and gathered her friends together.', 'label': '1\n', 'question': 'How would Others feel as a result?'}
```
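Until the loading script itself is fixed, a client-side cleanup sketch (my assumption, not from the thread) is a one-line `map` that strips the labels:

```python
from datasets import load_dataset

dataset = load_dataset("social_i_qa")

# Strip the stray "\n" from every label so '1\n' becomes '1'.
dataset = dataset.map(lambda ex: {"label": ex["label"].strip()})
```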
#1632 [issue, closed] SICK dataset
id 774,388,625 · rabeehk · comments: 0 · labels: ["dataset request"] · created 2020-12-24T12:40:14 · updated 2021-02-05T15:49:25 · closed 2021-02-05T15:49:25 · https://github.com/huggingface/datasets/issues/1632 · API: https://api.github.com/repos/huggingface/datasets/issues/1632
Hi, it would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you.

## Adding a Dataset
- **Name:** SICK
- **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of lexical, syntactic, and semantic phenomena.
- **Paper:** https://www.aclweb.org/anthology/L14-1314/
- **Data:** http://marcobaroni.org/composes/sick.html
- **Motivation:** This dataset is well known in the NLP community and used for recognizing entailment between sentences.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
#1631 [PR, closed] Update README.md
id 774,349,222 · savasy · comments: 0 · labels: [] · created 2020-12-24T11:45:52 · updated 2020-12-28T17:35:41 · closed 2020-12-28T17:16:04 · https://github.com/huggingface/datasets/pull/1631 · API: https://api.github.com/repos/huggingface/datasets/issues/1631
I made a small change to the citation.
#1630 [issue, closed] Adding UKP Argument Aspect Similarity Corpus
id 774,332,129 · rabeehk · comments: 3 · labels: ["dataset request"] · created 2020-12-24T11:01:31 · updated 2022-10-05T12:36:12 · closed 2022-10-05T12:36:12 · https://github.com/huggingface/datasets/issues/1630 · API: https://api.github.com/repos/huggingface/datasets/issues/1630
Hi, it would be great to have this dataset included.

## Adding a Dataset
- **Name:** UKP Argument Aspect Similarity Corpus
- **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sentence pair was annotated via crowdsourcing as either “high similarity”, “some similarity”, “no similarity” or “not related” with respect to the topic.
- **Paper:** https://www.aclweb.org/anthology/P19-1054/
- **Data:** https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998
- **Motivation:** This is one of the datasets currently used frequently in recent adapter papers, like https://arxiv.org/pdf/2005.00247.pdf

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Thank you
#1629 [PR, closed] add wongnai_reviews test set labels
id 774,255,716 · cstorm125 · comments: 0 · labels: [] · created 2020-12-24T08:02:31 · updated 2020-12-28T17:23:39 · closed 2020-12-28T17:23:39 · https://github.com/huggingface/datasets/pull/1629 · API: https://api.github.com/repos/huggingface/datasets/issues/1629
- add test set labels provided by @ekapolc
- refactor `star_rating` to a `datasets.features.ClassLabel` field
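As a reference for the `star_rating` refactor, a hedged sketch of representing ratings as a `ClassLabel` in recent versions of `datasets` (column and label names assumed, not taken from the PR):

```python
from datasets import ClassLabel, Dataset

ds = Dataset.from_dict({"star_rating": [0, 2, 4]})  # toy integer ratings
# ClassLabel stores integers plus a names mapping, instead of raw strings.
ds = ds.cast_column("star_rating", ClassLabel(names=["1", "2", "3", "4", "5"]))
print(ds.features["star_rating"].int2str(4))  # "5"
```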
#1628 [PR, closed] made suggested changes to hate-speech-and-offensive-language
id 774,091,411 · MisbahKhan789 · comments: 0 · labels: [] · created 2020-12-23T23:25:32 · updated 2020-12-28T10:11:20 · closed 2020-12-28T10:11:20 · https://github.com/huggingface/datasets/pull/1628 · API: https://api.github.com/repos/huggingface/datasets/issues/1628
#1627 [issue, closed] `Dataset.map` disable progress bar
id 773,960,255 · Nickil21 · comments: 5 · labels: [] · created 2020-12-23T17:53:42 · updated 2025-05-16T16:36:24 · closed 2020-12-26T19:57:17 · https://github.com/huggingface/datasets/issues/1627 · API: https://api.github.com/repos/huggingface/datasets/issues/1627
I can't find anything to turn off the `tqdm` progress bars while running a preprocessing function using `Dataset.map`. I want something akin to `disable_tqdm=True` in `transformers`. Is there something like that?
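In recent versions of `datasets` there is a global switch for this; a short sketch (my assumption: this helper did not yet exist in the 1.x release this issue was filed against):

```python
import datasets

datasets.disable_progress_bar()  # global off-switch for tqdm bars

ds = datasets.Dataset.from_dict({"x": [1, 2, 3]})
ds = ds.map(lambda ex: {"x": ex["x"] + 1})  # runs without a progress bar
```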
#1626 [PR, closed] Fix dataset_dict.shuffle with single seed
id 773,840,368 · lhoestq · comments: 0 · labels: [] · created 2020-12-23T14:33:36 · updated 2021-01-04T10:00:04 · closed 2021-01-04T10:00:03 · https://github.com/huggingface/datasets/pull/1626 · API: https://api.github.com/repos/huggingface/datasets/issues/1626
Fix #1610. I added support for a single integer in `DatasetDict.shuffle`; previously only a dictionary of seeds was allowed. Moreover, I added the missing `seed` parameter; previously only `seeds` was allowed.
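A minimal usage sketch of the fixed behavior, assuming a toy split:

```python
from datasets import Dataset, DatasetDict

dd = DatasetDict({"train": Dataset.from_dict({"x": list(range(10))})})

# With this fix, one int seeds every split; before, only a per-split
# dict of seeds was accepted.
print(dd.shuffle(seed=42)["train"]["x"])
```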
#1625 [PR, closed] Fixed bug in the shape property
id 773,771,596 · noaonoszko · comments: 0 · labels: [] · created 2020-12-23T13:33:21 · updated 2021-01-02T23:22:52 · closed 2020-12-23T14:13:13 · https://github.com/huggingface/datasets/pull/1625 · API: https://api.github.com/repos/huggingface/datasets/issues/1625
Fix to the bug reported in issue #1622. Just replaced `return tuple(self._indices.num_rows, self._data.num_columns)` by `return (self._indices.num_rows, self._data.num_columns)`.
#1624 [issue, closed] Cannot download ade_corpus_v2
id 773,669,700 · him1411 · comments: 2 · labels: [] · created 2020-12-23T10:58:14 · updated 2021-08-03T05:08:54 · closed 2021-08-03T05:08:54 · https://github.com/huggingface/datasets/issues/1624 · API: https://api.github.com/repos/huggingface/datasets/issues/1624
I tried to get the dataset following this URL: https://huggingface.co/datasets/ade_corpus_v2, but received this error:

```
Traceback (most recent call last):
  File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
    use_etag=download_config.use_etag,
  File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
    raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
    use_etag=download_config.use_etag,
  File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
    raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
    path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
  File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module
    combined_path, github_file_path, file_path
FileNotFoundError: Couldn't find file locally at ade_corpus_v2/ade_corpus_v2.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py
```
#1623 [PR, closed] Add CLIMATE-FEVER dataset
id 772,950,710 · tdiggelm · comments: 1 · labels: [] · created 2020-12-22T13:34:05 · updated 2020-12-22T17:53:53 · closed 2020-12-22T17:53:53 · https://github.com/huggingface/datasets/pull/1623 · API: https://api.github.com/repos/huggingface/datasets/issues/1623
As suggested by @SBrandeis, a fresh PR that adds CLIMATE-FEVER. Replaces PR #1579.

A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present. More information can be found at:
* Homepage: http://climatefever.ai
* Paper: https://arxiv.org/abs/2012.00614
#1622 [issue, closed] Can't call shape on the output of select()
id 772,940,768 · noaonoszko · comments: 2 · labels: [] · created 2020-12-22T13:18:40 · updated 2020-12-23T13:37:13 · closed 2020-12-23T13:37:12 · https://github.com/huggingface/datasets/issues/1622 · API: https://api.github.com/repos/huggingface/datasets/issues/1622
I get the error `TypeError: tuple expected at most 1 argument, got 2` when calling `shape` on the output of `select()`. The problem is line 531, in the `shape` property in arrow_dataset.py:

```python
return tuple(self._indices.num_rows, self._data.num_columns)
```

This makes sense, since `tuple(num1, num2)` is not a valid call. Full code to reproduce:

```python
dataset = load_dataset("cnn_dailymail", "3.0.0")
train_set = dataset["train"]
t = train_set.select(range(10))
print(t.shape)
```
#1621 [PR, closed] updated dutch_social.py for loading jsonl (lines instead of list) files
id 772,940,417 · skyprince999 · comments: 0 · labels: [] · created 2020-12-22T13:18:11 · updated 2020-12-23T11:51:51 · closed 2020-12-23T11:51:51 · https://github.com/huggingface/datasets/pull/1621 · API: https://api.github.com/repos/huggingface/datasets/issues/1621
The data loader is modified to load files on the fly. Earlier it was reading the entire file and then processing the records. Please refer to the previous PR #1321.
#1620 [PR, closed] Adding myPOS2017 dataset
id 772,620,056 · hungluumfc · comments: 4 · labels: ["dataset contribution"] · created 2020-12-22T04:04:55 · updated 2022-10-03T09:38:23 · closed 2022-10-03T09:38:23 · https://github.com/huggingface/datasets/pull/1620 · API: https://api.github.com/repos/huggingface/datasets/issues/1620
myPOS Corpus (Myanmar Part-of-Speech Corpus) for Myanmar-language NLP research and development.
#1619 [PR, closed] data loader for reading comprehension task
id 772,508,558 · songfeng · comments: 2 · labels: [] · created 2020-12-21T22:40:34 · updated 2020-12-28T10:32:53 · closed 2020-12-28T10:32:53 · https://github.com/huggingface/datasets/pull/1619 · API: https://api.github.com/repos/huggingface/datasets/issues/1619
Added the doc2dial data loader and dummy data for the reading comprehension task.
#1618 [issue, closed] Can't filter language:EN on https://huggingface.co/datasets
id 772,248,730 · davidefiocco · comments: 3 · labels: [] · created 2020-12-21T15:23:23 · updated 2020-12-22T17:17:00 · closed 2020-12-22T17:16:09 · https://github.com/huggingface/datasets/issues/1618 · API: https://api.github.com/repos/huggingface/datasets/issues/1618
When visiting https://huggingface.co/datasets, I don't see an obvious way to filter only English datasets. This is unexpected for me; am I missing something? I'd expect English to be selectable in the language widget. This problem reproduces on Mozilla Firefox and MS Edge: ![screenshot](https://user-images.githubusercontent.com/4547987/102792244-892e1f00-43a8-11eb-9e89-4826ca201a87.png)
#1617 [PR, closed] cifar10 initial commit
id 772,084,764 · czabo · comments: 2 · labels: [] · created 2020-12-21T11:18:50 · updated 2020-12-22T10:18:05 · closed 2020-12-22T10:11:28 · https://github.com/huggingface/datasets/pull/1617 · API: https://api.github.com/repos/huggingface/datasets/issues/1617
CIFAR-10 dataset. Didn't add the tagging since there are no vision-related tags.
#1616 [PR, closed] added TurkishMovieSentiment dataset
id 772,074,229 · yavuzKomecoglu · comments: 1 · labels: [] · created 2020-12-21T11:03:16 · updated 2020-12-24T07:08:41 · closed 2020-12-23T16:50:06 · https://github.com/huggingface/datasets/pull/1616 · API: https://api.github.com/repos/huggingface/datasets/issues/1616
This PR adds **TurkishMovieSentiment**, a dataset of Turkish movie reviews.
- **Homepage:** [https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks](https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks)
- **Point of Contact:** [Mustafa Keskin](https://www.linkedin.com/in/mustfkeskin/)
#1615 [issue, open] Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
id 771,641,088 · SapirWeissbuch · comments: 10 · labels: [] · created 2020-12-20T17:27:38 · updated 2021-06-25T13:11:33 · closed: null · https://github.com/huggingface/datasets/issues/1615 · API: https://api.github.com/repos/huggingface/datasets/issues/1615
Hello, I'm having an issue downloading the TriviaQA dataset with `load_dataset`.

## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3

## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", cache_dir="./datasets")
```

## The output:
1. Download begins:
```
Downloading and preparing dataset trivia_qa/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to /cs/labs/gabis/sapirweissbuch/trivia_qa/rc/1.1.0/e734e28133f4d9a353af322aa52b9f266f6f27cbf2f072690a1694e577546b0d...
Downloading: 17%|███████████████████▉ | 446M/2.67G [00:37<04:45, 7.77MB/s]
```
2. 100% is reached.
3. It got stuck here for about an hour and added an additional 30G of data to the "./datasets" directory. I killed the process eventually.

A similar issue can be observed in Google Colab: https://colab.research.google.com/drive/1nn1Lw02GhfGFylzbS2j6yksGjPo7kkN-?usp=sharing

## Expected behaviour:
The dataset "TriviaQA" should be successfully downloaded.
#1613 [PR, closed] Add id_clickbait
id 771,577,050 · cahya-wirawan · comments: 0 · labels: [] · created 2020-12-20T12:24:49 · updated 2020-12-22T17:45:27 · closed 2020-12-22T17:45:27 · https://github.com/huggingface/datasets/pull/1613 · API: https://api.github.com/repos/huggingface/datasets/issues/1613
This is the CLICK-ID dataset, a collection of annotated clickbait Indonesian news headlines that were collected from 12 local online news sources.
#1612 [PR, closed] Adding wiki asp dataset as new PR
id 771,558,160 · katnoria · comments: 0 · labels: [] · created 2020-12-20T10:25:08 · updated 2020-12-21T14:13:33 · closed 2020-12-21T14:13:33 · https://github.com/huggingface/datasets/pull/1612 · API: https://api.github.com/repos/huggingface/datasets/issues/1612
Hi @lhoestq, adding wiki asp as a new branch because #1539 has other commits. This version has dummy data for each domain (<20/30 KB).
#1611 [issue, closed] shuffle with torch generator
id 771,486,456 · rabeehkarimimahabadi · comments: 8 · labels: ["enhancement"] · created 2020-12-20T00:57:14 · updated 2022-06-01T15:30:13 · closed 2022-06-01T15:30:13 · https://github.com/huggingface/datasets/issues/1611 · API: https://api.github.com/repos/huggingface/datasets/issues/1611
Hi, I need to shuffle multiple large datasets with `generator = torch.Generator()` for a distributed sampler, which needs to make sure the datasets are consistent across different cores. For this it is really necessary for me to use a torch generator, but based on the documentation this generator is not supported with datasets. I really need to make shuffle work with this generator, and I was wondering what I can do about this issue. Thanks for your help @lhoestq
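A workaround sketch (my assumption, not an official recommendation): `Dataset.shuffle` takes an int seed (or, in later versions, a numpy Generator) rather than a `torch.Generator`, so derive a deterministic seed from the torch generator instead. Seeding with the epoch keeps the permutation identical across distributed workers:

```python
import torch
from datasets import Dataset

dataset = Dataset.from_dict({"x": list(range(8))})  # toy stand-in

epoch = 3
g = torch.Generator()
g.manual_seed(epoch)  # same epoch on every worker -> same permutation
seed = int(torch.randint(0, 2**31 - 1, (1,), generator=g).item())
print(dataset.shuffle(seed=seed)["x"])
```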
#1610 [issue, closed] shuffle does not accept seed
id 771,453,599 · rabeehk · comments: 3 · labels: ["bug"] · created 2020-12-19T20:59:39 · updated 2021-01-04T10:00:03 · closed 2021-01-04T10:00:03 · https://github.com/huggingface/datasets/issues/1610 · API: https://api.github.com/repos/huggingface/datasets/issues/1610
Hi, I need to shuffle the dataset, and this needs to be based on epoch+seed to be consistent across the cores. When I pass a seed to shuffle, it does not accept the seed. Could you assist me with this? Thanks @lhoestq
#1609 [issue, closed] Not able to use 'jigsaw_toxicity_pred' dataset
id 771,421,881 · jassimran · comments: 2 · labels: [] · created 2020-12-19T17:35:48 · updated 2020-12-22T16:42:24 · closed 2020-12-22T16:42:23 · https://github.com/huggingface/datasets/issues/1609 · API: https://api.github.com/repos/huggingface/datasets/issues/1609
When trying to use the jigsaw_toxicity_pred dataset, like this in a [colab](https://colab.research.google.com/drive/1LwO2A5M2X5dvhkAFYE4D2CUT3WUdWnkn?usp=sharing):

```
from datasets import list_datasets, list_metrics, load_dataset, load_metric

ds = load_dataset("jigsaw_toxicity_pred")
```

I see the error below:

```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py

During handling of the above exception, another exception occurred:

FileNotFoundError                         Traceback (most recent call last)
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py

During handling of the above exception, another exception occurred:

FileNotFoundError                         Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
    280             raise FileNotFoundError(
    281                 "Couldn't find file locally at {}, or remotely at {} or {}".format(
--> 282                     combined_path, github_file_path, file_path
    283                 )
    284             )

FileNotFoundError: Couldn't find file locally at jigsaw_toxicity_pred/jigsaw_toxicity_pred.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py
```
#1608 [PR, closed] adding ted_talks_iwslt
id 771,329,434 · skyprince999 · comments: 1 · labels: [] · created 2020-12-19T07:36:41 · updated 2021-01-02T15:44:12 · closed 2021-01-02T15:44:11 · https://github.com/huggingface/datasets/pull/1608 · API: https://api.github.com/repos/huggingface/datasets/issues/1608
UPDATE 2 (2nd Jan): Wrote a long writeup on the Slack channel. I don't think this approach is correct: basically it created language pairs (109*108). Running `pytest` went on for more than 40 hours and it was still running! So I am working on a different approach, such that the number of configs equals the number of languages. Will make a new pull request with that.

UPDATE: This requires manually downloading the dataset.

This is a draft version.
#1607 [PR, closed] modified tweets hate speech detection
id 771,325,852 · darshan-gandhi · comments: 0 · labels: [] · created 2020-12-19T07:13:40 · updated 2020-12-21T16:08:48 · closed 2020-12-21T16:08:48 · https://github.com/huggingface/datasets/pull/1607 · API: https://api.github.com/repos/huggingface/datasets/issues/1607
#1606 [PR, closed] added Semantic Scholar Open Research Corpus
id 771,116,455 · bhavitvyamalik · comments: 1 · labels: [] · created 2020-12-18T19:21:24 · updated 2021-02-03T09:30:59 · closed 2021-02-03T09:30:59 · https://github.com/huggingface/datasets/pull/1606 · API: https://api.github.com/repos/huggingface/datasets/issues/1606
I picked up this dataset, the [Semantic Scholar Open Research Corpus](https://allenai.org/data/s2orc), but it contains 6000 files to be downloaded. I tried the current code with 100 files and it worked fine (took ~15GB of space). For 6000 files it would occupy ~900GB of space, which I don't have. Can someone from the HF team with that much disk space help me generate the dataset_infos and dummy_data?
#1605 [issue, closed] Navigation version breaking
id 770,979,620 · mttk · comments: 1 · labels: [] · created 2020-12-18T15:36:24 · updated 2022-10-05T12:35:11 · closed 2022-10-05T12:35:11 · https://github.com/huggingface/datasets/issues/1605 · API: https://api.github.com/repos/huggingface/datasets/issues/1605
Hi, when navigating the docs (Chrome, Ubuntu) (e.g. on this page: https://huggingface.co/docs/datasets/loading_metrics.html#using-a-custom-metric-script), the version control dropdown has the wrong string displayed as the current version: ![image](https://user-images.githubusercontent.com/3007947/102632187-02cad080-414f-11eb-813b-28f3c8d80def.png)

**Edit:** this actually happens _only_ if you open a link to a concrete subsection. IMO, the best way to fix this without getting too deep into the intricacies of retrieving version numbers from the URL would be to change [this](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L112) line to:

```
let label = (version in versionMapping) ? version : stableVersion
```

which delegates the check to the (already maintained) keys of the version mapping dictionary and should be more robust. There's a similar ternary expression [here](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L97) which should also fail in this case. I'd also suggest swapping this [block](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L80-L90) to `string.contains(version) for version in versionMapping`, which might be more robust. I'd add a PR myself but I'm by no means competent in JS :)

I also have a side question wrt. docs versioning: I'm trying to make docs for a project which are versioned alike to your dropdown versioning. I was wondering how you handle storage of multiple doc versions on your server? Do you update what https://huggingface.co/docs/datasets points to for every stable release and manually create new folders for each released version? So far I'm building and publishing (scp-ing) the docs to the server with a GitHub action, which works well for a single version, but I would ideally need to reorder the public files, triggered on a new release.
#1604 [issue, closed] Add tests for the download functions?
id 770,862,112 · SBrandeis · comments: 1 · labels: ["enhancement"] · created 2020-12-18T12:49:25 · updated 2022-10-05T13:04:24 · closed 2022-10-05T13:04:24 · https://github.com/huggingface/datasets/issues/1604 · API: https://api.github.com/repos/huggingface/datasets/issues/1604
AFAIK the download functions in `DownloadManager` are not tested yet. It could be good to add some to ensure behavior is as expected.
#1603 [PR, closed] Add retries to HTTP requests
id 770,857,221 · SBrandeis · comments: 1 · labels: ["enhancement"] · created 2020-12-18T12:41:31 · updated 2020-12-22T15:34:07 · closed 2020-12-22T15:34:07 · https://github.com/huggingface/datasets/pull/1603 · API: https://api.github.com/repos/huggingface/datasets/issues/1603
## What does this PR do?
Adding retries to HTTP GET & HEAD requests, when they fail with a `ConnectTimeout` exception.

The "canonical" way to do this is to use [urllib's Retry class](https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#urllib3.util.Retry) and wrap it in a [HttpAdapter](https://requests.readthedocs.io/en/master/api/#requests.adapters.HTTPAdapter). That seems a bit overkill to me, plus it forces us to use the `requests.Session` object. I prefer this simpler implementation. I'm open to remarks and suggestions @lhoestq @yjernite

Fixes #1102
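For illustration, a minimal sketch of the simple retry loop this PR describes; the names below are illustrative, not the actual `datasets` implementation:

```python
import time
import requests

def get_with_retries(url: str, max_retries: int = 3, backoff: float = 1.0):
    # Retry only on ConnectTimeout, with a linear backoff between attempts;
    # the last attempt re-raises the exception.
    for attempt in range(max_retries):
        try:
            return requests.get(url, timeout=10)
        except requests.exceptions.ConnectTimeout:
            if attempt == max_retries - 1:
                raise
            time.sleep(backoff * (attempt + 1))
```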
#1602 [PR, closed] second update of id_newspapers_2018
id 770,841,810 · cahya-wirawan · comments: 0 · labels: [] · created 2020-12-18T12:16:37 · updated 2020-12-22T10:41:15 · closed 2020-12-22T10:41:14 · https://github.com/huggingface/datasets/pull/1602 · API: https://api.github.com/repos/huggingface/datasets/issues/1602
The feature "url" is currently wrongly set to data["date"]; this PR fixes it to data["url"]. I also added an additional POC.
#1601 [PR, closed] second update of the id_newspapers_2018
id 770,758,914 · cahya-wirawan · comments: 1 · labels: [] · created 2020-12-18T10:10:20 · updated 2020-12-18T12:15:31 · closed 2020-12-18T12:15:31 · https://github.com/huggingface/datasets/pull/1601 · API: https://api.github.com/repos/huggingface/datasets/issues/1601
The feature "url" is currently wrongly set to data["date"]; this PR fixes it to data["url"]. I also added an additional POC.
#1600 [issue, closed] AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
id 770,582,960 · david-waterworth · comments: 7 · labels: ["question"] · created 2020-12-18T05:37:10 · updated 2023-05-03T04:22:55 · closed 2020-12-21T07:38:58 · https://github.com/huggingface/datasets/issues/1600 · API: https://api.github.com/repos/huggingface/datasets/issues/1600
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'". Am I doing something wrong?

```
from datasets import load_dataset

dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_split(test_size=0.1)
```

> AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
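The cause is that `load_dataset` returns a `DatasetDict` keyed by split, while `train_test_split` is defined on `Dataset`; indexing into the split first resolves it:

```python
from datasets import load_dataset

# Select the "train" split before calling train_test_split.
dataset = load_dataset("csv", data_files="data.txt")["train"]
dataset = dataset.train_test_split(test_size=0.1)
```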
#1599 [PR, closed] add Korean Sarcasm Dataset
id 770,431,389 · stevhliu · comments: 0 · labels: [] · created 2020-12-17T22:49:56 · updated 2021-09-17T16:54:32 · closed 2020-12-23T17:25:59 · https://github.com/huggingface/datasets/pull/1599 · API: https://api.github.com/repos/huggingface/datasets/issues/1599
#1598 [PR, closed] made suggested changes in fake-news-english
id 770,332,440 · MisbahKhan789 · comments: 0 · labels: [] · created 2020-12-17T20:06:29 · updated 2020-12-18T09:43:58 · closed 2020-12-18T09:43:57 · https://github.com/huggingface/datasets/pull/1598 · API: https://api.github.com/repos/huggingface/datasets/issues/1598
#1597 [PR, closed] adding hate-speech-and-offensive-language
id 770,276,140 · MisbahKhan789 · comments: 1 · labels: [] · created 2020-12-17T18:35:15 · updated 2020-12-23T23:27:17 · closed 2020-12-23T23:27:16 · https://github.com/huggingface/datasets/pull/1597 · API: https://api.github.com/repos/huggingface/datasets/issues/1597
#1596 [PR, closed] made suggested changes to hate-speech-and-offensive-language
id 770,260,531 · MisbahKhan789 · comments: 0 · labels: [] · created 2020-12-17T18:09:26 · updated 2020-12-17T18:36:02 · closed 2020-12-17T18:35:53 · https://github.com/huggingface/datasets/pull/1596 · API: https://api.github.com/repos/huggingface/datasets/issues/1596
#1595 [PR, closed] Logiqa en
id 770,153,693 · aclifton314 · comments: 8 · labels: ["dataset contribution"] · created 2020-12-17T15:42:00 · updated 2022-10-03T09:38:30 · closed 2022-10-03T09:38:30 · https://github.com/huggingface/datasets/pull/1595 · API: https://api.github.com/repos/huggingface/datasets/issues/1595
LogiQA in English.
#1594 [issue, closed] connection error
id 769,747,767 · rabeehkarimimahabadi · comments: 4 · labels: [] · created 2020-12-17T09:18:34 · updated 2022-06-01T15:33:42 · closed 2022-06-01T15:33:41 · https://github.com/huggingface/datasets/issues/1594 · API: https://api.github.com/repos/huggingface/datasets/issues/1594
Hi, I am hitting this error, thanks.

```
Traceback (most recent call last):
  File "finetune_t5_trainer.py", line 379, in <module>
    main()
  File "finetune_t5_trainer.py", line 208, in main
    if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO
  File "finetune_t5_trainer.py", line 207, in <dictcomp>
    for task in data_args.eval_tasks}
  File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset
    dataset = self.load_dataset(split=split)
  File "/workdir/seq2seq/data/tasks.py", line 66, in load_dataset
    return datasets.load_dataset(self.task.name, split=split, script_version="master")
  File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset
    path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
  File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 267, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 308, in cached_path
    use_etag=download_config.use_etag,
  File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 487, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/boolq/boolq.py
el/0 I1217 01:11:33.898849 354161 main shadow.py:210 Current job status: FINISHED
```
#1593 [issue, closed] Access to key in DatasetDict map
id 769,611,386 · ZhaofengWu · comments: 3 · labels: ["enhancement"] · created 2020-12-17T07:02:20 · updated 2022-10-05T13:47:28 · closed 2022-10-05T12:33:06 · https://github.com/huggingface/datasets/issues/1593 · API: https://api.github.com/repos/huggingface/datasets/issues/1593
It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper of `Dataset.map`, so it is easy to directly implement this functionality in the client code. Still, it'd be nice if there can be a flag, similar to `with_indices`, that allows the callable to know the key inside `DatasetDict`.
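A sketch of the client-side implementation the issue alludes to, since `DatasetDict.map` is a thin wrapper; `map_with_key` is a hypothetical helper, not part of the library:

```python
from datasets import Dataset, DatasetDict

def map_with_key(dd: DatasetDict, fn, **kwargs) -> DatasetDict:
    # Pass the split key through to the callable; key=key pins the loop
    # variable inside each lambda.
    return DatasetDict(
        {key: ds.map(lambda ex, key=key: fn(ex, key), **kwargs) for key, ds in dd.items()}
    )

dd = DatasetDict({"train": Dataset.from_dict({"x": [1, 2]})})
out = map_with_key(dd, lambda ex, key: {"split": key})
print(out["train"]["split"])  # ['train', 'train']
```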
#1591 [issue, closed] IWSLT-17 Link Broken
id 769,383,714 · ZhaofengWu · comments: 2 · labels: ["duplicate", "dataset bug"] · created 2020-12-17T00:46:42 · updated 2020-12-18T08:06:36 · closed 2020-12-18T08:05:28 · https://github.com/huggingface/datasets/issues/1591 · API: https://api.github.com/repos/huggingface/datasets/issues/1591
```
FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz
```
#1590 [issue, closed] Add helper to resolve namespace collision
id 769,242,858 · jramapuram · comments: 5 · labels: [] · created 2020-12-16T20:17:24 · updated 2022-06-01T15:32:04 · closed 2022-06-01T15:32:04 · https://github.com/huggingface/datasets/issues/1590 · API: https://api.github.com/repos/huggingface/datasets/issues/1590
Many projects use a module called `datasets`; however, this is incompatible with huggingface datasets. It would be great if there was some helper or similar function to resolve such a common conflict.
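There is no official helper for this, but one common workaround (my assumption: the conflicting local package sits at `./datasets/__init__.py`) is to load the local module under a different name via importlib so it stops shadowing the library:

```python
import importlib.util
import pathlib

# Load the project-local "datasets" package under an alias.
spec = importlib.util.spec_from_file_location(
    "local_datasets", pathlib.Path("datasets") / "__init__.py"
)
local_datasets = importlib.util.module_from_spec(spec)
spec.loader.exec_module(local_datasets)

# "import datasets" can then resolve to the installed huggingface library,
# provided the local package directory is not first on sys.path.
```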
#1589 [PR, closed] Update doc2dial.py
id 769,187,141 · songfeng · comments: 1 · labels: [] · created 2020-12-16T18:50:56 · updated 2022-07-06T15:19:57 · closed 2022-07-06T15:19:57 · https://github.com/huggingface/datasets/pull/1589 · API: https://api.github.com/repos/huggingface/datasets/issues/1589
Added data loader for machine reading comprehension tasks proposed in the Doc2Dial EMNLP 2020 paper.
#1588 [PR, closed] Modified hind encorp
id 769,068,227 · rahul-art · comments: 1 · labels: [] · created 2020-12-16T16:28:14 · updated 2020-12-16T22:41:53 · closed 2020-12-16T17:20:28 · https://github.com/huggingface/datasets/pull/1588 · API: https://api.github.com/repos/huggingface/datasets/issues/1588
Description added, unnecessary comments removed from the .py file, and README.md reformatted. @lhoestq, for #1584.
#1587 [PR, closed] Add nq_open question answering dataset
id 768,929,877 · Nilanshrajput · comments: 1 · labels: [] · created 2020-12-16T14:22:08 · updated 2020-12-17T16:07:10 · closed 2020-12-17T16:07:10 · https://github.com/huggingface/datasets/pull/1587 · API: https://api.github.com/repos/huggingface/datasets/issues/1587
This PR is a copy of #1506, due to the messed-up git history in that PR.
#1586 [PR, closed] added irc disentangle dataset
id 768,864,502 · dhruvjoshi1998 · comments: 5 · labels: [] · created 2020-12-16T13:25:58 · updated 2021-01-29T10:28:53 · closed 2021-01-29T10:28:53 · https://github.com/huggingface/datasets/pull/1586 · API: https://api.github.com/repos/huggingface/datasets/issues/1586
added irc disentanglement dataset
#1585 [issue, closed] FileNotFoundError for `amazon_polarity`
id 768,831,171 · phtephanx · comments: 1 · labels: [] · created 2020-12-16T12:51:05 · updated 2020-12-16T16:02:56 · closed 2020-12-16T16:02:56 · https://github.com/huggingface/datasets/issues/1585 · API: https://api.github.com/repos/huggingface/datasets/issues/1585
Version: `datasets==v1.1.3`

### Reproduction
```python
from datasets import load_dataset

data = load_dataset("amazon_polarity")
```

crashes with

```bash
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py
```

and

```bash
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py
```

and

```bash
FileNotFoundError: Couldn't find file locally at amazon_polarity/amazon_polarity.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py
```
#1584 [PR, closed] Load hind encorp
id 768,820,406 · rahul-art · comments: 0 · labels: [] · created 2020-12-16T12:38:38 · updated 2020-12-18T02:27:24 · closed 2020-12-18T02:27:24 · https://github.com/huggingface/datasets/pull/1584 · API: https://api.github.com/repos/huggingface/datasets/issues/1584
Code reformatted and well documented, YAML tags added.
#1583 [PR, closed] Update metrics docstrings.
id 768,795,986 · Fraser-Greenlee · comments: 0 · labels: [] · created 2020-12-16T12:14:18 · updated 2020-12-18T18:39:06 · closed 2020-12-18T18:39:06 · https://github.com/huggingface/datasets/pull/1583 · API: https://api.github.com/repos/huggingface/datasets/issues/1583
#1478. Correcting the argument descriptions for metrics. Let me know if there are any issues.
#1582 [PR, closed] Adding wiki lingua dataset as new branch
id 768,776,617 · katnoria · comments: 0 · labels: [] · created 2020-12-16T11:53:07 · updated 2020-12-17T18:06:46 · closed 2020-12-17T18:06:45 · https://github.com/huggingface/datasets/pull/1582 · API: https://api.github.com/repos/huggingface/datasets/issues/1582
Adding the dataset as new branch as advised here: #1470
#1581 [issue, closed] Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers'
id 768,320,594 · eduardofv · comments: 5 · labels: [] · created 2020-12-16T00:02:21 · updated 2021-06-17T15:40:45 · closed 2021-06-17T15:40:45 · https://github.com/huggingface/datasets/issues/1581 · API: https://api.github.com/repos/huggingface/datasets/issues/1581
I am using a docker container, based on the latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively; Dockerfile attached below). Importing transformers throws a Permission Error on access to `/.cache`:

```
$ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data -v $(pwd):/root -v $(pwd)/models/:/root/models -v $(pwd)/saved_models/:/root/saved_models -e "HOST_HOSTNAME=$(hostname)" hf-error:latest /bin/bash

[TensorFlow ASCII banner]

You are running this container as user with ID 1000 and group 1000, which should map to the ID and group for your user on the Docker host. Great!

tf-docker /root > python
Python 3.6.9 (default, Oct 8 2020, 12:12:24)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
2020-12-15 23:53:21.165827: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 22, in <module>
    from .integrations import (  # isort:skip
  File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 5, in <module>
    from .trainer_utils import EvaluationStrategy
  File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_utils.py", line 25, in <module>
    from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
  File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 88, in <module>
    import datasets  # noqa: F401
  File "/usr/local/lib/python3.6/dist-packages/datasets/__init__.py", line 26, in <module>
    from .arrow_dataset import Dataset, concatenate_datasets
  File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 40, in <module>
    from .arrow_reader import ArrowReader
  File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 31, in <module>
    from .utils import cached_path, logging
  File "/usr/local/lib/python3.6/dist-packages/datasets/utils/__init__.py", line 20, in <module>
    from .download_manager import DownloadManager, GenerateMode
  File "/usr/local/lib/python3.6/dist-packages/datasets/utils/download_manager.py", line 25, in <module>
    from .file_utils import HF_DATASETS_CACHE, cached_path, get_from_cache, hash_url_to_filename
  File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 118, in <module>
    os.makedirs(HF_MODULES_CACHE, exist_ok=True)
  File "/usr/lib/python3.6/os.py", line 210, in makedirs
    makedirs(head, mode, exist_ok)
  File "/usr/lib/python3.6/os.py", line 210, in makedirs
    makedirs(head, mode, exist_ok)
  File "/usr/lib/python3.6/os.py", line 220, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/.cache'
```

I've pinned the problem to `RUN pip install datasets`; by commenting it out you can actually import transformers correctly. Another workaround I've found is creating the directory and giving permissions to it directly in the Dockerfile:

```
FROM tensorflow/tensorflow:latest-gpu-jupyter
WORKDIR /root
EXPOSE 80
EXPOSE 8888
EXPOSE 6006
ENV SHELL /bin/bash
ENV PATH="/root/.local/bin:${PATH}"
ENV CUDA_CACHE_PATH="/root/cache/cuda"
ENV CUDA_CACHE_MAXSIZE="4294967296"
ENV TFHUB_CACHE_DIR="/root/cache/tfhub"
RUN pip install --upgrade pip
RUN apt update -y && apt upgrade -y
RUN pip install transformers
#Installing datasets will throw the error, try commenting and rebuilding
RUN pip install datasets
#Another workaround is creating the directory and give permissions explicitly
#RUN mkdir /.cache
#RUN chmod 777 /.cache
```
#1580 [PR, closed] made suggested changes in diplomacy_detection.py
id 768,111,377 · MisbahKhan789 · comments: 0 · labels: [] · created 2020-12-15T19:52:00 · updated 2020-12-16T10:27:52 · closed 2020-12-16T10:27:52 · https://github.com/huggingface/datasets/pull/1580 · API: https://api.github.com/repos/huggingface/datasets/issues/1580
#1579 [PR, closed] Adding CLIMATE-FEVER dataset
id 767,808,465 · tdiggelm · comments: 5 · labels: [] · created 2020-12-15T16:49:22 · updated 2020-12-22T13:43:16 · closed 2020-12-22T13:43:15 · https://github.com/huggingface/datasets/pull/1579 · API: https://api.github.com/repos/huggingface/datasets/issues/1579
This PR requests the addition of the CLIMATE-FEVER dataset: a dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present. More information can be found at:
- Homepage: <http://climatefever.ai>
- Paper: <https://arxiv.org/abs/2012.00614>
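Once merged, loading should look roughly like the sketch below; the `climate_fever` dataset id, the single `test` split, and the `claim`/`evidences` field names are assumptions based on this PR's description:

```python
from datasets import load_dataset

# Hypothetical usage sketch: each record carries one claim plus its five
# manually annotated evidence sentences.
dataset = load_dataset("climate_fever")
example = dataset["test"][0]
print(example["claim"])
print(len(example["evidences"]))  # expected: 5
```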
true
767,760,513
https://api.github.com/repos/huggingface/datasets/issues/1578
https://github.com/huggingface/datasets/pull/1578
1,578
update multiwozv22 checksums
closed
0
2020-12-15T16:13:52
2020-12-15T17:06:29
2020-12-15T17:06:29
yjernite
[]
A file was updated in the GitHub repo for the dataset.
true
767,342,432
https://api.github.com/repos/huggingface/datasets/issues/1577
https://github.com/huggingface/datasets/pull/1577
1,577
Add comet metric
closed
1
2020-12-15T08:56:00
2021-01-14T13:33:10
2021-01-14T13:33:10
ricardorei
[]
Hey! I decided to add our new Crosslingual Optimized Metric for Evaluation of Translation (COMET) to the list of available metrics. COMET was [presented at EMNLP20](https://www.aclweb.org/anthology/2020.emnlp-main.213/) and it is, so far, the highest-performing metric on the WMT19 benchmark. We also participated in the [WMT20 Metrics shared task](http://www.statmt.org/wmt20/pdf/2020.wmt-1.101.pdf), where COMET was once again validated as a top-performing metric. I hope that this metric will help researchers and industry workers to better validate their MT systems in the future 🤗 ! Cheers, Ricardo
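Usage should look roughly like the sketch below once the metric lands; the `comet` metric id and the `sources`/`predictions`/`references` argument names are assumptions based on COMET's source/hypothesis/reference signature:

```python
from datasets import load_metric

# Hypothetical usage sketch; computing scores downloads a COMET model
# checkpoint, so this is not a lightweight call.
comet = load_metric("comet")
sources = ["Dem Feuer konnte Einhalt geboten werden."]
predictions = ["The fire could be stopped."]
references = ["They were able to control the fire."]
results = comet.compute(sources=sources, predictions=predictions, references=references)
print(results)
```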
true
767,080,645
https://api.github.com/repos/huggingface/datasets/issues/1576
https://github.com/huggingface/datasets/pull/1576
1,576
Remove the contributors section
closed
0
2020-12-15T01:47:15
2020-12-15T12:53:47
2020-12-15T12:53:46
clmnt
[]
Sourcerer is down.
true
767,076,374
https://api.github.com/repos/huggingface/datasets/issues/1575
https://github.com/huggingface/datasets/pull/1575
1,575
Hind_Encorp all done
closed
11
2020-12-15T01:36:02
2020-12-16T15:15:17
2020-12-16T15:15:17
rahul-art
[]
true
767,015,317
https://api.github.com/repos/huggingface/datasets/issues/1574
https://github.com/huggingface/datasets/pull/1574
1,574
Diplomacy detection 3
closed
0
2020-12-14T23:28:51
2020-12-14T23:29:32
2020-12-14T23:29:32
MisbahKhan789
[]
true
767,011,938
https://api.github.com/repos/huggingface/datasets/issues/1573
https://github.com/huggingface/datasets/pull/1573
1,573
adding dataset for diplomacy detection-2
closed
0
2020-12-14T23:21:37
2020-12-14T23:36:57
2020-12-14T23:36:57
MisbahKhan789
[]
true
767,008,470
https://api.github.com/repos/huggingface/datasets/issues/1572
https://github.com/huggingface/datasets/pull/1572
1,572
add Gnad10 dataset
closed
0
2020-12-14T23:15:02
2021-09-17T16:54:37
2020-12-16T16:52:30
stevhliu
[]
reference [PR#1317](https://github.com/huggingface/datasets/pull/1317)
true
766,981,721
https://api.github.com/repos/huggingface/datasets/issues/1571
https://github.com/huggingface/datasets/pull/1571
1,571
Fixing the KILT tasks to match our current standards
closed
0
2020-12-14T22:26:12
2020-12-14T23:07:41
2020-12-14T23:07:41
yjernite
[]
This introduces a few changes to the KILT (Knowledge Intensive Language Tasks) benchmark to bring it more in line with our current datasets, including adding the (minimal) dataset card and having one config per sub-task.
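With one config per sub-task, loading a single task should look roughly like this; the `kilt_tasks` id and the `nq` (Natural Questions) config name are assumptions about how the sub-tasks end up registered:

```python
from datasets import load_dataset

# Hypothetical sketch: each KILT sub-task becomes its own config.
kilt_nq = load_dataset("kilt_tasks", "nq")
print(kilt_nq)
```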
true
766,830,545
https://api.github.com/repos/huggingface/datasets/issues/1570
https://github.com/huggingface/datasets/pull/1570
1,570
Documentation for loading CSV datasets misleads the user
closed
0
2020-12-14T19:04:37
2020-12-22T19:30:12
2020-12-21T13:47:09
onurgu
[]
Documentation for loading CSV datasets misleads the user into thinking that setting `quote_char` to False will disable quoting. There are two problems here: i) `quote_char` is misspelled; it must be `quotechar`. ii) the documentation should mention `quoting`.
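What actually disables quoting is the csv module's `quoting` constant; a minimal sketch, assuming the CSV loader forwards keyword arguments to `pandas.read_csv` and using a hypothetical `train.csv`:

```python
import csv
from datasets import load_dataset

# quoting=csv.QUOTE_NONE is the knob that disables quoting;
# quotechar=False (or the misspelled quote_char) does not.
dataset = load_dataset(
    "csv",
    data_files={"train": "train.csv"},  # hypothetical file
    quoting=csv.QUOTE_NONE,
)
```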
true
766,758,895
https://api.github.com/repos/huggingface/datasets/issues/1569
https://github.com/huggingface/datasets/pull/1569
1,569
added un_ga dataset
closed
0
2020-12-14T17:42:04
2020-12-15T15:28:58
2020-12-15T15:28:58
param087
[]
Hi :hugs:, This is a PR for the [United Nations General Assembly Resolutions: A Six-Language Parallel Corpus](http://opus.nlpl.eu/UN.php) dataset, with the changes suggested in #1330.
true
766,722,994
https://api.github.com/repos/huggingface/datasets/issues/1568
https://github.com/huggingface/datasets/pull/1568
1,568
Added the dataset clickbait_news_bg
closed
2
2020-12-14T17:03:00
2020-12-15T18:28:56
2020-12-15T18:28:56
tsvm
[]
There was a problem with my [previous PR 1445](https://github.com/huggingface/datasets/pull/1445) after rebasing, so I'm copying the dataset code into a new branch and submitting a new PR.
true
766,382,609
https://api.github.com/repos/huggingface/datasets/issues/1567
https://github.com/huggingface/datasets/pull/1567
1,567
[wording] Update Readme.md
closed
0
2020-12-14T12:34:52
2020-12-15T12:54:07
2020-12-15T12:54:06
thomwolf
[]
Make the features of the library clearer.
true
766,354,236
https://api.github.com/repos/huggingface/datasets/issues/1566
https://github.com/huggingface/datasets/pull/1566
1,566
Add Microsoft Research Sequential Question Answering (SQA) Dataset
closed
1
2020-12-14T12:02:30
2020-12-15T15:24:22
2020-12-15T15:24:22
mattbui
[]
For more information: https://msropendata.com/datasets/b25190ed-0f59-47b1-9211-5962858142c2
true
766,333,940
https://api.github.com/repos/huggingface/datasets/issues/1565
https://github.com/huggingface/datasets/pull/1565
1,565
Create README.md
closed
5
2020-12-14T11:40:23
2021-03-25T14:01:49
2021-03-25T14:01:49
ManuelFay
[]
true
766,266,609
https://api.github.com/repos/huggingface/datasets/issues/1564
https://github.com/huggingface/datasets/pull/1564
1,564
added saudinewsnet
closed
9
2020-12-14T10:35:09
2020-12-22T09:51:04
2020-12-22T09:51:04
abdulelahsm
[]
I'm having issues creating the dummy data and am still investigating how to fix them. I'll close the PR if I can't find a solution.
true
766,211,931
https://api.github.com/repos/huggingface/datasets/issues/1563
https://github.com/huggingface/datasets/pull/1563
1,563
adding tmu-gfm-dataset
closed
2
2020-12-14T09:45:30
2020-12-21T10:21:04
2020-12-21T10:07:13
forest1988
[]
Adding TMU-GFM-Dataset for Grammatical Error Correction: https://github.com/tmu-nlp/TMU-GFM-Dataset

A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs. More detail about the creation of the dataset can be found in [Yoshimura et al. (2020)](https://www.aclweb.org/anthology/2020.coling-main.573.pdf).
true
765,981,749
https://api.github.com/repos/huggingface/datasets/issues/1562
https://github.com/huggingface/datasets/pull/1562
1,562
Add dataset COrpus of Urdu News TExt Reuse (COUNTER).
closed
3
2020-12-14T06:32:48
2020-12-21T13:14:46
2020-12-21T13:14:46
arkhalid
[]
true