| Column | Type | Range / values |
|---|---|---|
| url | stringlengths | 58-61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72-75 |
| comments_url | stringlengths | 67-70 |
| events_url | stringlengths | 65-68 |
| html_url | stringlengths | 48-51 |
| id | int64 | 600M-1.08B |
| node_id | stringlengths | 18-24 |
| number | int64 | 2-3.45k |
| title | stringlengths | 1-276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B-1,640B |
| updated_at | int64 | 1,588B-1,640B |
| closed_at | int64 | 1,588B-1,640B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0-228k |
| reactions | dict | |
| timeline_url | stringlengths | 67-70 |
| performed_via_github_app | null | |
| draft | null | |
| pull_request | null | |
| is_pull_request | bool | 1 class |
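The records below follow this schema: one GitHub issue per row, with `comments` stored as a sequence of strings and the `created_at`/`updated_at`/`closed_at` columns stored as integer Unix timestamps in milliseconds. As a minimal sketch of how rows with this schema could be loaded and filtered with the `datasets` library (the repository id `"example/github-issues"` is a hypothetical placeholder, not a confirmed dataset name):

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the actual location of this dump.
issues = load_dataset("example/github-issues", split="train")

# Keep only closed issues that are not pull requests, using the
# `state` and `is_pull_request` columns from the schema above.
closed_issues = issues.filter(
    lambda row: row["state"] == "closed" and not row["is_pull_request"]
)

# Print a few records: issue number, title, and number of comments.
for row in closed_issues.select(range(3)):
    print(row["number"], row["title"], len(row["comments"]))
```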
https://api.github.com/repos/huggingface/datasets/issues/1643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1643/comments
https://api.github.com/repos/huggingface/datasets/issues/1643/events
https://github.com/huggingface/datasets/issues/1643
775,280,046
MDU6SXNzdWU3NzUyODAwNDY=
1,643
Dataset social_bias_frames 404
{ "login": "atemate", "id": 7501517, "node_id": "MDQ6VXNlcjc1MDE1MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7501517?v=4", "gravatar_id": "", "url": "https://api.github.com/users/atemate", "html_url": "https://github.com/atemate", "followers_url": "https://api.github.com/users/atemate/followers", "following_url": "https://api.github.com/users/atemate/following{/other_user}", "gists_url": "https://api.github.com/users/atemate/gists{/gist_id}", "starred_url": "https://api.github.com/users/atemate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atemate/subscriptions", "organizations_url": "https://api.github.com/users/atemate/orgs", "repos_url": "https://api.github.com/users/atemate/repos", "events_url": "https://api.github.com/users/atemate/events{/privacy}", "received_events_url": "https://api.github.com/users/atemate/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I see, master is already fixed in https://github.com/huggingface/datasets/commit/9e058f098a0919efd03a136b9b9c3dec5076f626" ]
1,609,144,534,000
1,609,144,687,000
1,609,144,687,000
NONE
null
``` >>> from datasets import load_dataset >>> dataset = load_dataset("social_bias_frames") ... Downloading and preparing dataset social_bias_frames/default ... ~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 484 ) 485 elif response is not None and response.status_code == 404: --> 486 raise FileNotFoundError("Couldn't find file at {}".format(url)) 487 raise ConnectionError("Couldn't reach {}".format(url)) 488 FileNotFoundError: Couldn't find file at https://homes.cs.washington.edu/~msap/social-bias-frames/SocialBiasFrames_v2.tgz ``` [Here](https://homes.cs.washington.edu/~msap/social-bias-frames/) we find button `Download data` with the correct URL for the data: https://homes.cs.washington.edu/~msap/social-bias-frames/SBIC.v2.tgz
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1643/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1641/comments
https://api.github.com/repos/huggingface/datasets/issues/1641/events
https://github.com/huggingface/datasets/issues/1641
775,110,872
MDU6SXNzdWU3NzUxMTA4NzI=
1,641
muchocine dataset cannot be downloaded
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "I have encountered the same error with `v1.0.1` and `v1.0.2` on both Windows and Linux environments. However, cloning the repo and using the path to the dataset's root directory worked for me. Even after having the dataset cached - passing the path is the only way (for now) to load the dataset.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"squad\") # Works\r\ndataset = load_dataset(\"code_search_net\", \"python\") # Error\r\ndataset = load_dataset(\"covid_qa_deepset\") # Error\r\n\r\npath = \"/huggingface/datasets/datasets/{}/\"\r\ndataset = load_dataset(path.format(\"code_search_net\"), \"python\") # Works\r\ndataset = load_dataset(path.format(\"covid_qa_deepset\")) # Works\r\n```\r\n\r\n", "Hi @mrm8488 and @amoux!\r\n The datasets you are trying to load have been added to the library during the community sprint for v2 last month. They will be available with the v2 release!\r\nFor now, there are still a couple of solutions to load the datasets:\r\n1. As suggested by @amoux, you can clone the git repo and pass the local path to the script\r\n2. You can also install the latest (master) version of `datasets` using pip: `pip install git+https://github.com/huggingface/datasets.git@master`", "If you don't want to clone entire `datasets` repo, just download the `muchocine` directory and pass the local path to the directory. Cheers!", "Muchocine was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `muchocine` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"muchocine\", split=\"train\")\r\n```", "Thanks @lhoestq " ]
1,609,104,388,000
1,627,967,249,000
1,627,967,249,000
NONE
null
```python --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 7 frames FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/muchocine/muchocine.py During handling of the above exception, another exception occurred: FileNotFoundError Traceback (most recent call last) FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/muchocine/muchocine.py During handling of the above exception, another exception occurred: FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 281 raise FileNotFoundError( 282 "Couldn't find file locally at {}, or remotely at {} or {}".format( --> 283 combined_path, github_file_path, file_path 284 ) 285 ) FileNotFoundError: Couldn't find file locally at muchocine/muchocine.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/muchocine/muchocine.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/muchocine/muchocine.py ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1641/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1639/comments
https://api.github.com/repos/huggingface/datasets/issues/1639/events
https://github.com/huggingface/datasets/issues/1639
774,903,472
MDU6SXNzdWU3NzQ5MDM0NzI=
1,639
bug with sst2 in glue
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Maybe you can use nltk's treebank detokenizer ?\r\n```python\r\nfrom nltk.tokenize.treebank import TreebankWordDetokenizer\r\n\r\nTreebankWordDetokenizer().detokenize(\"it 's a charming and often affecting journey . \".split())\r\n# \"it's a charming and often affecting journey.\"\r\n```", "I am looking for alternative file URL here instead of adding extra processing code: https://github.com/huggingface/datasets/blob/171f2bba9dd8b92006b13cf076a5bf31d67d3e69/datasets/glue/glue.py#L174", "I don't know if there exists a detokenized version somewhere. Even the version on kaggle is tokenized" ]
1,609,001,843,000
1,630,076,603,000
null
NONE
null
Hi I am getting very low accuracy on SST2 I investigate this and observe that for this dataset sentences are tokenized, while this is correct for the other datasets in GLUE, please see below. Is there any alternatives I could get untokenized sentences? I am unfortunately under time pressure to report some results on this dataset. thank you for your help. @lhoestq ``` >>> a = datasets.load_dataset('glue', 'sst2', split="validation", script_version="master") Reusing dataset glue (/julia/datasets/glue/sst2/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4) >>> a[:10] {'idx': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 'label': [1, 0, 1, 1, 0, 1, 0, 0, 1, 0], 'sentence': ["it 's a charming and often affecting journey . ", 'unflinchingly bleak and desperate ', 'allows us to hope that nolan is poised to embark a major career as a commercial yet inventive filmmaker . ', "the acting , costumes , music , cinematography and sound are all astounding given the production 's austere locales . ", "it 's slow -- very , very slow . ", 'although laced with humor and a few fanciful touches , the film is a refreshingly serious look at young women . ', 'a sometimes tedious film . ', "or doing last year 's taxes with your ex-wife . ", "you do n't have to know about music to appreciate the film 's easygoing blend of comedy and romance . ", "in exactly 89 minutes , most of which passed as slowly as if i 'd been sitting naked on an igloo , formula 51 sank from quirky to jerky to utter turkey . "]} ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1639/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1636/comments
https://api.github.com/repos/huggingface/datasets/issues/1636/events
https://github.com/huggingface/datasets/issues/1636
774,574,378
MDU6SXNzdWU3NzQ1NzQzNzg=
1,636
winogrande cannot be downloaded
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I have same issue for other datasets (`myanmar_news` in my case).\r\n\r\nA version of `datasets` runs correctly on my local machine (**without GPU**) which looking for the dataset at \r\n```\r\nhttps://raw.githubusercontent.com/huggingface/datasets/master/datasets/myanmar_news/myanmar_news.py\r\n```\r\n\r\nMeanwhile, other version runs on Colab (**with GPU**) failed to download the dataset. It try to find the dataset at `1.1.3` instead of `master` . If I disable GPU on my Colab, the code can load the dataset without any problem.\r\n\r\nMaybe there is some version missmatch with the GPU and CPU version of code for these datasets?", "It looks like they're two different issues\r\n\r\n----------\r\n\r\nFirst for `myanmar_news`: \r\n\r\nIt must come from the way you installed `datasets`.\r\nIf you install `datasets` from source, then the `myanmar_news` script will be loaded from `master`.\r\nHowever if you install from `pip` it will get it using the version of the lib (here `1.1.3`) and `myanmar_news` is not available in `1.1.3`.\r\n\r\nThe difference between your GPU and CPU executions must be the environment, one seems to have installed `datasets` from source and not the other.\r\n\r\n----------\r\n\r\nThen for `winogrande`:\r\n\r\nThe errors says that the url https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/winogrande/winogrande.py is not reachable.\r\nHowever it works fine on my side.\r\n\r\nDoes your machine have an internet connection ? Are connections to github blocked by some sort of proxy ?\r\nCan you also try again in case github had issues when you tried the first time ?\r\n" ]
1,608,848,902,000
1,609,163,629,000
null
NONE
null
Hi, I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq ``` File "./finetune_trainer.py", line 318, in <module> main() File "./finetune_trainer.py", line 148, in main for task in data_args.tasks] File "./finetune_trainer.py", line 148, in <listcomp> for task in data_args.tasks] File "/workdir/seq2seq/data/tasks.py", line 65, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 466, in load_dataset return datasets.load_dataset('winogrande', 'winogrande_l', split=split) File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/winogrande/winogrande.py yo/0 I1224 14:17:46.419031 31226 main shadow.py:122 > Traceback (most recent call last): File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 260, in <module> main() File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 256, in main cmd=cmd) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1636/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1635/comments
https://api.github.com/repos/huggingface/datasets/issues/1635/events
https://github.com/huggingface/datasets/issues/1635
774,524,492
MDU6SXNzdWU3NzQ1MjQ0OTI=
1,635
Persian Abstractive/Extractive Text Summarization
{ "login": "m3hrdadfi", "id": 2601833, "node_id": "MDQ6VXNlcjI2MDE4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/2601833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/m3hrdadfi", "html_url": "https://github.com/m3hrdadfi", "followers_url": "https://api.github.com/users/m3hrdadfi/followers", "following_url": "https://api.github.com/users/m3hrdadfi/following{/other_user}", "gists_url": "https://api.github.com/users/m3hrdadfi/gists{/gist_id}", "starred_url": "https://api.github.com/users/m3hrdadfi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/m3hrdadfi/subscriptions", "organizations_url": "https://api.github.com/users/m3hrdadfi/orgs", "repos_url": "https://api.github.com/users/m3hrdadfi/repos", "events_url": "https://api.github.com/users/m3hrdadfi/events{/privacy}", "received_events_url": "https://api.github.com/users/m3hrdadfi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,608,832,032,000
1,609,773,064,000
1,609,773,064,000
CONTRIBUTOR
null
Assembling datasets tailored to different tasks and languages is a precious target. This would be great to have this dataset included. ## Adding a Dataset - **Name:** *pn-summary* - **Description:** *A well-structured summarization dataset for the Persian language consists of 93,207 records. It is prepared for Abstractive/Extractive tasks (like cnn_dailymail for English). It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification.* - **Paper:** *https://arxiv.org/abs/2012.11204* - **Data:** *https://github.com/hooshvare/pn-summary/#download* - **Motivation:** *It is the first Persian abstractive/extractive Text summarization dataset (like cnn_dailymail for English)!* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1635/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1634/comments
https://api.github.com/repos/huggingface/datasets/issues/1634/events
https://github.com/huggingface/datasets/issues/1634
774,487,934
MDU6SXNzdWU3NzQ0ODc5MzQ=
1,634
Inspecting datasets per category
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "That's interesting, can you tell me what you think would be useful to access to inspect a dataset?\r\n\r\nYou can filter them in the hub with the search by the way: https://huggingface.co/datasets have you seen it?", "Hi @thomwolf \r\nthank you, I was not aware of this, I was looking into the data viewer linked into readme page. \r\n\r\nThis is exactly what I was looking for, but this does not work currently, please see the attached \r\nI am selecting to see all nli datasets in english and it retrieves none. thanks\r\n\r\n![5tarDHn9CP6ngeM](https://user-images.githubusercontent.com/53898419/103107612-1509aa80-4638-11eb-85b5-0c995a189969.png)\r\n\r\n\r\n\r\n", "I see 4 results for NLI in English but indeed some are not tagged yet and missing (GLUE), we will focus on that in January (cc @yjernite): https://huggingface.co/datasets?filter=task_ids:natural-language-inference,languages:en" ]
1,608,823,594,000
1,610,098,084,000
null
NONE
null
Hi Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1634/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1633/comments
https://api.github.com/repos/huggingface/datasets/issues/1633/events
https://github.com/huggingface/datasets/issues/1633
774,422,603
MDU6SXNzdWU3NzQ0MjI2MDM=
1,633
social_i_qa wrong format of labels
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq, should I raise a PR for this? Just a minor change while reading labels text file", "Sure feel free to open a PR thanks !" ]
1,608,815,514,000
1,609,348,729,000
1,609,348,729,000
NONE
null
Hi, there is extra "\n" in labels of social_i_qa datasets, no big deal, but I was wondering if you could remove it to make it consistent. so label is 'label': '1\n', not '1' thanks ``` >>> import datasets >>> from datasets import load_dataset >>> dataset = load_dataset( ... 'social_i_qa') cahce dir /julia/cache/datasets Downloading: 4.72kB [00:00, 3.52MB/s] cahce dir /julia/cache/datasets Downloading: 2.19kB [00:00, 1.81MB/s] Using custom data configuration default Reusing dataset social_i_qa (/julia/datasets/social_i_qa/default/0.1.0/4a4190cc2d2482d43416c2167c0c5dccdd769d4482e84893614bd069e5c3ba06) >>> dataset['train'][0] {'answerA': 'like attending', 'answerB': 'like staying home', 'answerC': 'a good friend to have', 'context': 'Cameron decided to have a barbecue and gathered her friends together.', 'label': '1\n', 'question': 'How would Others feel as a result?'} ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1633/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1632/comments
https://api.github.com/repos/huggingface/datasets/issues/1632/events
https://github.com/huggingface/datasets/issues/1632
774,388,625
MDU6SXNzdWU3NzQzODg2MjU=
1,632
SICK dataset
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,608,813,614,000
1,612,540,165,000
1,612,540,165,000
CONTRIBUTOR
null
Hi, this would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you. ## Adding a Dataset - **Name:** SICK - **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of the lexical, syntactic, and semantic phenomena. - **Paper:** https://www.aclweb.org/anthology/L14-1314/ - **Data:** http://marcobaroni.org/composes/sick.html - **Motivation:** This dataset is well-known in the NLP community used for recognizing entailment between sentences. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1632/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1630/comments
https://api.github.com/repos/huggingface/datasets/issues/1630/events
https://github.com/huggingface/datasets/issues/1630
774,332,129
MDU6SXNzdWU3NzQzMzIxMjk=
1,630
Adding UKP Argument Aspect Similarity Corpus
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Adding a link to the guide on adding a dataset if someone want to give it a try: https://github.com/huggingface/datasets#add-a-new-dataset-to-the-hub\r\n\r\nwe should add this guide to the issue template @lhoestq ", "thanks @thomwolf , this is added now. The template is correct, sorry my mistake not to include it. " ]
1,608,807,691,000
1,608,809,418,000
null
CONTRIBUTOR
null
Hi, this would be great to have this dataset included. ## Adding a Dataset - **Name:** UKP Argument Aspect Similarity Corpus - **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sentence pair was annotated via crowdsourcing as either “high similarity”, “some similarity”, “no similarity” or “not related” with respect to the topic. - **Paper:** https://www.aclweb.org/anthology/P19-1054/ - **Data:** https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998 - **Motivation:** this is one of the datasets currently used frequently in recent adapter papers like https://arxiv.org/pdf/2005.00247.pdf Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Thank you
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1630/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1627/comments
https://api.github.com/repos/huggingface/datasets/issues/1627/events
https://github.com/huggingface/datasets/issues/1627
773,960,255
MDU6SXNzdWU3NzM5NjAyNTU=
1,627
`Dataset.map` disable progress bar
{ "login": "Nickil21", "id": 8767964, "node_id": "MDQ6VXNlcjg3Njc5NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8767964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nickil21", "html_url": "https://github.com/Nickil21", "followers_url": "https://api.github.com/users/Nickil21/followers", "following_url": "https://api.github.com/users/Nickil21/following{/other_user}", "gists_url": "https://api.github.com/users/Nickil21/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nickil21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nickil21/subscriptions", "organizations_url": "https://api.github.com/users/Nickil21/orgs", "repos_url": "https://api.github.com/users/Nickil21/repos", "events_url": "https://api.github.com/users/Nickil21/events{/privacy}", "received_events_url": "https://api.github.com/users/Nickil21/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Progress bar can be disabled like this:\r\n```python\r\nfrom datasets.utils.logging import set_verbosity_error\r\nset_verbosity_error()\r\n```\r\n\r\nThere is this line in `Dataset.map`:\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nSo any logging level higher than `WARNING` turns off the progress bar." ]
1,608,746,022,000
1,609,012,656,000
1,609,012,637,000
NONE
null
I can't find anything to turn off the `tqdm` progress bars while running a preprocessing function using `Dataset.map`. I want to do akin to `disable_tqdm=True` in the case of `transformers`. Is there something like that?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1627/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1627/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1624/comments
https://api.github.com/repos/huggingface/datasets/issues/1624/events
https://github.com/huggingface/datasets/issues/1624
773,669,700
MDU6SXNzdWU3NzM2Njk3MDA=
1,624
Cannot download ade_corpus_v2
{ "login": "him1411", "id": 20259310, "node_id": "MDQ6VXNlcjIwMjU5MzEw", "avatar_url": "https://avatars.githubusercontent.com/u/20259310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/him1411", "html_url": "https://github.com/him1411", "followers_url": "https://api.github.com/users/him1411/followers", "following_url": "https://api.github.com/users/him1411/following{/other_user}", "gists_url": "https://api.github.com/users/him1411/gists{/gist_id}", "starred_url": "https://api.github.com/users/him1411/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/him1411/subscriptions", "organizations_url": "https://api.github.com/users/him1411/orgs", "repos_url": "https://api.github.com/users/him1411/repos", "events_url": "https://api.github.com/users/him1411/events{/privacy}", "received_events_url": "https://api.github.com/users/him1411/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @him1411, the dataset you are trying to load has been added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`", "`ade_corpus_v2` was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `ade_corpus_v2` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"ade_corpus_v2\", \"Ade_corpos_v2_drug_ade_relation\")\r\n```\r\n\r\n(looks like there is a typo in the configuration name, we'll fix it for the v2.0 release of `datasets` soon)" ]
1,608,721,094,000
1,627,967,334,000
1,627,967,334,000
NONE
null
I tried this to get the dataset following this url : https://huggingface.co/datasets/ade_corpus_v2 but received this error : `Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module combined_path, github_file_path, file_path FileNotFoundError: Couldn't find file locally at ade_corpus_v2/ade_corpus_v2.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1624/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1624/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1622/comments
https://api.github.com/repos/huggingface/datasets/issues/1622/events
https://github.com/huggingface/datasets/issues/1622
772,940,768
MDU6SXNzdWU3NzI5NDA3Njg=
1,622
Can't call shape on the output of select()
{ "login": "noaonoszko", "id": 47183162, "node_id": "MDQ6VXNlcjQ3MTgzMTYy", "avatar_url": "https://avatars.githubusercontent.com/u/47183162?v=4", "gravatar_id": "", "url": "https://api.github.com/users/noaonoszko", "html_url": "https://github.com/noaonoszko", "followers_url": "https://api.github.com/users/noaonoszko/followers", "following_url": "https://api.github.com/users/noaonoszko/following{/other_user}", "gists_url": "https://api.github.com/users/noaonoszko/gists{/gist_id}", "starred_url": "https://api.github.com/users/noaonoszko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/noaonoszko/subscriptions", "organizations_url": "https://api.github.com/users/noaonoszko/orgs", "repos_url": "https://api.github.com/users/noaonoszko/repos", "events_url": "https://api.github.com/users/noaonoszko/events{/privacy}", "received_events_url": "https://api.github.com/users/noaonoszko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Indeed that's a typo, do you want to open a PR to fix it?", "Yes, created a PR" ]
1,608,643,120,000
1,608,730,633,000
1,608,730,632,000
CONTRIBUTOR
null
I get the error `TypeError: tuple expected at most 1 argument, got 2` when calling `shape` on the output of `select()`. It's line 531 in shape in arrow_dataset.py that causes the problem: ``return tuple(self._indices.num_rows, self._data.num_columns)`` This makes sense, since `tuple(num1, num2)` is not a valid call. Full code to reproduce: ```python dataset = load_dataset("cnn_dailymail", "3.0.0") train_set = dataset["train"] t = train_set.select(range(10)) print(t.shape) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1622/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1618/comments
https://api.github.com/repos/huggingface/datasets/issues/1618/events
https://github.com/huggingface/datasets/issues/1618
772,248,730
MDU6SXNzdWU3NzIyNDg3MzA=
1,618
Can't filter language:EN on https://huggingface.co/datasets
{ "login": "davidefiocco", "id": 4547987, "node_id": "MDQ6VXNlcjQ1NDc5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidefiocco", "html_url": "https://github.com/davidefiocco", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions", "organizations_url": "https://api.github.com/users/davidefiocco/orgs", "repos_url": "https://api.github.com/users/davidefiocco/repos", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "received_events_url": "https://api.github.com/users/davidefiocco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "cc'ing @mapmeld ", "Full language list is now deployed to https://huggingface.co/datasets ! Recommend close", "Cool @mapmeld ! My 2 cents (for a next iteration), it would be cool to have a small search widget in the filter dropdown as you have a ton of languages now here! Closing this in the meantime." ]
1,608,564,203,000
1,608,657,420,000
1,608,657,369,000
NONE
null
When visiting https://huggingface.co/datasets, I don't see an obvious way to filter only English datasets. This is unexpected for me, am I missing something? I'd expect English to be selectable in the language widget. This problem reproduced on Mozilla Firefox and MS Edge: ![screenshot](https://user-images.githubusercontent.com/4547987/102792244-892e1f00-43a8-11eb-9e89-4826ca201a87.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1618/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1615/comments
https://api.github.com/repos/huggingface/datasets/issues/1615/events
https://github.com/huggingface/datasets/issues/1615
771,641,088
MDU6SXNzdWU3NzE2NDEwODg=
1,615
Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
{ "login": "SapirWeissbuch", "id": 44585792, "node_id": "MDQ6VXNlcjQ0NTg1Nzky", "avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SapirWeissbuch", "html_url": "https://github.com/SapirWeissbuch", "followers_url": "https://api.github.com/users/SapirWeissbuch/followers", "following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}", "gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}", "starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions", "organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs", "repos_url": "https://api.github.com/users/SapirWeissbuch/repos", "events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}", "received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @SapirWeissbuch,\r\nWhen you are saying it freezes, at that time it is unzipping the file from the zip file it downloaded. Since it's a very heavy file it'll take some time. It was taking ~11GB after unzipping when it started reading examples for me. Hope that helps!\r\n![Screenshot 2020-12-21 at 23 40 52](https://user-images.githubusercontent.com/19718818/102808355-3b380c00-43e6-11eb-81ab-c31019ae6322.png)\r\n", "Hi @bhavitvyamalik \r\nThanks for the reply!\r\nActually I let it run for 30 minutes before I killed the process. In this time, 30GB were extracted (much more than 11GB), I checked the size of the destination directory.\r\n\r\nWhat version of Datasets are you using?\r\n", "I'm using datasets version: 1.1.3. I think you should drop `cache_dir` and use only\r\n`dataset = datasets.load_dataset(\"trivia_qa\", \"rc\")`\r\n\r\nTried that on colab and it's working there too\r\n![image](https://user-images.githubusercontent.com/19718818/102814269-4db74300-43f0-11eb-8f26-ecfcf4632002.png)\r\n", "Train, Validation, and Test splits contain 138384, 18669, and 17210 samples respectively. It takes some time to read the samples. Even in your colab notebook it was reading the samples before you killed the process. Let me know if it works now!", "Hi, it works on colab but it still doesn't work on my computer, same problem as before - overly large and long extraction process.\r\nI have to use a custom 'cache_dir' because I don't have any space left in my home directory where it is defaulted, maybe this could be the issue?", "I tried running this again - More details of the problem:\r\nCode:\r\n```\r\ndatasets.load_dataset(\"trivia_qa\", \"rc\", cache_dir=\"/path/to/cache\")\r\n```\r\n\r\nThe output:\r\n```\r\nDownloading and preparing dataset trivia_qa/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to path/to/cache/trivia_qa/rc/1.1.0/e734e28133f4d9a353af322aa52b9f266f6f27cbf2f072690a1694e577546b0d... \r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.67G/2.67G [03:38<00:00, 12.2MB/s]\r\n\r\n```\r\nThe process continues (no progress bar is visible).\r\nI tried `du -sh .` in `path/to/cache`, and the size keeps increasing, reached 35G before I killed the process.\r\n\r\nGoogle Colab with custom `cache_dir` has same issue.\r\nhttps://colab.research.google.com/drive/1nn1Lw02GhfGFylzbS2j6yksGjPo7kkN-?usp=sharing#scrollTo=2G2O0AeNIXan", "1) You can clear the huggingface folder in your `.cache` directory to use default directory for datasets. Speed of extraction and loading of samples depends a lot on your machine's configurations too.\r\n\r\n2) I tried on colab `dataset = datasets.load_dataset(\"trivia_qa\", \"rc\", cache_dir = \"./datasets\")`. After memory usage reached around 42GB (starting from 32GB used already), the dataset was loaded in the memory. Even Your colab notebook shows \r\n![image](https://user-images.githubusercontent.com/19718818/102852229-c7c4e780-4443-11eb-91d6-bf21024358a3.png)\r\nwhich means it's loaded now.", "Facing the same issue.\r\nI am able to download datasets without `cache_dir`, however, when I specify the `cache_dir`, the process hangs indefinitely after partial download. \r\nTried for `data = load_dataset(\"cnn_dailymail\", \"3.0.0\")`", "Hi @ashutoshml,\r\nI tried this and it worked for me:\r\n`data = load_dataset(\"cnn_dailymail\", \"3.0.0\", cache_dir=\"./dummy\")`\r\n\r\nI'm using datasets==1.8.0. 
It took around 3-4 mins for dataset to unpack and start loading examples.", "Ok. I waited for 20-30 mins, and it still is stuck.\r\nI am using datasets==1.8.0.\r\n\r\nIs there anyway to check what is happening? like a` --verbose` flag?\r\n\r\n![Screenshot 2021-06-25 at 6 37 43 PM](https://user-images.githubusercontent.com/2375919/123429653-cdfb7280-d5e4-11eb-9fa7-ff295800cc86.png)\r\n" ]
1,608,485,258,000
1,624,626,693,000
null
NONE
null
Hello, I'm having issue downloading TriviaQA dataset with `load_dataset`. ## Environment info - `datasets` version: 1.1.3 - Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1 - Python version: 3.7.3 ## The code I'm running: ```python import datasets dataset = datasets.load_dataset("trivia_qa", "rc", cache_dir = "./datasets") ``` ## The output: 1. Download begins: ``` Downloading and preparing dataset trivia_qa/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to /cs/labs/gabis/sapirweissbuch/tr ivia_qa/rc/1.1.0/e734e28133f4d9a353af322aa52b9f266f6f27cbf2f072690a1694e577546b0d... Downloading: 17%|███████████████████▉ | 446M/2.67G [00:37<04:45, 7.77MB/s] ``` 2. 100% is reached 3. It got stuck here for about an hour, and added additional 30G of data to "./datasets" directory. I killed the process eventually. A similar issue can be observed in Google Colab: https://colab.research.google.com/drive/1nn1Lw02GhfGFylzbS2j6yksGjPo7kkN-?usp=sharing ## Expected behaviour: The dataset "TriviaQA" should be successfully downloaded.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1615/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1611/comments
https://api.github.com/repos/huggingface/datasets/issues/1611/events
https://github.com/huggingface/datasets/issues/1611
771,486,456
MDU6SXNzdWU3NzE0ODY0NTY=
1,611
shuffle with torch generator
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Is there a way one can convert the two generator? not sure overall what alternatives I could have to shuffle the datasets with a torch generator, thanks ", "@lhoestq let me please expalin in more details, maybe you could help me suggesting an alternative to solve the issue for now, I have multiple large datasets using huggingface library, then I need to define a distributed sampler on top of it, for this I need to shard the datasets and give each shard to each core, but before sharding I need to shuffle the dataset, if you are familiar with distributed sampler in pytorch, this needs to be done based on seed+epoch generator to make it consistent across the cores they do it through defining a torch generator, I was wondering if you could tell me how I can shuffle the data for now, I am unfortunately blocked by this and have a limited time left, and I greatly appreciate your help on this. thanks ", "@lhoestq Is there a way I could shuffle the datasets from this library with a custom defined shuffle function? thanks for your help on this. ", "Right now the shuffle method only accepts the `seed` (optional int) or `generator` (optional `np.random.Generator`) parameters.\r\n\r\nHere is a suggestion to shuffle the data using your own shuffle method using `select`.\r\n`select` can be used to re-order the dataset samples or simply pick a few ones if you want.\r\nIt's what is used under the hood when you call `dataset.shuffle`.\r\n\r\nTo use `select` you must have the list of re-ordered indices of your samples.\r\n\r\nLet's say you have a `shuffle` methods that you want to use. Then you can first build your shuffled list of indices:\r\n```python\r\nshuffled_indices = shuffle(range(len(dataset)))\r\n```\r\n\r\nThen you can shuffle your dataset using the shuffled indices with \r\n```python\r\nshuffled_dataset = dataset.select(shuffled_indices)\r\n```\r\n\r\nHope that helps", "thank you @lhoestq thank you very much for responding to my question, this greatly helped me and remove the blocking for continuing my work, thanks. ", "@lhoestq could you confirm the method proposed does not bring the whole data into memory? thanks ", "Yes the dataset is not loaded into memory", "great. thanks a lot." ]
1,608,425,834,000
1,608,574,339,000
null
NONE
null
Hi I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator is not supported with datasets, I really need to make shuffle work with this generator and I was wondering what I can do about this issue, thanks for your help @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1611/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1610/comments
https://api.github.com/repos/huggingface/datasets/issues/1610/events
https://github.com/huggingface/datasets/issues/1610
771,453,599
MDU6SXNzdWU3NzE0NTM1OTk=
1,610
shuffle does not accept seed
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi, did you check the doc on `shuffle`?\r\nhttps://huggingface.co/docs/datasets/package_reference/main_classes.html?datasets.Dataset.shuffle#datasets.Dataset.shuffle", "Hi Thomas\r\nthanks for reponse, yes, I did checked it, but this does not work for me please see \r\n\r\n```\r\n(internship) rkarimi@italix17:/idiap/user/rkarimi/dev$ python \r\nPython 3.7.9 (default, Aug 31 2020, 12:42:55) \r\n[GCC 7.3.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import datasets \r\n2020-12-20 01:48:50.766004: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2020-12-20 01:48:50.766029: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n>>> data = datasets.load_dataset(\"scitail\", \"snli_format\")\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nReusing dataset scitail (/idiap/temp/rkarimi/cache_home_1/datasets/scitail/snli_format/1.1.0/fd8ccdfc3134ce86eb4ef10ba7f21ee2a125c946e26bb1dd3625fe74f48d3b90)\r\n>>> data.shuffle(seed=2)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nTypeError: shuffle() got an unexpected keyword argument 'seed'\r\n\r\n```\r\n\r\ndatasets version\r\n`datasets 1.1.2 <pip>\r\n`\r\n", "Thanks for reporting ! \r\n\r\nIndeed it looks like an issue with `suffle` on `DatasetDict`. We're going to fix that.\r\nIn the meantime you can shuffle each split (train, validation, test) separately:\r\n```python\r\nshuffled_train_dataset = data[\"train\"].shuffle(seed=42)\r\n```\r\n" ]
1,608,411,579,000
1,609,754,403,000
1,609,754,403,000
CONTRIBUTOR
null
Hi, I need to shuffle the dataset based on epoch+seed so it stays consistent across the cores, but when I pass `seed` to shuffle, it is not accepted. Could you assist me with this? Thanks @lhoestq
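A short sketch of the per-split workaround mentioned in the thread, since `Dataset.shuffle` accepts `seed` even though `DatasetDict.shuffle` did not at the time:

```python
# Sketch of the per-split workaround: shuffle each Dataset inside the
# DatasetDict with an explicit seed so every process sees the same order.
import datasets

data = datasets.load_dataset("scitail", "snli_format")
for split in data:
    data[split] = data[split].shuffle(seed=2)
```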
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1610/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1609/comments
https://api.github.com/repos/huggingface/datasets/issues/1609/events
https://github.com/huggingface/datasets/issues/1609
771,421,881
MDU6SXNzdWU3NzE0MjE4ODE=
1,609
Not able to use 'jigsaw_toxicity_pred' dataset
{ "login": "jassimran", "id": 7424133, "node_id": "MDQ6VXNlcjc0MjQxMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7424133?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jassimran", "html_url": "https://github.com/jassimran", "followers_url": "https://api.github.com/users/jassimran/followers", "following_url": "https://api.github.com/users/jassimran/following{/other_user}", "gists_url": "https://api.github.com/users/jassimran/gists{/gist_id}", "starred_url": "https://api.github.com/users/jassimran/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jassimran/subscriptions", "organizations_url": "https://api.github.com/users/jassimran/orgs", "repos_url": "https://api.github.com/users/jassimran/repos", "events_url": "https://api.github.com/users/jassimran/events{/privacy}", "received_events_url": "https://api.github.com/users/jassimran/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @jassimran,\r\nThe `jigsaw_toxicity_pred` dataset has not been released yet, it will be available with version 2 of `datasets`, coming soon.\r\nYou can still access it by installing the master (unreleased) version of datasets directly :\r\n`pip install git+https://github.com/huggingface/datasets.git@master`\r\nPlease let me know if this helps", "Thanks.That works for now." ]
1,608,399,348,000
1,608,655,344,000
1,608,655,343,000
NONE
null
When trying to use jigsaw_toxicity_pred dataset, like this in a [colab](https://colab.research.google.com/drive/1LwO2A5M2X5dvhkAFYE4D2CUT3WUdWnkn?usp=sharing): ``` from datasets import list_datasets, list_metrics, load_dataset, load_metric ds = load_dataset("jigsaw_toxicity_pred") ``` I see below error: > FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py During handling of the above exception, another exception occurred: FileNotFoundError Traceback (most recent call last) FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py During handling of the above exception, another exception occurred: FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 280 raise FileNotFoundError( 281 "Couldn't find file locally at {}, or remotely at {} or {}".format( --> 282 combined_path, github_file_path, file_path 283 ) 284 ) FileNotFoundError: Couldn't find file locally at jigsaw_toxicity_pred/jigsaw_toxicity_pred.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1609/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1605/comments
https://api.github.com/repos/huggingface/datasets/issues/1605/events
https://github.com/huggingface/datasets/issues/1605
770,979,620
MDU6SXNzdWU3NzA5Nzk2MjA=
1,605
Navigation version breaking
{ "login": "mttk", "id": 3007947, "node_id": "MDQ6VXNlcjMwMDc5NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3007947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mttk", "html_url": "https://github.com/mttk", "followers_url": "https://api.github.com/users/mttk/followers", "following_url": "https://api.github.com/users/mttk/following{/other_user}", "gists_url": "https://api.github.com/users/mttk/gists{/gist_id}", "starred_url": "https://api.github.com/users/mttk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mttk/subscriptions", "organizations_url": "https://api.github.com/users/mttk/orgs", "repos_url": "https://api.github.com/users/mttk/repos", "events_url": "https://api.github.com/users/mttk/events{/privacy}", "received_events_url": "https://api.github.com/users/mttk/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,608,305,784,000
1,608,306,112,000
null
NONE
null
Hi, when navigating docs (Chrome, Ubuntu) (e.g. on this page: https://huggingface.co/docs/datasets/loading_metrics.html#using-a-custom-metric-script) the version control dropdown has the wrong string displayed as the current version: ![image](https://user-images.githubusercontent.com/3007947/102632187-02cad080-414f-11eb-813b-28f3c8d80def.png) **Edit:** this actually happens _only_ if you open a link to a concrete subsection. IMO, the best way to fix this without getting too deep into the intricacies of retrieving version numbers from the URL would be to change [this](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L112) line to: ``` let label = (version in versionMapping) ? version : stableVersion ``` which delegates the check to the (already maintained) keys of the version mapping dictionary & should be more robust. There's a similar ternary expression [here](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L97) which should also fail in this case. I'd also suggest swapping this [block](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L80-L90) to `string.contains(version) for version in versionMapping` which might be more robust. I'd add a PR myself but I'm by no means competent in JS :) I also have a side question wrt. docs versioning: I'm trying to make docs for a project which are versioned alike to your dropdown versioning. I was wondering how do you handle storage of multiple doc versions on your server? Do you update what `https://huggingface.co/docs/datasets` points to for every stable release & manually create new folders for each released version? So far I'm building & publishing (scping) the docs to the server with a github action which works well for a single version, but would ideally need to reorder the public files triggered on a new release.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1605/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1604/comments
https://api.github.com/repos/huggingface/datasets/issues/1604/events
https://github.com/huggingface/datasets/issues/1604
770,862,112
MDU6SXNzdWU3NzA4NjIxMTI=
1,604
Add tests for the download functions ?
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,608,295,765,000
1,608,295,765,000
null
CONTRIBUTOR
null
AFAIK the download functions in `DownloadManager` are not tested yet. It could be good to add some to ensure behavior is as expected.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1604/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1600/comments
https://api.github.com/repos/huggingface/datasets/issues/1600/events
https://github.com/huggingface/datasets/issues/1600
770,582,960
MDU6SXNzdWU3NzA1ODI5NjA=
1,600
AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
{ "login": "david-waterworth", "id": 5028974, "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david-waterworth", "html_url": "https://github.com/david-waterworth", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user}", "gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}", "starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions", "organizations_url": "https://api.github.com/users/david-waterworth/orgs", "repos_url": "https://api.github.com/users/david-waterworth/repos", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "received_events_url": "https://api.github.com/users/david-waterworth/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
[ "Hi @david-waterworth!\r\n\r\nAs indicated in the error message, `load_dataset(\"csv\")` returns a `DatasetDict` object, which is mapping of `str` to `Dataset` objects. I believe in this case the behavior is to return a `train` split with all the data.\r\n`train_test_split` is a method of the `Dataset` object, so you will need to do something like this:\r\n```python\r\ndataset_dict = load_dataset(`'csv', data_files='data.txt')\r\ndataset = dataset_dict['split name, eg train']\r\ndataset.train_test_split(test_size=0.1)\r\n```\r\n\r\nPlease let me know if this helps. 🙂 ", "Thanks, that's working - the same issue also tripped me up with training. \r\n\r\nI also agree https://github.com/huggingface/datasets/issues/767 would be a useful addition. ", "Closing this now", "> ```python\r\n> dataset_dict = load_dataset(`'csv', data_files='data.txt')\r\n> dataset = dataset_dict['split name, eg train']\r\n> dataset.train_test_split(test_size=0.1)\r\n> ```\r\n\r\nI am getting error like\r\nKeyError: 'split name, eg train'\r\nCould you please tell me how to solve this?", "dataset = load_dataset('csv', data_files=['files/datasets/dataset.csv'])\r\ndataset = dataset['train']\r\ndataset = dataset.train_test_split(test_size=0.1)" ]
1,608,269,830,000
1,623,756,346,000
1,608,536,338,000
NONE
null
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong? ``` from datasets import load_dataset dataset = load_dataset('csv', data_files='data.txt') dataset = dataset.train_test_split(test_size=0.1) ``` > AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
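A minimal sketch of the fix discussed in the thread: `load_dataset("csv", ...)` returns a `DatasetDict` with a single `train` split, and `train_test_split` is a method of `Dataset`, so the split has to be selected first.

```python
# Sketch: the CSV loader returns a DatasetDict with a single "train" split;
# train_test_split lives on Dataset, so pick the split before calling it.
from datasets import load_dataset

dataset_dict = load_dataset("csv", data_files="data.txt")  # file name from the report
splits = dataset_dict["train"].train_test_split(test_size=0.1)
print(splits)  # DatasetDict with "train" and "test" keys
```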
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1600/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1594
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1594/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1594/comments
https://api.github.com/repos/huggingface/datasets/issues/1594/events
https://github.com/huggingface/datasets/issues/1594
769,747,767
MDU6SXNzdWU3Njk3NDc3Njc=
1,594
connection error
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This happen quite often when they are too many concurrent requests to github.\r\n\r\ni can understand it’s a bit cumbersome to handle on the user side. Maybe we should try a few times in the lib (eg with timeout) before failing, what do you think @lhoestq ?", "Yes currently there's no retry afaik. We should add retries", "Retries were added in #1603 :) \r\nIt will be available in the next release", "Hi @lhoestq thank you for the modification, I will use`script_version=\"master\"` for now :), to my experience, also setting timeout to a larger number like 3*60 which I normally use helps a lot on this.\r\n" ]
1,608,196,714,000
1,608,850,653,000
null
NONE
null
Hi I am hitting to this error, thanks ``` > Traceback (most recent call last): File "finetune_t5_trainer.py", line 379, in <module> main() File "finetune_t5_trainer.py", line 208, in main if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO File "finetune_t5_trainer.py", line 207, in <dictcomp> for task in data_args.eval_tasks} File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 66, in load_dataset return datasets.load_dataset(self.task.name, split=split, script_version="master") File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/boolq/boolq.py el/0 I1217 01:11:33.898849 354161 main shadow.py:210 Current job status: FINISHED ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1594/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1593
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1593/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1593/comments
https://api.github.com/repos/huggingface/datasets/issues/1593/events
https://github.com/huggingface/datasets/issues/1593
769,611,386
MDU6SXNzdWU3Njk2MTEzODY=
1,593
Access to key in DatasetDict map
{ "login": "ZhaofengWu", "id": 11954789, "node_id": "MDQ6VXNlcjExOTU0Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhaofengWu", "html_url": "https://github.com/ZhaofengWu", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Indeed that would be cool\r\n\r\nAlso FYI right now the easiest way to do this is\r\n```python\r\ndataset_dict[\"train\"] = dataset_dict[\"train\"].map(my_transform_for_the_train_set)\r\ndataset_dict[\"test\"] = dataset_dict[\"test\"].map(my_transform_for_the_test_set)\r\n```" ]
1,608,188,540,000
1,610,534,283,000
null
NONE
null
It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper around `Dataset.map`, so it is easy to implement this functionality directly in the client code. Still, it would be nice if there were a flag, similar to `with_indices`, that lets the callable know the key inside the `DatasetDict`.
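In the meantime, a hedged sketch of the client-side workaround (looping over the splits and closing over the key); `dataset_dict` and the transform below are hypothetical:

```python
# Sketch: pass the split name to the mapped function by looping over the
# DatasetDict; `dataset_dict` and the transform below are hypothetical.
from functools import partial

def tag_with_split(example, split_name):
    example["split"] = split_name
    return example

for split, ds in dataset_dict.items():
    dataset_dict[split] = ds.map(partial(tag_with_split, split_name=split))
```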
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1593/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1592
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1592/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1592/comments
https://api.github.com/repos/huggingface/datasets/issues/1592/events
https://github.com/huggingface/datasets/issues/1592
769,529,421
MDU6SXNzdWU3Njk1Mjk0MjE=
1,592
Using datasets.Metric with Trainer()
{ "login": "YipingNUS", "id": 5652584, "node_id": "MDQ6VXNlcjU2NTI1ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5652584?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YipingNUS", "html_url": "https://github.com/YipingNUS", "followers_url": "https://api.github.com/users/YipingNUS/followers", "following_url": "https://api.github.com/users/YipingNUS/following{/other_user}", "gists_url": "https://api.github.com/users/YipingNUS/gists{/gist_id}", "starred_url": "https://api.github.com/users/YipingNUS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YipingNUS/subscriptions", "organizations_url": "https://api.github.com/users/YipingNUS/orgs", "repos_url": "https://api.github.com/users/YipingNUS/repos", "events_url": "https://api.github.com/users/YipingNUS/events{/privacy}", "received_events_url": "https://api.github.com/users/YipingNUS/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "We are indeed working on the integration with `Trainer` :)" ]
1,608,182,224,000
1,608,205,744,000
null
NONE
null
## Using datasets.Metric with Trainer() Hi team, I was quite surprised that the [Metric documentation](https://huggingface.co/docs/datasets/using_metrics.html) doesn't show how it can be used with `Trainer()`. That would be the most intuitive use case, instead of having to iterate over the batches, add predictions and references to the metric, and then compute the metric manually. Ideally, any pre-built metric could be passed to the `compute_metrics` argument of `Trainer()` and calculated at the interval specified by `TrainingArguments.evaluation_strategy`. Is this option available but just not mentioned in the documentation, or is it not possible at the moment? I notice that in the [Transformer | Training and fine-tuning](https://huggingface.co/transformers/training.html) tutorial, you use custom scripts to calculate accuracy and P/R/F, which are already available as pre-built metrics.
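For reference, a hedged sketch of how a pre-built metric can already be wired into `Trainer` through `compute_metrics`; the model and training arguments are assumed to be defined elsewhere:

```python
# Sketch: plug a pre-built metric into Trainer through compute_metrics.
# The predictions arrive as logits, so they are argmax-ed before scoring.
import numpy as np
from datasets import load_metric

metric = load_metric("accuracy")

def compute_metrics(eval_pred):
    predictions = np.argmax(eval_pred.predictions, axis=-1)
    return metric.compute(predictions=predictions, references=eval_pred.label_ids)

# trainer = Trainer(model=model, args=training_args, ..., compute_metrics=compute_metrics)
```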
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1592/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1592/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1591/comments
https://api.github.com/repos/huggingface/datasets/issues/1591/events
https://github.com/huggingface/datasets/issues/1591
769,383,714
MDU6SXNzdWU3NjkzODM3MTQ=
1,591
IWSLT-17 Link Broken
{ "login": "ZhaofengWu", "id": 11954789, "node_id": "MDQ6VXNlcjExOTU0Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhaofengWu", "html_url": "https://github.com/ZhaofengWu", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Sorry, this is a duplicate of #1287. Not sure why it didn't come up when I searched `iwslt` in the issues list.", "Closing this since its a duplicate" ]
1,608,166,002,000
1,608,278,796,000
1,608,278,728,000
NONE
null
``` FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1591/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1590
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1590/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1590/comments
https://api.github.com/repos/huggingface/datasets/issues/1590/events
https://github.com/huggingface/datasets/issues/1590
769,242,858
MDU6SXNzdWU3NjkyNDI4NTg=
1,590
Add helper to resolve namespace collision
{ "login": "jramapuram", "id": 8204807, "node_id": "MDQ6VXNlcjgyMDQ4MDc=", "avatar_url": "https://avatars.githubusercontent.com/u/8204807?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jramapuram", "html_url": "https://github.com/jramapuram", "followers_url": "https://api.github.com/users/jramapuram/followers", "following_url": "https://api.github.com/users/jramapuram/following{/other_user}", "gists_url": "https://api.github.com/users/jramapuram/gists{/gist_id}", "starred_url": "https://api.github.com/users/jramapuram/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jramapuram/subscriptions", "organizations_url": "https://api.github.com/users/jramapuram/orgs", "repos_url": "https://api.github.com/users/jramapuram/repos", "events_url": "https://api.github.com/users/jramapuram/events{/privacy}", "received_events_url": "https://api.github.com/users/jramapuram/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Do you have an example?", "I was thinking about using something like [importlib](https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly) to over-ride the collision. \r\n\r\n**Reason requested**: I use the [following template](https://github.com/jramapuram/ml_base/) repo where I house all my datasets as a submodule.", "Alternatively huggingface could consider some submodule type structure like:\r\n\r\n`import huggingface.datasets`\r\n`import huggingface.transformers`\r\n\r\n`datasets` is a very common module in ML and should be an end-user decision and not scope all of python ¯\\_(ツ)_/¯ \r\n", "That's a interesting option indeed. We'll think about it.", "It also wasn't initially obvious to me that the samples which contain `import datasets` were in fact importing a huggingface library (in fact all the huggingface imports are very generic - transformers, tokenizers, datasets...)" ]
1,608,149,844,000
1,608,349,238,000
null
NONE
null
Many projects use a module called `datasets`; however, this is incompatible with huggingface datasets. It would be great if there were some helper or similar function to resolve such a common conflict.
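A hedged sketch of one workaround (also raised in the comments): load the local module explicitly with `importlib` so both packages can coexist. The local path below is purely illustrative.

```python
# Sketch: load the local module that is also named "datasets" from an explicit
# path with importlib, so it neither shadows nor collides with the installed
# Hugging Face package. The path is purely illustrative.
import importlib.util

spec = importlib.util.spec_from_file_location(
    "local_datasets", "/path/to/project/datasets/__init__.py"
)
local_datasets = importlib.util.module_from_spec(spec)
spec.loader.exec_module(local_datasets)

import datasets as hf_datasets  # the Hugging Face library keeps its usual name
```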
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1590/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1585/comments
https://api.github.com/repos/huggingface/datasets/issues/1585/events
https://github.com/huggingface/datasets/issues/1585
768,831,171
MDU6SXNzdWU3Njg4MzExNzE=
1,585
FileNotFoundError for `amazon_polarity`
{ "login": "phtephanx", "id": 24647404, "node_id": "MDQ6VXNlcjI0NjQ3NDA0", "avatar_url": "https://avatars.githubusercontent.com/u/24647404?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phtephanx", "html_url": "https://github.com/phtephanx", "followers_url": "https://api.github.com/users/phtephanx/followers", "following_url": "https://api.github.com/users/phtephanx/following{/other_user}", "gists_url": "https://api.github.com/users/phtephanx/gists{/gist_id}", "starred_url": "https://api.github.com/users/phtephanx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phtephanx/subscriptions", "organizations_url": "https://api.github.com/users/phtephanx/orgs", "repos_url": "https://api.github.com/users/phtephanx/repos", "events_url": "https://api.github.com/users/phtephanx/events{/privacy}", "received_events_url": "https://api.github.com/users/phtephanx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @phtephanx , the `amazon_polarity` dataset has not been released yet. It will be available in the coming soon v2of `datasets` :) \r\n\r\nYou can still access it now if you want, but you will need to install datasets via the master branch:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`" ]
1,608,123,065,000
1,608,134,576,000
1,608,134,576,000
NONE
null
Version: `datasets==v1.1.3` ### Reproduction ```python from datasets import load_dataset data = load_dataset("amazon_polarity") ``` crashes with ```bash FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file locally at amazon_polarity/amazon_polarity.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1585/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1581/comments
https://api.github.com/repos/huggingface/datasets/issues/1581/events
https://github.com/huggingface/datasets/issues/1581
768,320,594
MDU6SXNzdWU3NjgzMjA1OTQ=
1,581
Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers'
{ "login": "eduardofv", "id": 702586, "node_id": "MDQ6VXNlcjcwMjU4Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/702586?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eduardofv", "html_url": "https://github.com/eduardofv", "followers_url": "https://api.github.com/users/eduardofv/followers", "following_url": "https://api.github.com/users/eduardofv/following{/other_user}", "gists_url": "https://api.github.com/users/eduardofv/gists{/gist_id}", "starred_url": "https://api.github.com/users/eduardofv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eduardofv/subscriptions", "organizations_url": "https://api.github.com/users/eduardofv/orgs", "repos_url": "https://api.github.com/users/eduardofv/repos", "events_url": "https://api.github.com/users/eduardofv/events{/privacy}", "received_events_url": "https://api.github.com/users/eduardofv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nYou can override the directory in which cache file are stored using for example\r\n```\r\nENV HF_HOME=\"/root/cache/hf_cache_home\"\r\n```\r\n\r\nThis way both `transformers` and `datasets` will use this directory instead of the default `.cache`", "Great, thanks. I didn't see documentation about than ENV variable, looks like an obvious solution. ", "> Thanks for reporting !\r\n> You can override the directory in which cache file are stored using for example\r\n> \r\n> ```\r\n> ENV HF_HOME=\"/root/cache/hf_cache_home\"\r\n> ```\r\n> \r\n> This way both `transformers` and `datasets` will use this directory instead of the default `.cache`\r\n\r\ncan we disable caching directly?", "Hi ! Unfortunately no since we need this directory to load datasets.\r\nWhen you load a dataset, it downloads the raw data files in the cache directory inside <cache_dir>/downloads. Then it builds the dataset and saves it as arrow data inside <cache_dir>/<dataset_name>.\r\n\r\nHowever you can specify the directory of your choice, and it can be a temporary directory if you want to clean everything up at one point.", "I'm closing this to keep issues a bit cleaner" ]
1,608,076,941,000
1,623,944,445,000
1,623,944,445,000
NONE
null
I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permission Error to access `/.cache`: ``` $ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data -v $(pwd):/root -v $(pwd)/models/:/root/models -v $(pwd)/saved_models/:/root/saved_models -e "HOST_HOSTNAME=$(hostname)" hf-error:latest /bin/bash ________ _______________ ___ __/__________________________________ ____/__ /________ __ __ / _ _ \_ __ \_ ___/ __ \_ ___/_ /_ __ /_ __ \_ | /| / / _ / / __/ / / /(__ )/ /_/ / / _ __/ _ / / /_/ /_ |/ |/ / /_/ \___//_/ /_//____/ \____//_/ /_/ /_/ \____/____/|__/ You are running this container as user with ID 1000 and group 1000, which should map to the ID and group for your user on the Docker host. Great! tf-docker /root > python Python 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import transformers 2020-12-15 23:53:21.165827: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 5, in <module> from .trainer_utils import EvaluationStrategy File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_utils.py", line 25, in <module> from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 88, in <module> import datasets # noqa: F401 File "/usr/local/lib/python3.6/dist-packages/datasets/__init__.py", line 26, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 40, in <module> from .arrow_reader import ArrowReader File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 31, in <module> from .utils import cached_path, logging File "/usr/local/lib/python3.6/dist-packages/datasets/utils/__init__.py", line 20, in <module> from .download_manager import DownloadManager, GenerateMode File "/usr/local/lib/python3.6/dist-packages/datasets/utils/download_manager.py", line 25, in <module> from .file_utils import HF_DATASETS_CACHE, cached_path, get_from_cache, hash_url_to_filename File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 118, in <module> os.makedirs(HF_MODULES_CACHE, exist_ok=True) File "/usr/lib/python3.6/os.py", line 210, in makedirs makedirs(head, mode, exist_ok) File "/usr/lib/python3.6/os.py", line 210, in makedirs makedirs(head, mode, exist_ok) File "/usr/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) PermissionError: [Errno 13] Permission denied: '/.cache' ``` I've pinned the problem to `RUN pip install datasets`, and by commenting it you can actually import transformers correctly. Another workaround I've found is creating the directory and giving permissions to it directly on the Dockerfile. 
``` FROM tensorflow/tensorflow:latest-gpu-jupyter WORKDIR /root EXPOSE 80 EXPOSE 8888 EXPOSE 6006 ENV SHELL /bin/bash ENV PATH="/root/.local/bin:${PATH}" ENV CUDA_CACHE_PATH="/root/cache/cuda" ENV CUDA_CACHE_MAXSIZE="4294967296" ENV TFHUB_CACHE_DIR="/root/cache/tfhub" RUN pip install --upgrade pip RUN apt update -y && apt upgrade -y RUN pip install transformers #Installing datasets will throw the error, try commenting and rebuilding RUN pip install datasets #Another workaround is creating the directory and give permissions explicitly #RUN mkdir /.cache #RUN chmod 777 /.cache ```
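A hedged sketch of the cache-directory override suggested in the thread, done from Python before the imports; `ENV HF_HOME=...` in the Dockerfile is equivalent, and the path below is only an example.

```python
# Sketch: point the Hugging Face cache at a writable directory before importing
# the libraries, so nothing tries to create /.cache for a non-root user.
# ENV HF_HOME="/root/cache/hf_cache_home" in the Dockerfile is equivalent.
import os

os.environ["HF_HOME"] = "/root/cache/hf_cache_home"  # example path only

import datasets
import transformers
```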
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1581/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1541/comments
https://api.github.com/repos/huggingface/datasets/issues/1541/events
https://github.com/huggingface/datasets/issues/1541
765,430,586
MDU6SXNzdWU3NjU0MzA1ODY=
1,541
connection issue while downloading data
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "could you tell me how I can avoid download, by pre-downloading the data first, put them in a folder so the code does not try to redownload? could you tell me the path to put the downloaded data, and how to do it? thanks\r\n@lhoestq ", "Does your instance have an internet connection ?\r\n\r\nIf you don't have an internet connection you'll need to have the dataset on the instance disk.\r\nTo do so first download the dataset on another machine using `load_dataset` and then you can save it in a folder using `my_dataset.save_to_disk(\"path/to/folder\")`. Once the folder is copied on your instance you can reload the dataset with `datasets.load_from_disk(\"path/to/folder\")`" ]
1,607,869,620,000
1,608,028,471,000
null
NONE
null
Hi I am running my codes on google cloud, and I am getting this error resulting in the failure of the codes when trying to download the data, could you assist me to solve this? also as a temporary solution, could you tell me how I can increase the number of retries and timeout to at least let the models run for now. thanks ``` Traceback (most recent call last): File "finetune_t5_trainer.py", line 361, in <module> main() File "finetune_t5_trainer.py", line 269, in main add_prefix=False if training_args.train_adapters else True) File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 306, in load_dataset return datasets.load_dataset('glue', 'cola', split=split) File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 263, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 403, in http_head url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 504, in send raise ConnectTimeout(e, request=request) requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')) ```
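A hedged sketch of the pre-download workflow described in the comments (the dataset name and folder path are placeholders): download on a machine with internet access, save to disk, copy the folder to the instance, then reload without network calls.

```python
# Sketch of the offline workflow: fetch the data once where internet is
# available, save the Arrow files, copy the folder, then reload without
# touching the network. Dataset name and path are placeholders.
import datasets

# on a machine with internet access
cola = datasets.load_dataset("glue", "cola")
cola.save_to_disk("path/to/folder")

# on the offline instance, after copying the folder over
cola = datasets.load_from_disk("path/to/folder")
```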
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1541/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1541/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1514
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1514/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1514/comments
https://api.github.com/repos/huggingface/datasets/issues/1514/events
https://github.com/huggingface/datasets/issues/1514
764,017,148
MDU6SXNzdWU3NjQwMTcxNDg=
1,514
how to get all the options of a property in datasets
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
open
false
null
[]
null
[ "In a dataset, labels correspond to the `ClassLabel` feature that has the `names` property that returns string represenation of the integer classes (or `num_classes` to get the number of different classes).", "I think the `features` attribute of the dataset object is what you are looking for:\r\n```\r\n>>> dataset.features\r\n{'sentence1': Value(dtype='string', id=None),\r\n 'sentence2': Value(dtype='string', id=None),\r\n 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None),\r\n 'idx': Value(dtype='int32', id=None)\r\n}\r\n>>> dataset.features[\"label\"].names\r\n['not_equivalent', 'equivalent']\r\n```\r\n\r\nFor reference: https://huggingface.co/docs/datasets/exploring.html" ]
1,607,790,248,000
1,608,278,937,000
null
CONTRIBUTOR
null
Hi, could you tell me how I can get all the unique options of a property of a dataset? For instance, in the case of boolq, if the user wants to know which unique labels it has, is there a way to access the unique labels without fetching all the training-data labels and then forming a set? Thanks
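A hedged sketch using the `features` attribute, as shown in the replies; MRPC is used here because its `ClassLabel` is quoted in the thread, and the same lookup applies to any dataset whose label column is a `ClassLabel`.

```python
# Sketch: the unique label names live in the ClassLabel feature itself, so no
# pass over the training examples is needed.
from datasets import load_dataset

mrpc = load_dataset("glue", "mrpc", split="train")
print(mrpc.features["label"].names)        # ['not_equivalent', 'equivalent']
print(mrpc.features["label"].num_classes)  # 2
```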
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1514/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1478/comments
https://api.github.com/repos/huggingface/datasets/issues/1478/events
https://github.com/huggingface/datasets/issues/1478
762,293,076
MDU6SXNzdWU3NjIyOTMwNzY=
1,478
Inconsistent argument names.
{ "login": "Fraser-Greenlee", "id": 8402500, "node_id": "MDQ6VXNlcjg0MDI1MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8402500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Fraser-Greenlee", "html_url": "https://github.com/Fraser-Greenlee", "followers_url": "https://api.github.com/users/Fraser-Greenlee/followers", "following_url": "https://api.github.com/users/Fraser-Greenlee/following{/other_user}", "gists_url": "https://api.github.com/users/Fraser-Greenlee/gists{/gist_id}", "starred_url": "https://api.github.com/users/Fraser-Greenlee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Fraser-Greenlee/subscriptions", "organizations_url": "https://api.github.com/users/Fraser-Greenlee/orgs", "repos_url": "https://api.github.com/users/Fraser-Greenlee/repos", "events_url": "https://api.github.com/users/Fraser-Greenlee/events{/privacy}", "received_events_url": "https://api.github.com/users/Fraser-Greenlee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Also for the `Accuracy` metric the `accuracy_score` method should have its args in the opposite order so `accuracy_score(predictions, references,,,)`.", "Thanks for pointing this out ! 🕵🏻 \r\nPredictions and references should indeed be swapped in the docstring.\r\nHowever, the call to `accuracy_score` should not be changed, it [signature](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html#sklearn.metrics.accuracy_score) being:\r\n```\r\nsklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None)\r\n```\r\n\r\nFeel free to open a PR if you want to fix this :)" ]
1,607,689,178,000
1,608,390,219,000
1,608,390,219,000
CONTRIBUTOR
null
Just find it a wee bit odd that in the transformers library `predictions` are those made by the model: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_utils.py#L51-L61 While in many datasets metrics they are the ground truth labels: https://github.com/huggingface/datasets/blob/c3f53792a744ede18d748a1133b6597fdd2d8d18/metrics/accuracy/accuracy.py#L31-L40 Do you think predictions & references should be swapped? I'd be willing to do some refactoring here if you agree.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1478/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1452/comments
https://api.github.com/repos/huggingface/datasets/issues/1452/events
https://github.com/huggingface/datasets/issues/1452
761,104,924
MDU6SXNzdWU3NjExMDQ5MjQ=
1,452
SNLI dataset contains labels with value -1
{ "login": "aarnetalman", "id": 11405654, "node_id": "MDQ6VXNlcjExNDA1NjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11405654?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aarnetalman", "html_url": "https://github.com/aarnetalman", "followers_url": "https://api.github.com/users/aarnetalman/followers", "following_url": "https://api.github.com/users/aarnetalman/following{/other_user}", "gists_url": "https://api.github.com/users/aarnetalman/gists{/gist_id}", "starred_url": "https://api.github.com/users/aarnetalman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aarnetalman/subscriptions", "organizations_url": "https://api.github.com/users/aarnetalman/orgs", "repos_url": "https://api.github.com/users/aarnetalman/repos", "events_url": "https://api.github.com/users/aarnetalman/events{/privacy}", "received_events_url": "https://api.github.com/users/aarnetalman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I believe the `-1` label is used for missing/NULL data as per HuggingFace Dataset conventions. If I recall correctly SNLI has some entries with no (gold) labels in the dataset.", "Ah, you're right. The dataset has some pairs with missing labels. Thanks for reminding me." ]
1,607,595,415,000
1,607,622,595,000
1,607,622,595,000
NONE
null
``` import datasets nli_data = datasets.load_dataset("snli") train_data = nli_data['train'] train_labels = train_data['label'] label_set = set(train_labels) print(label_set) ``` **Output:** `{0, 1, 2, -1}`
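A hedged sketch of the usual cleanup once it is clear that `-1` marks the pairs without a gold label: filter them out before training.

```python
# Sketch: -1 marks pairs without a gold label, so they can be filtered out.
import datasets

nli_data = datasets.load_dataset("snli")
train_data = nli_data["train"].filter(lambda example: example["label"] != -1)
print(set(train_data["label"]))  # {0, 1, 2}
```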
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1452/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1444/comments
https://api.github.com/repos/huggingface/datasets/issues/1444/events
https://github.com/huggingface/datasets/issues/1444
761,055,651
MDU6SXNzdWU3NjEwNTU2NTE=
1,444
FileNotFound remotly, can't load a dataset
{ "login": "sadakmed", "id": 18331629, "node_id": "MDQ6VXNlcjE4MzMxNjI5", "avatar_url": "https://avatars.githubusercontent.com/u/18331629?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sadakmed", "html_url": "https://github.com/sadakmed", "followers_url": "https://api.github.com/users/sadakmed/followers", "following_url": "https://api.github.com/users/sadakmed/following{/other_user}", "gists_url": "https://api.github.com/users/sadakmed/gists{/gist_id}", "starred_url": "https://api.github.com/users/sadakmed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sadakmed/subscriptions", "organizations_url": "https://api.github.com/users/sadakmed/orgs", "repos_url": "https://api.github.com/users/sadakmed/repos", "events_url": "https://api.github.com/users/sadakmed/events{/privacy}", "received_events_url": "https://api.github.com/users/sadakmed/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This dataset will be available in version-2 of the library. If you want to use this dataset now, install datasets from `master` branch rather.\r\n\r\nCommand to install datasets from `master` branch:\r\n`!pip install git+https://github.com/huggingface/datasets.git@master`", "Closing this, thanks @VasudevGupta7 " ]
1,607,591,687,000
1,608,054,074,000
1,608,054,074,000
NONE
null
```py !pip install datasets import datasets as ds corpus = ds.load_dataset('large_spanish_corpus') ``` gives the error > FileNotFoundError: Couldn't find file locally at large_spanish_corpus/large_spanish_corpus.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/large_spanish_corpus/large_spanish_corpus.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/large_spanish_corpus/large_spanish_corpus.py This happens not only with `large_spanish_corpus` but also with `zest`, while `squad` is available. It occurred both on Colab and locally.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1444/timeline
null
null
null
false
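As the comments above suggest, the `large_spanish_corpus` script only shipped after the 1.1.3 release, so installing `datasets` from the master branch is the workaround. A minimal sketch of that workaround, assuming network access to GitHub and the dataset hosting:

```python
# Workaround from the thread: install datasets from the master branch first,
#   pip install git+https://github.com/huggingface/datasets.git@master
# then the not-yet-released loading script can be resolved.
import datasets as ds

corpus = ds.load_dataset("large_spanish_corpus")
print(corpus)
```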
https://api.github.com/repos/huggingface/datasets/issues/1422
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1422/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1422/comments
https://api.github.com/repos/huggingface/datasets/issues/1422/events
https://github.com/huggingface/datasets/issues/1422
760,707,113
MDU6SXNzdWU3NjA3MDcxMTM=
1,422
Can't map dataset (loaded from csv)
{ "login": "SolomidHero", "id": 28161779, "node_id": "MDQ6VXNlcjI4MTYxNzc5", "avatar_url": "https://avatars.githubusercontent.com/u/28161779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SolomidHero", "html_url": "https://github.com/SolomidHero", "followers_url": "https://api.github.com/users/SolomidHero/followers", "following_url": "https://api.github.com/users/SolomidHero/following{/other_user}", "gists_url": "https://api.github.com/users/SolomidHero/gists{/gist_id}", "starred_url": "https://api.github.com/users/SolomidHero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SolomidHero/subscriptions", "organizations_url": "https://api.github.com/users/SolomidHero/orgs", "repos_url": "https://api.github.com/users/SolomidHero/repos", "events_url": "https://api.github.com/users/SolomidHero/events{/privacy}", "received_events_url": "https://api.github.com/users/SolomidHero/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Please could you post the whole script? I can't reproduce your issue. After updating the feature names/labels to match with the data, everything works fine for me. Try to update datasets/transformers to the newest version.", "Actually, the problem was how `tokenize` function was defined. This was completely my side mistake, so there are really no needs in this issue anymore" ]
1,607,551,542,000
1,608,228,820,000
1,608,228,820,000
NONE
null
Hello! I am trying to load single csv file with two columns: ('label': str, 'text' str), where is label is str of two possible classes. Below steps are similar with [this notebook](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where bert model and tokenizer are used to classify lmdb loaded dataset. Only one difference it is the dataset loaded from .csv file. Here is how I load it: ```python data_path = 'data.csv' data = pd.read_csv(data_path) # process class name to indices classes = ['neg', 'pos'] class_to_idx = { cl: i for i, cl in enumerate(classes) } # now data is like {'label': int, 'text' str} data['label'] = data['label'].apply(lambda x: class_to_idx[x]) # load dataset and map it with defined `tokenize` function features = Features({ target: ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None), feature: Value(dtype='string', id=None), }) dataset = Dataset.from_pandas(data, features=features) dataset.map(tokenize, batched=True, batch_size=len(dataset)) ``` It ruins on the last line with following error: ``` --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-112-32b6275ce418> in <module>() 9 }) 10 dataset = Dataset.from_pandas(data, features=features) ---> 11 dataset.map(tokenizer, batched=True, batch_size=len(dataset)) 2 frames /usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1237 test_inputs = self[:2] if batched else self[0] 1238 test_indices = [0, 1] if batched else 0 -> 1239 update_data = does_function_return_dict(test_inputs, test_indices) 1240 logger.info("Testing finished, running the mapping function on the dataset") 1241 /usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices) 1208 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] 1209 processed_inputs = ( -> 1210 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1211 ) 1212 does_return_dict = isinstance(processed_inputs, Mapping) /usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2281 ) 2282 ), ( -> 2283 "text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) " 2284 "or `List[List[str]]` (batch of pretokenized examples)." 2285 ) AssertionError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples). ``` which I think is not expected. I also tried the same steps using `Dataset.from_csv` which resulted in the same error. For reproducing this, I used [this dataset from kaggle](https://www.kaggle.com/team-ai/spam-text-message-classification)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1422/timeline
null
null
null
false
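The resolution above was that `tokenize` was defined incorrectly (the raw tokenizer object was handed to `map`, so it received the whole batch dict instead of the text column). A minimal sketch of a mapping function that satisfies the tokenizer's input check; the model name and the tiny in-memory dataset are stand-ins, not taken from the issue.

```python
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Tiny stand-in for the CSV-based dataset built in the issue.
dataset = Dataset.from_dict({"text": ["free entry now", "see you tonight"], "label": [1, 0]})

def tokenize(batch):
    # `map(..., batched=True)` passes a dict of lists; the tokenizer must be
    # given the list of strings, not the whole batch dict, otherwise the
    # "text input must of type `str`..." assertion shown above is raised.
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=32)

dataset = dataset.map(tokenize, batched=True)
print(dataset.column_names)
```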
https://api.github.com/repos/huggingface/datasets/issues/1324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1324/comments
https://api.github.com/repos/huggingface/datasets/issues/1324/events
https://github.com/huggingface/datasets/issues/1324
759,587,864
MDU6SXNzdWU3NTk1ODc4NjQ=
1,324
❓ Sharing ElasticSearch indexed dataset
{ "login": "pietrolesci", "id": 61748653, "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pietrolesci", "html_url": "https://github.com/pietrolesci", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "repos_url": "https://api.github.com/users/pietrolesci/repos", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Hello @pietrolesci , I am not sure to understand what you are trying to do here.\r\n\r\nIf you're looking for ways to save a dataset on disk, you can you the `save_to_disk` method:\r\n```python\r\n>>> import datasets\r\n>>> loaded_dataset = datasets.load(\"dataset_name\")\r\n>>> loaded_dataset.save_to_disk(\"/path/on/your/disk\")\r\n```\r\n\r\nThe saved dataset can later be retrieved using:\r\n```python\r\n>>> loaded_dataset = datasets.Dataset.load_from_disk(\"/path/on/your/disk\")\r\n```\r\n\r\nAlso, I'd recommend posting your question directly in the issue section of the [elasticsearch repo](https://github.com/elastic/elasticsearch)", "Hi @SBrandeis,\n\nThanks a lot for picking up my request. \n\nMaybe I can clarify my use-case with a bit of context. Say I have the IMDb dataset. I create an ES index on it. Now I can save and reload the dataset from disk normally. Once I reload the dataset, it is easy to retrieve the ES index on my machine. I was wondering: is there a way I can share the (now) indexed version of the IMDb dataset with my colleagues without requiring them to re-index it?\n\nThanks a lot in advance for your consideration.\n\nBest,\n\nPietro", "Thanks for the clarification.\r\n\r\nI am not familiar with ElasticSearch, but if I understand well you're trying to migrate your data along with the ES index.\r\nMy advice would be to check out ES documentation, for instance, this might help you: https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html\r\n\r\nLet me know if it helps" ]
1,607,444,758,000
1,608,623,456,000
null
NONE
null
Hi there, First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing. **Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200. I added an elasticsearch index and I was wondering - how can I know where it has been saved? - how can I share the indexed dataset with others? I tried to dig into the docs, but could not find anything about that. Thank you very much for your help. Best, Pietro Edit: apologies for the wrong label
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1324/timeline
null
null
null
false
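A sketch of what sharing looks like in practice, following the `save_to_disk` suggestion in the comments: the Arrow data can be copied to colleagues, but the Elasticsearch index itself lives in the ES cluster, so each machine re-attaches an index against its own running cluster. Paths, host, port, and the indexed column are placeholders.

```python
from datasets import load_dataset, Dataset

imdb = load_dataset("imdb", split="train")
imdb.save_to_disk("/path/on/your/disk")            # share this directory

# On a colleague's machine: reload the data and rebuild the index against
# their local Elasticsearch instance (the index is not stored in the files).
reloaded = Dataset.load_from_disk("/path/on/your/disk")
reloaded.add_elasticsearch_index("text", host="localhost", port="9200")
scores, examples = reloaded.get_nearest_examples("text", "wonderful movie", k=5)
```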
https://api.github.com/repos/huggingface/datasets/issues/1299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1299/comments
https://api.github.com/repos/huggingface/datasets/issues/1299/events
https://github.com/huggingface/datasets/issues/1299
759,414,566
MDU6SXNzdWU3NTk0MTQ1NjY=
1,299
can't load "german_legal_entity_recognition" dataset
{ "login": "nataly-obr", "id": 59837137, "node_id": "MDQ6VXNlcjU5ODM3MTM3", "avatar_url": "https://avatars.githubusercontent.com/u/59837137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nataly-obr", "html_url": "https://github.com/nataly-obr", "followers_url": "https://api.github.com/users/nataly-obr/followers", "following_url": "https://api.github.com/users/nataly-obr/following{/other_user}", "gists_url": "https://api.github.com/users/nataly-obr/gists{/gist_id}", "starred_url": "https://api.github.com/users/nataly-obr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nataly-obr/subscriptions", "organizations_url": "https://api.github.com/users/nataly-obr/orgs", "repos_url": "https://api.github.com/users/nataly-obr/repos", "events_url": "https://api.github.com/users/nataly-obr/events{/privacy}", "received_events_url": "https://api.github.com/users/nataly-obr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Please if you could tell me more about the error? \r\n\r\n1. Please check the directory you've been working on\r\n2. Check for any typos", "> Please if you could tell me more about the error?\r\n> \r\n> 1. Please check the directory you've been working on\r\n> 2. Check for any typos\r\n\r\nError happens during the execution of this line:\r\ndataset = load_dataset(\"german_legal_entity_recognition\")\r\n\r\nAlso, when I try to open mentioned links via Opera I have errors \"404: Not Found\" and \"This XML file does not appear to have any style information associated with it. The document tree is shown below.\" respectively.", "Hello @nataly-obr, the `german_legal_entity_recognition` dataset has not yet been released (it is part of the coming soon v2 release).\r\n\r\nYou can still access it now if you want, but you will need to install `datasets` via the master branch:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`\r\n\r\nPlease let me know if it solves the issue :) " ]
1,607,431,321,000
1,608,134,593,000
1,608,134,593,000
NONE
null
FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1299/timeline
null
null
null
false
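As pointed out in the comment, the `german_legal_entity_recognition` script was not yet part of the 1.1.3 release. A minimal sketch of the two workarounds that appear in these threads: installing `datasets` from master, or pointing `script_version` at master so the not-yet-released loading script is fetched.

```python
import datasets

# Either install from master:
#   pip install git+https://github.com/huggingface/datasets.git@master
# or fetch the loading script from the master branch directly:
ner = datasets.load_dataset("german_legal_entity_recognition", script_version="master")
print(ner)
```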
https://api.github.com/repos/huggingface/datasets/issues/1290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1290/comments
https://api.github.com/repos/huggingface/datasets/issues/1290/events
https://github.com/huggingface/datasets/issues/1290
759,339,989
MDU6SXNzdWU3NTkzMzk5ODk=
1,290
imdb dataset cannot be downloaded
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @rabeehk , I am unable to reproduce your problem locally.\r\nCan you try emptying the cache (removing the content of `/idiap/temp/rkarimi/cache_home_1/datasets`) and retry ?", "Hi,\r\nthanks, I did remove the cache and still the same error here\r\n\r\n```\r\n>>> a = datasets.load_dataset(\"imdb\", split=\"train\")\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nDownloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3...\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads\r\nTraceback (most recent call last): \r\n File \"<stdin>\", line 1, in <module>\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 558, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 73, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=4902716, num_examples=3680, dataset_name='imdb')}]\r\n```\r\n\r\ndatasets version\r\n```\r\ndatasets 1.1.2 <pip>\r\ntensorflow-datasets 4.1.0 <pip>\r\n\r\n```", "resolved with moving to version 1.1.3" ]
1,607,424,456,000
1,608,831,489,000
1,608,831,489,000
CONTRIBUTOR
null
hi please find error below getting imdb train spli: thanks ` datasets.load_dataset>>> datasets.load_dataset("imdb", split="train")` errors ``` cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3... cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=7486451, num_examples=5628, dataset_name='imdb')}] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1290/timeline
null
null
null
false
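The thread above was resolved by moving to datasets 1.1.3 and clearing the cache; the `NonMatchingSplitsSizesError` indicates a partially downloaded copy being verified against the recorded split sizes. A minimal retry sketch that forces a fresh download; `GenerateMode` is assumed to be the download-mode enum shipped with the 1.x releases, so adjust the import if your installed version differs.

```python
import datasets

# Re-download instead of re-using the stale, partially downloaded cache that
# triggers the split-size mismatch shown above.
imdb = datasets.load_dataset(
    "imdb",
    split="train",
    download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD,
)
print(imdb)
```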
https://api.github.com/repos/huggingface/datasets/issues/1287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1287/comments
https://api.github.com/repos/huggingface/datasets/issues/1287/events
https://github.com/huggingface/datasets/issues/1287
759,300,992
MDU6SXNzdWU3NTkzMDA5OTI=
1,287
'iwslt2017-ro-nl' cannot be downloaded
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "the same issue with datasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=split), ..... ", "even with setting master like the following command, still remains \r\n\r\ndatasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=\"train\", script_version=\"master\")\r\n", "Looks like the data has been moved from its original location to google drive\r\n\r\nNew url: https://drive.google.com/u/0/uc?id=12ycYSzLIG253AFN35Y6qoyf9wtkOjakp&export=download" ]
1,607,421,415,000
1,608,056,694,000
null
CONTRIBUTOR
null
Hi I am trying `>>> datasets.load_dataset("iwslt2017", 'iwslt2017-ro-nl', split="train")` getting this error thank you for your help ``` cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets Downloading and preparing dataset iwsl_t217/iwslt2017-ro-nl (download: 314.07 MiB, generated: 39.92 MiB, post-processed: Unknown size, total: 354.00 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/iwsl_t217/iwslt2017-ro-nl/1.0.0/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd... cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/iwslt2017/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd/iwslt2017.py", line 118, in _split_generators dl_dir = dl_manager.download_and_extract(MULTI_URL) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 216, in map_nested return function(data_struct) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1287/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1286/comments
https://api.github.com/repos/huggingface/datasets/issues/1286/events
https://github.com/huggingface/datasets/issues/1286
759,291,509
MDU6SXNzdWU3NTkyOTE1MDk=
1,286
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I remember also getting the same issue for several other translation datasets like all the iwslt2017 group, this is blokcing me and I really need to fix it and I was wondering if you have an idea on this. @lhoestq thanks,. ", "maybe there is an empty line or something inside these datasets? could you tell me why this is happening? thanks ", "I just checked and the wmt16 en-ro doesn't have empty lines\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"wmt16\", \"ro-en\", split=\"train\")\r\nlen(d) # 610320\r\nlen(d.filter(lambda x: len(x[\"translation\"][\"en\"].strip()) > 0)) # 610320\r\nlen(d.filter(lambda x: len(x[\"translation\"][\"ro\"].strip()) > 0)) # 610320\r\n# also tested for split=\"validation\" and \"test\"\r\n```\r\n\r\nCan you open an issue on the `transformers` repo ? also cc @sgugger ", "Hi @lhoestq \r\nI am not really sure which part is causing this, to me this is more related to dataset library as this is happening for some of the datassets below please find the information to reprodcue the bug, this is really blocking me and I appreciate your help\r\n\r\n\r\n## Environment info\r\n- `transformers` version: 3.5.1\r\n- Platform: GPU\r\n- Python version: 3.7 \r\n- PyTorch version (GPU?): 1.0.4\r\n- Tensorflow version (GPU?): - \r\n- Using GPU in script?: - \r\n- Using distributed or parallel set-up in script?: - \r\n\r\n### Who can help\r\n tokenizers: @mfuntowicz\r\n Trainer: @sgugger\r\n TextGeneration: @TevenLeScao \r\n nlp datasets: [different repo](https://github.com/huggingface/nlp)\r\n rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)\r\n examples/seq2seq: @patil-suraj\r\n\r\n## Information\r\nHi\r\nI am testing seq2seq model with T5 on different datasets and this is always getting the following bug, this is really blocking me as this fails for many datasets. could you have a look please? 
thanks \r\n\r\n```\r\n[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nAborted\r\n\r\n```\r\n\r\nTo reproduce the error please run on 1 GPU:\r\n```\r\ngit clone git@github.com:rabeehk/debug-seq2seq.git\r\npython setup.py develop \r\ncd seq2seq \r\npython finetune_t5_trainer.py temp.json\r\n\r\n```\r\n\r\nFull output of the program:\r\n\r\n```\r\n(internship) rkarimi@vgnh008:/idiap/user/rkarimi/dev/debug-seq2seq/seq2seq$ python finetune_t5_trainer.py temp.json \r\n2020-12-12 15:38:16.234542: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2020-12-12 15:38:16.234598: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n12/12/2020 15:38:32 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False\r\n12/12/2020 15:38:32 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='outputs/test', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=64, per_device_eval_batch_size=64, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.01, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=2, max_steps=-1, warmup_steps=500, logging_dir='runs/Dec12_15-38-32_vgnh008', logging_first_step=True, logging_steps=200, save_steps=200, save_total_limit=1, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=200, dataloader_num_workers=0, past_index=-1, run_name='outputs/test', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, label_smoothing=0.1, sortish_sampler=False, predict_with_generate=True, adafactor=False, encoder_layerdrop=None, decoder_layerdrop=None, dropout=None, attention_dropout=None, lr_scheduler='linear', fixed_length_emb=None, encoder_projection=None, encoder_pooling=None, projection_length=None, only_projection_bottleneck=False, concat_projection_token=False, gcs_bucket='ruse-xcloud-bucket', temperature=10, train_adapters=True, do_finetune=True, parametric_task_embedding=False, eval_output_dir='outputs/finetune-adapter/test-n-1-lr-1e-02-e-20')\r\nSome weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 
'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 
'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 
'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 
'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 
'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 
'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 
'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 
'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 
'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 
'decoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 
'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-6810ece2a440c3be.arrow\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on 
/idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-9a2822394a3a4e34.arrow\r\n12/12/2020 15:38:45 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b464cc20> for task boolq\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - ***** Running training *****\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num examples = 10\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num Epochs = 2\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2\r\n{'loss': 529.79443359375, 'learning_rate': 2e-05, 'epoch': 1.0} \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.37it/s]12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer - \r\n\r\nTraining completed. 
Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'epoch': 2.0} \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.43it/s]\r\n12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/test\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-164dd1d57e9fa69a.arrow\r\n12/12/2020 15:38:59 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b40c67a0> for task boolq\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - ***** Running training *****\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num examples = 1\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num Epochs = 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total train batch size (w. 
parallel, distributed & accumulation) = 64\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from checkpoint, will skip to saved global_step\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from epoch 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from global step 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Will skip the first 0 steps in the first epoch\r\n 0%| | 0/2 [00:00<?, ?it/s]12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - \r\n\r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'epoch': 2.0} \r\n 0%| | 0/2 [00:00<?, ?it/s]\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/finetune-adapter/test-n-1-lr-1e-02-e-20/boolq\r\n12/12/2020 15:39:07 - INFO - seq2seq.utils.utils - using task specific params for boolq: {'max_length': 3}\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation *****\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Num examples = 3269\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Batch size = 64\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 52/52 [00:12<00:00, 4.86it/s][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nAborted\r\n```\r\n\r\n\r\n\r\n", "solved see https://github.com/huggingface/transformers/issues/9079?_pjax=%23js-repo-pjax-container ", "Hii please follow me" ]
1,607,420,655,000
1,607,801,782,000
1,607,790,156,000
CONTRIBUTOR
null
Hi I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of huggingface repo. thank for your help {'epoch': 20.0} 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:16<00:00, 1.22it/s] 12/08/2020 10:41:19 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/experiment/joint/finetune/lr-2e-5 12/08/2020 10:41:24 - INFO - __main__ - {'wmt16-en-ro': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1998), 'qnli': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 5462), 'scitail': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1303)} 12/08/2020 10:41:24 - INFO - __main__ - *** Evaluate *** 12/08/2020 10:41:24 - INFO - seq2seq.utils.utils - using task specific params for wmt16-en-ro: {'max_length': 300, 'num_beams': 4} 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation ***** 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Num examples = 1998 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Batch size = 64 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:37<00:00, 1.19s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1286/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1285/comments
https://api.github.com/repos/huggingface/datasets/issues/1285/events
https://github.com/huggingface/datasets/issues/1285
759,278,758
MDU6SXNzdWU3NTkyNzg3NTg=
1,285
boolq does not work
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "here is the minimal code to reproduce\r\n\r\n`datasets>>> datasets.load_dataset(\"boolq\", \"train\")\r\n\r\nthe errors\r\n\r\n```\r\n`cahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nUsing custom data configuration train\r\nDownloading and preparing dataset boolq/train (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /idiap/temp/rkarimi/cache_home_1/datasets/boolq/train/0.1.0/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11...\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \" /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py\", line 74, in _split_generators\r\n downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py\", line 149, in download_custom\r\n custom_download(url, path)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py\", line 516, in copy_v2\r\n compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)\r\n\r\n\r\n\r\n```", "This has been fixed by #881 \r\nthis fix will be available in the next release soon.\r\n\r\nIf you don't want to wait for the release you can actually load the latest version of boolq by specifying `script_version=\"master\"` in `load_dataset`", "thank you this solved this issue, for now seems to work, thanks " ]
1,607,419,727,000
1,607,420,830,000
1,607,420,830,000
CONTRIBUTOR
null
Hi I am getting this error when trying to load boolq, thanks for your help ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock Traceback (most recent call last): File "finetune_t5_trainer.py", line 274, in <module> main() File "finetune_t5_trainer.py", line 147, in main for task in data_args.tasks] File "finetune_t5_trainer.py", line 147, in <listcomp> for task in data_args.tasks] File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 58, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 54, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom custom_download(url, path) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2 compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite) tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1285/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1167/comments
https://api.github.com/repos/huggingface/datasets/issues/1167/events
https://github.com/huggingface/datasets/issues/1167
757,722,921
MDU6SXNzdWU3NTc3MjI5MjE=
1,167
❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
{ "login": "pietrolesci", "id": 61748653, "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pietrolesci", "html_url": "https://github.com/pietrolesci", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "repos_url": "https://api.github.com/users/pietrolesci/repos", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "We're working on adding on-the-fly transforms in datasets.\r\nCurrently the only on-the-fly functions that can be applied are in `set_format` in which we transform the data in either numpy/torch/tf tensors or pandas.\r\nFor example\r\n```python\r\ndataset.set_format(\"torch\")\r\n```\r\napplies `torch.Tensor` to the dataset entries on-the-fly.\r\n\r\nWe plan to extend this to user-defined formatting transforms.\r\nFor example\r\n```python\r\ndataset.set_format(transform=tokenize)\r\n```\r\n\r\nWhat do you think ?" ]
1,607,187,776,000
1,610,534,888,000
null
NONE
null
Hi there, I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step". I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern. I guess the solution would entail wrapping a dataset into a Pytorch dataset. As a concrete example from the [docs](https://huggingface.co/transformers/custom_datasets.html) ```python import torch class SquadDataset(torch.utils.data.Dataset): def __init__(self, encodings): # instead of doing this beforehand, I'd like to do tokenization on the fly self.encodings = encodings def __getitem__(self, idx): return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} def __len__(self): return len(self.encodings.input_ids) train_dataset = SquadDataset(train_encodings) ``` How would one implement this with "on-the-fly" tokenization exploiting the vectorized capabilities of tokenizers? ---- Edit: I have come up with this solution. It does what I want, but I feel it's not very elegant ```python class CustomPytorchDataset(Dataset): def __init__(self): self.dataset = some_hf_dataset(...) self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") def __getitem__(self, batch_idx): instance = self.dataset[text_col][batch_idx] tokenized_text = self.tokenizer(instance, truncation=True, padding=True) return tokenized_text def __len__(self): return len(self.dataset) @staticmethod def collate_fn(batch): # batch is a list, however it will always contain 1 item because we should not use the # batch_size argument as batch_size is controlled by the sampler return {k: torch.tensor(v) for k, v in batch[0].items()} torch_ds = CustomPytorchDataset() # NOTE: batch_sampler returns list of integers and since here we have SequentialSampler # it returns: [1, 2, 3], [4, 5, 6], etc. - check calling `list(batch_sampler)` batch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True) # NOTE: no `batch_size` as now the it is controlled by the sampler! dl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1167/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1167/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1115/comments
https://api.github.com/repos/huggingface/datasets/issues/1115/events
https://github.com/huggingface/datasets/issues/1115
757,127,527
MDU6SXNzdWU3NTcxMjc1Mjc=
1,115
Incorrect URL for MRQA SQuAD train subset
{ "login": "jimmycode", "id": 6259768, "node_id": "MDQ6VXNlcjYyNTk3Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/6259768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jimmycode", "html_url": "https://github.com/jimmycode", "followers_url": "https://api.github.com/users/jimmycode/followers", "following_url": "https://api.github.com/users/jimmycode/following{/other_user}", "gists_url": "https://api.github.com/users/jimmycode/gists{/gist_id}", "starred_url": "https://api.github.com/users/jimmycode/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jimmycode/subscriptions", "organizations_url": "https://api.github.com/users/jimmycode/orgs", "repos_url": "https://api.github.com/users/jimmycode/repos", "events_url": "https://api.github.com/users/jimmycode/events{/privacy}", "received_events_url": "https://api.github.com/users/jimmycode/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "good catch !" ]
1,607,090,724,000
1,607,274,862,000
1,607,274,862,000
CONTRIBUTOR
null
https://github.com/huggingface/datasets/blob/4ef4c8f8b7a60e35c6fa21115fca9faae91c9f74/datasets/mrqa/mrqa.py#L53 The URL for `train+SQuAD` subset of MRQA points to the dev set instead of train set. It should be `https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1115/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1110/comments
https://api.github.com/repos/huggingface/datasets/issues/1110/events
https://github.com/huggingface/datasets/issues/1110
757,082,677
MDU6SXNzdWU3NTcwODI2Nzc=
1,110
Using a feature named "_type" fails with certain operations
{ "login": "dcfidalgo", "id": 15979778, "node_id": "MDQ6VXNlcjE1OTc5Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dcfidalgo", "html_url": "https://github.com/dcfidalgo", "followers_url": "https://api.github.com/users/dcfidalgo/followers", "following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}", "gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}", "starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions", "organizations_url": "https://api.github.com/users/dcfidalgo/orgs", "repos_url": "https://api.github.com/users/dcfidalgo/repos", "events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}", "received_events_url": "https://api.github.com/users/dcfidalgo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Thanks for reporting !\r\n\r\nIndeed this is a keyword in the library that is used to encode/decode features to a python dictionary that we can save/load to json.\r\nWe can probably change `_type` to something that is less likely to collide with user feature names.\r\nIn this case we would want something backward compatible though.\r\n\r\nFeel free to try a fix and open a PR, and to ping me if I can help :) " ]
1,607,086,593,000
1,610,535,208,000
null
CONTRIBUTOR
null
A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({"_type": ["whatever"]}).map() concatenate_datasets([ds]) # or simply Dataset(ds._data) ``` Context: We are using datasets to persist data coming from elasticsearch to feed to our pipeline, and elasticsearch has a `_type` field, hence the strange name of the column. Not sure if you wish to support this specific column name, but if you do I would be happy to try a fix and provide a PR. I already had a look into it and I think the culprit is the `datasets.features.generate_from_dict` function. It uses the hard-coded `_type` string to figure out if it reached the end of the nested feature object from a serialized dict. Best wishes and keep up the awesome work!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1110/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1103/comments
https://api.github.com/repos/huggingface/datasets/issues/1103/events
https://github.com/huggingface/datasets/issues/1103
757,016,820
MDU6SXNzdWU3NTcwMTY4MjA=
1,103
Add support to download kaggle datasets
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hey, I think this is great idea. Any plan to integrate kaggle private datasets loading to `datasets`?" ]
1,607,080,117,000
1,626,889,093,000
null
MEMBER
null
We can use an API key.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1103/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1103/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1102/comments
https://api.github.com/repos/huggingface/datasets/issues/1102/events
https://github.com/huggingface/datasets/issues/1102
757,016,515
MDU6SXNzdWU3NTcwMTY1MTU=
1,102
Add retries to download manager
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[ { "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false } ]
null
[]
1,607,080,091,000
1,608,651,246,000
1,608,651,246,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1102/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1064/comments
https://api.github.com/repos/huggingface/datasets/issues/1064/events
https://github.com/huggingface/datasets/issues/1064
756,382,186
MDU6SXNzdWU3NTYzODIxODY=
1,064
Links with 302 redirects are not supported
{ "login": "chameleonTK", "id": 6429850, "node_id": "MDQ6VXNlcjY0Mjk4NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6429850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chameleonTK", "html_url": "https://github.com/chameleonTK", "followers_url": "https://api.github.com/users/chameleonTK/followers", "following_url": "https://api.github.com/users/chameleonTK/following{/other_user}", "gists_url": "https://api.github.com/users/chameleonTK/gists{/gist_id}", "starred_url": "https://api.github.com/users/chameleonTK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chameleonTK/subscriptions", "organizations_url": "https://api.github.com/users/chameleonTK/orgs", "repos_url": "https://api.github.com/users/chameleonTK/repos", "events_url": "https://api.github.com/users/chameleonTK/events{/privacy}", "received_events_url": "https://api.github.com/users/chameleonTK/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi !\r\nThis kind of links is now supported by the library since #1316", "> Hi !\r\n> This kind of links is now supported by the library since #1316\r\n\r\nI updated links in TLC datasets to be the github links in this pull request \r\n https://github.com/huggingface/datasets/pull/1737\r\n\r\nEverything works now. Thank you." ]
1,607,015,083,000
1,610,592,685,000
1,610,592,685,000
CONTRIBUTOR
null
I have an issue adding this download link: https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz It might be because it is not a direct link (it returns 302 and redirects to AWS, which returns 403 for HEAD requests). ``` r.head("https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz", allow_redirects=True) # <Response [403]> ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1064/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1046/comments
https://api.github.com/repos/huggingface/datasets/issues/1046/events
https://github.com/huggingface/datasets/issues/1046
756,122,709
MDU6SXNzdWU3NTYxMjI3MDk=
1,046
Dataset.map() turns tensors into lists?
{ "login": "tombosc", "id": 5270804, "node_id": "MDQ6VXNlcjUyNzA4MDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5270804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tombosc", "html_url": "https://github.com/tombosc", "followers_url": "https://api.github.com/users/tombosc/followers", "following_url": "https://api.github.com/users/tombosc/following{/other_user}", "gists_url": "https://api.github.com/users/tombosc/gists{/gist_id}", "starred_url": "https://api.github.com/users/tombosc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tombosc/subscriptions", "organizations_url": "https://api.github.com/users/tombosc/orgs", "repos_url": "https://api.github.com/users/tombosc/repos", "events_url": "https://api.github.com/users/tombosc/events{/privacy}", "received_events_url": "https://api.github.com/users/tombosc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "A solution is to have the tokenizer return a list instead of a tensor, and then use `dataset_tok.set_format(type = 'torch')` to convert that list into a tensor. Still not sure if bug.", "It is expected behavior, you should set the format to `\"torch\"` as you mentioned to get pytorch tensors back.\r\nBy default datasets returns pure python objects." ]
1,606,995,826,000
1,608,731,472,000
null
NONE
null
I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists! ```import datasets import torch from datasets import load_dataset print("version datasets", datasets.__version__) dataset = load_dataset("snli", split='train[0:50]') def tokenizer_fn(example): # actually uses a tokenizer which does something like: return {'input_ids': torch.tensor([[0, 1, 2]])} print("First item in dataset:\n", dataset[0]) tokenized = tokenizer_fn(dataset[0]) print("Tokenized hyp:\n", tokenized) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) ``` The output is: ``` version datasets 1.1.3 Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c) First item in dataset: {'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1} Tokenized hyp: {'input_ids': tensor([[0, 1, 2]])} Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow Tokenized using map: {'input_ids': [[0, 1, 2]]} <class 'torch.Tensor'> <class 'list'> ``` Or am I doing something wrong?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1046/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1027/comments
https://api.github.com/repos/huggingface/datasets/issues/1027/events
https://github.com/huggingface/datasets/issues/1027
755,695,420
MDU6SXNzdWU3NTU2OTU0MjA=
1,027
Hi
{ "login": "suemori87", "id": 75398394, "node_id": "MDQ6VXNlcjc1Mzk4Mzk0", "avatar_url": "https://avatars.githubusercontent.com/u/75398394?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suemori87", "html_url": "https://github.com/suemori87", "followers_url": "https://api.github.com/users/suemori87/followers", "following_url": "https://api.github.com/users/suemori87/following{/other_user}", "gists_url": "https://api.github.com/users/suemori87/gists{/gist_id}", "starred_url": "https://api.github.com/users/suemori87/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suemori87/subscriptions", "organizations_url": "https://api.github.com/users/suemori87/orgs", "repos_url": "https://api.github.com/users/suemori87/repos", "events_url": "https://api.github.com/users/suemori87/events{/privacy}", "received_events_url": "https://api.github.com/users/suemori87/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,606,952,834,000
1,607,013,761,000
1,607,013,761,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1027/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1027/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1026/comments
https://api.github.com/repos/huggingface/datasets/issues/1026/events
https://github.com/huggingface/datasets/issues/1026
755,689,195
MDU6SXNzdWU3NTU2ODkxOTU=
1,026
Lío o
{ "login": "Isaias0", "id": 73465581, "node_id": "MDQ6VXNlcjczNDY1NTgx", "avatar_url": "https://avatars.githubusercontent.com/u/73465581?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Isaias0", "html_url": "https://github.com/Isaias0", "followers_url": "https://api.github.com/users/Isaias0/followers", "following_url": "https://api.github.com/users/Isaias0/following{/other_user}", "gists_url": "https://api.github.com/users/Isaias0/gists{/gist_id}", "starred_url": "https://api.github.com/users/Isaias0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Isaias0/subscriptions", "organizations_url": "https://api.github.com/users/Isaias0/orgs", "repos_url": "https://api.github.com/users/Isaias0/repos", "events_url": "https://api.github.com/users/Isaias0/events{/privacy}", "received_events_url": "https://api.github.com/users/Isaias0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,606,951,945,000
1,607,013,767,000
1,607,013,767,000
NONE
null
````l````````` ``` O ``` ````` Ño ``` ```` ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1026/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1004
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1004/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1004/comments
https://api.github.com/repos/huggingface/datasets/issues/1004/events
https://github.com/huggingface/datasets/issues/1004
755,325,368
MDU6SXNzdWU3NTUzMjUzNjg=
1,004
how large datasets are handled under the hood
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This library uses Apache Arrow under the hood to store datasets on disk.\r\nThe advantage of Apache Arrow is that it allows to memory map the dataset. This allows to load datasets bigger than memory and with almost no RAM usage. It also offers excellent I/O speed.\r\n\r\nFor example when you access one element or one batch\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nsquad = load_dataset(\"squad\", split=\"train\")\r\nfirst_element = squad[0]\r\none_batch = squad[:8]\r\n```\r\n\r\nthen only this element/batch is loaded in memory, while the rest of the dataset is memory mapped.", "How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory.\r\n\r\nEDIT:\r\nMy fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.", "> How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory.\r\n\r\nLoading arrow data from disk is done with memory-mapping. This allows to load huge datasets without filling your RAM.\r\nMemory mapping is almost instantaneous and is done within one process.\r\n\r\nThen, the speed of querying examples from the dataset is I/O bounded depending on your disk. If it's an SSD then fetching examples from the dataset will be very fast.\r\nBut since the I/O speed of an SSD is lower than the one of RAM it's expected to be slower to fetch data from disk than from memory.\r\nStill, if you load the dataset in different processes then it can be faster but there will still be the I/O bottleneck of the disk.\r\n\r\n> EDIT:\r\n> My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.\r\n\r\nOk let me know if that helps !\r\n" ]
1,606,919,560,000
1,612,175,031,000
null
NONE
null
Hi I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1004/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/996/comments
https://api.github.com/repos/huggingface/datasets/issues/996/events
https://github.com/huggingface/datasets/issues/996
755,176,084
MDU6SXNzdWU3NTUxNzYwODQ=
996
NotADirectoryError while loading the CNN/Dailymail dataset
{ "login": "arc-bu", "id": 75367920, "node_id": "MDQ6VXNlcjc1MzY3OTIw", "avatar_url": "https://avatars.githubusercontent.com/u/75367920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arc-bu", "html_url": "https://github.com/arc-bu", "followers_url": "https://api.github.com/users/arc-bu/followers", "following_url": "https://api.github.com/users/arc-bu/following{/other_user}", "gists_url": "https://api.github.com/users/arc-bu/gists{/gist_id}", "starred_url": "https://api.github.com/users/arc-bu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arc-bu/subscriptions", "organizations_url": "https://api.github.com/users/arc-bu/orgs", "repos_url": "https://api.github.com/users/arc-bu/repos", "events_url": "https://api.github.com/users/arc-bu/events{/privacy}", "received_events_url": "https://api.github.com/users/arc-bu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Looks like the google drive download failed.\r\nI'm getting a `Google Drive - Quota exceeded` error while looking at the downloaded file.\r\n\r\nWe should consider finding a better host than google drive for this dataset imo\r\nrelated : #873 #864 ", "It is working now, thank you. \r\n\r\nShould I leave this issue open to address the Quota-exceeded error?", "Yes please. It's been happening several times, we definitely need to address it", "Any updates on this one? I'm facing a similar issue trying to add CelebA.", "I've looked into it and couldn't find a solution. This looks like a Google Drive limitation..\r\nPlease try to use other hosts when possible", "The original links are google drive links. Would it be feasible for HF to maintain their own servers for this? Also, I think the same issue must also exist with TFDS.", "It's possible to host data on our side but we should ask the authors. TFDS has the same issue and doesn't have a solution either afaik.\r\nOtherwise you can use the google drive link, but it it's not that convenient because of this quota issue.", "Okay. I imagine asking every author who shares their dataset on Google Drive will also be cumbersome.", "I am getting this error as well. Is there a fix?", "Not as long as the data is stored on GG drive unfortunately.\r\nMaybe we can ask if there's a mirror ?\r\n\r\nHi @JafferWilson is there a download link to get cnn dailymail from another host than GG drive ?\r\n\r\nTo give you some context, this library provides tools to download and process datasets. For CNN DailyMail the data are downloaded from the link you provide on your github repository. Unfortunately because of GG drive quotas, many users are not able to load this dataset.", "The following copy of CNN/DM dataset, fixed the problem for me:\r\nhttps://huggingface.co/datasets/ccdv/cnn_dailymail" ]
1,606,907,276,000
1,639,898,876,000
null
NONE
null
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/996/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/993/comments
https://api.github.com/repos/huggingface/datasets/issues/993/events
https://github.com/huggingface/datasets/issues/993
755,135,768
MDU6SXNzdWU3NTUxMzU3Njg=
993
Problem downloading amazon_reviews_multi
{ "login": "hfawaz", "id": 29229602, "node_id": "MDQ6VXNlcjI5MjI5NjAy", "avatar_url": "https://avatars.githubusercontent.com/u/29229602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hfawaz", "html_url": "https://github.com/hfawaz", "followers_url": "https://api.github.com/users/hfawaz/followers", "following_url": "https://api.github.com/users/hfawaz/following{/other_user}", "gists_url": "https://api.github.com/users/hfawaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/hfawaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hfawaz/subscriptions", "organizations_url": "https://api.github.com/users/hfawaz/orgs", "repos_url": "https://api.github.com/users/hfawaz/repos", "events_url": "https://api.github.com/users/hfawaz/events{/privacy}", "received_events_url": "https://api.github.com/users/hfawaz/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @hfawaz ! This is working fine for me. Is it a repeated occurence? Have you tried from the latest verion?", "Hi, it seems a connection problem. \r\nNow it says: \r\n`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_ja_train.json`" ]
1,606,904,157,000
1,607,074,693,000
null
CONTRIBUTOR
null
Thanks for adding the dataset. After trying to load the dataset, I am getting the following error: `ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json ` I used the following code to load the dataset: `load_dataset( dataset_name, "all_languages", cache_dir=".data" )` I am using version 1.1.3 of `datasets` Note that I can perform a successfull `wget https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/993/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/993/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/988/comments
https://api.github.com/repos/huggingface/datasets/issues/988/events
https://github.com/huggingface/datasets/issues/988
755,069,159
MDU6SXNzdWU3NTUwNjkxNTk=
988
making sure datasets are not loaded in memory and distributed training of them
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "my implementation of sharding per TPU core: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/trainers/t5_trainer.py#L316 \r\nmy implementation of dataloader for this case https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/tasks/tasks.py#L496 " ]
1,606,898,715,000
1,606,899,034,000
null
CONTRIBUTOR
null
Hi I am dealing with large-scale datasets which I need to train distributedly, I used the shard function to divide the dataset across the cores, without any sampler, this does not work for distributed training and does not become any faster than 1 TPU core. 1) how I can make sure data is not loaded in memory 2) in case of distributed training with iterative datasets which measures needs to be taken? Is this all sharding the data only. I was wondering if there can be possibility for me to discuss this with someone with distributed training with iterative datasets using dataset library. thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/988/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/961/comments
https://api.github.com/repos/huggingface/datasets/issues/961/events
https://github.com/huggingface/datasets/issues/961
754,434,398
MDU6SXNzdWU3NTQ0MzQzOTg=
961
sample multiple datasets
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195 \r\n\r\nI need to train my model distributedly with this dataloader, \"MultiTasksataloader\", currently this does not work in distributed fasion,\r\nto save on memory I tried to use iterative datasets, could you have a look in this dataloader and tell me if this is indeed the case? not sure how to make datasets being iterative to not load them in memory, then I remove the sampler for dataloader, and shard the data per core, could you tell me please how I should implement this case in datasets library? and how do you find my implementation in terms of correctness? thanks \r\n" ]
1,606,832,402,000
1,606,872,764,000
null
CONTRIBUTOR
null
Hi I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is: - I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it sub-questions: - I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do? - I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/961/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/942/comments
https://api.github.com/repos/huggingface/datasets/issues/942/events
https://github.com/huggingface/datasets/issues/942
754,162,318
MDU6SXNzdWU3NTQxNjIzMTg=
942
D
{ "login": "CryptoMiKKi", "id": 74238514, "node_id": "MDQ6VXNlcjc0MjM4NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/74238514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CryptoMiKKi", "html_url": "https://github.com/CryptoMiKKi", "followers_url": "https://api.github.com/users/CryptoMiKKi/followers", "following_url": "https://api.github.com/users/CryptoMiKKi/following{/other_user}", "gists_url": "https://api.github.com/users/CryptoMiKKi/gists{/gist_id}", "starred_url": "https://api.github.com/users/CryptoMiKKi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CryptoMiKKi/subscriptions", "organizations_url": "https://api.github.com/users/CryptoMiKKi/orgs", "repos_url": "https://api.github.com/users/CryptoMiKKi/repos", "events_url": "https://api.github.com/users/CryptoMiKKi/events{/privacy}", "received_events_url": "https://api.github.com/users/CryptoMiKKi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,606,810,630,000
1,607,013,773,000
1,607,013,773,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/942/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/937/comments
https://api.github.com/repos/huggingface/datasets/issues/937/events
https://github.com/huggingface/datasets/issues/937
753,921,078
MDU6SXNzdWU3NTM5MjEwNzg=
937
Local machine/cluster Beam Datasets example/tutorial
{ "login": "shangw-nvidia", "id": 66387198, "node_id": "MDQ6VXNlcjY2Mzg3MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shangw-nvidia", "html_url": "https://github.com/shangw-nvidia", "followers_url": "https://api.github.com/users/shangw-nvidia/followers", "following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}", "gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}", "starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions", "organizations_url": "https://api.github.com/users/shangw-nvidia/orgs", "repos_url": "https://api.github.com/users/shangw-nvidia/repos", "events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}", "received_events_url": "https://api.github.com/users/shangw-nvidia/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.\r\nFrom my experience the DirectRunner is fine though, even if it's clearly not memory efficient.\r\n\r\nIt would be awesome though to make it work locally on a SparkRunner !\r\nDid you manage to make your processing work ?" ]
1,606,785,103,000
1,608,731,696,000
null
NONE
null
Hi, I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has an non-GCP or non-Dataflow version example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner, however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get either runner correctly producing the desired output. Thanks! Shang
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/937/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/927/comments
https://api.github.com/repos/huggingface/datasets/issues/927/events
https://github.com/huggingface/datasets/issues/927
753,679,020
MDU6SXNzdWU3NTM2NzkwMjA=
927
Hello
{ "login": "k125-ak", "id": 75259546, "node_id": "MDQ6VXNlcjc1MjU5NTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/75259546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/k125-ak", "html_url": "https://github.com/k125-ak", "followers_url": "https://api.github.com/users/k125-ak/followers", "following_url": "https://api.github.com/users/k125-ak/following{/other_user}", "gists_url": "https://api.github.com/users/k125-ak/gists{/gist_id}", "starred_url": "https://api.github.com/users/k125-ak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/k125-ak/subscriptions", "organizations_url": "https://api.github.com/users/k125-ak/orgs", "repos_url": "https://api.github.com/users/k125-ak/repos", "events_url": "https://api.github.com/users/k125-ak/events{/privacy}", "received_events_url": "https://api.github.com/users/k125-ak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,606,758,605,000
1,606,758,630,000
1,606,758,630,000
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/927/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/919/comments
https://api.github.com/repos/huggingface/datasets/issues/919/events
https://github.com/huggingface/datasets/issues/919
753,434,472
MDU6SXNzdWU3NTM0MzQ0NzI=
919
wrong length with datasets
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Also, I cannot first convert it to torch format, since huggingface seq2seq_trainer codes process the datasets afterwards during datacollector function to make it optimize for TPUs. ", "sorry I misunderstood length of dataset with dataloader, closed. thanks " ]
1,606,739,019,000
1,606,739,847,000
1,606,739,846,000
CONTRIBUTOR
null
Hi I have a MRPC dataset which I convert it to seq2seq format, then this is of this format: `Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10) ` I feed it to a dataloader: ``` dataloader = DataLoader( train_dataset, batch_size=self.args.train_batch_size, sampler=train_sampler, collate_fn=self.data_collator, drop_last=self.args.dataloader_drop_last, num_workers=self.args.dataloader_num_workers, ) ``` now if I type len(dataloader) this is 1, which is wrong, and this needs to be 10. could you assist me please? thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/919/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/915/comments
https://api.github.com/repos/huggingface/datasets/issues/915/events
https://github.com/huggingface/datasets/issues/915
753,118,481
MDU6SXNzdWU3NTMxMTg0ODE=
915
Shall we change the hashing to encoding to reduce potential replicated cache files?
{ "login": "zhuzilin", "id": 10428324, "node_id": "MDQ6VXNlcjEwNDI4MzI0", "avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhuzilin", "html_url": "https://github.com/zhuzilin", "followers_url": "https://api.github.com/users/zhuzilin/followers", "following_url": "https://api.github.com/users/zhuzilin/following{/other_user}", "gists_url": "https://api.github.com/users/zhuzilin/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhuzilin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhuzilin/subscriptions", "organizations_url": "https://api.github.com/users/zhuzilin/orgs", "repos_url": "https://api.github.com/users/zhuzilin/repos", "events_url": "https://api.github.com/users/zhuzilin/events{/privacy}", "received_events_url": "https://api.github.com/users/zhuzilin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "This is an interesting idea !\r\nDo you have ideas about how to approach the decoding and the normalization ?", "@lhoestq\r\nI think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can\r\n- decode all the current saved datasets to see if there is already one that is equivalent to the transformation we need now.\r\n- or, calculate all the possible hash value of the current chain for comparison so that we could continue to use hashing.\r\nIf we find one, we can adjust the list in `self._fingerprint` to it.\r\n\r\nAs for the transformation reordering rules, we can just start with some manual rules, like two sort on the same column should merge to one, filter and select can change orders.\r\n\r\nAnd for encoding and decoding, we can just manually specify `sort` is 0, `shuffling` is 2 and create a base-n number or use some general algorithm like `base64.urlsafe_b64encode`.\r\n\r\nBecause we are not doing lazy evaluation now, we may not be able to normalize the transformation to its minimal form. If we want to support that, we can provde a `Sequential` api and let user input a list or transformation, so that user would not use the intermediate datasets. This would look like tf.data.Dataset." ]
1,606,708,246,000
1,608,786,709,000
null
NONE
null
Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example, use `base64.urlsafe_b64encode`. In this way, before we want to save a new copy, we can decode the transformation chain and normalize it to prevent omit potential reuse. As the main targets of this project are the really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some write. If you have interest in this, I'd love to help :).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/915/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/911/comments
https://api.github.com/repos/huggingface/datasets/issues/911/events
https://github.com/huggingface/datasets/issues/911
752,806,215
MDU6SXNzdWU3NTI4MDYyMTU=
911
datasets module not found
{ "login": "sbassam", "id": 15836274, "node_id": "MDQ6VXNlcjE1ODM2Mjc0", "avatar_url": "https://avatars.githubusercontent.com/u/15836274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sbassam", "html_url": "https://github.com/sbassam", "followers_url": "https://api.github.com/users/sbassam/followers", "following_url": "https://api.github.com/users/sbassam/following{/other_user}", "gists_url": "https://api.github.com/users/sbassam/gists{/gist_id}", "starred_url": "https://api.github.com/users/sbassam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sbassam/subscriptions", "organizations_url": "https://api.github.com/users/sbassam/orgs", "repos_url": "https://api.github.com/users/sbassam/repos", "events_url": "https://api.github.com/users/sbassam/events{/privacy}", "received_events_url": "https://api.github.com/users/sbassam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "nvm, I'd made an assumption that the library gets installed with transformers. " ]
1,606,613,055,000
1,606,660,389,000
1,606,660,389,000
NONE
null
Currently, running `from datasets import load_dataset` will throw a `ModuleNotFoundError: No module named 'datasets'` error.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/911/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/911/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/910/comments
https://api.github.com/repos/huggingface/datasets/issues/910/events
https://github.com/huggingface/datasets/issues/910
752,772,723
MDU6SXNzdWU3NTI3NzI3MjM=
910
Grindr meeting app web.Grindr
{ "login": "jackin34", "id": 75184749, "node_id": "MDQ6VXNlcjc1MTg0NzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/75184749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jackin34", "html_url": "https://github.com/jackin34", "followers_url": "https://api.github.com/users/jackin34/followers", "following_url": "https://api.github.com/users/jackin34/following{/other_user}", "gists_url": "https://api.github.com/users/jackin34/gists{/gist_id}", "starred_url": "https://api.github.com/users/jackin34/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jackin34/subscriptions", "organizations_url": "https://api.github.com/users/jackin34/orgs", "repos_url": "https://api.github.com/users/jackin34/repos", "events_url": "https://api.github.com/users/jackin34/events{/privacy}", "received_events_url": "https://api.github.com/users/jackin34/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,606,599,383,000
1,606,644,711,000
1,606,644,711,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/910/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/900/comments
https://api.github.com/repos/huggingface/datasets/issues/900/events
https://github.com/huggingface/datasets/issues/900
752,214,066
MDU6SXNzdWU3NTIyMTQwNjY=
900
datasets.load_dataset() custom chaching directory bug
{ "login": "SapirWeissbuch", "id": 44585792, "node_id": "MDQ6VXNlcjQ0NTg1Nzky", "avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SapirWeissbuch", "html_url": "https://github.com/SapirWeissbuch", "followers_url": "https://api.github.com/users/SapirWeissbuch/followers", "following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}", "gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}", "starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions", "organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs", "repos_url": "https://api.github.com/users/SapirWeissbuch/repos", "events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}", "received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting ! I'm looking into it." ]
1,606,479,533,000
1,606,690,133,000
1,606,690,133,000
NONE
null
Hello, I'm having issue with loading a dataset with a custom `cache_dir`. Despite specifying the output dir, it is still downloaded to `~/.cache`. ## Environment info - `datasets` version: 1.1.3 - Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1 - Python version: 3.7.3 ## The code I'm running: ```python import datasets from pathlib import Path validation_dataset = datasets.load_dataset("natural_questions", split="validation[:5%]", cache_dir=Path("./data")) ``` ## The output: * The dataset is downloaded to my home directory's `.cache` * A new empty directory named "`natural_questions` is created in the specified directory `.data` * `tree data` in the shell outputs: ``` data └── natural_questions └── default └── 0.0.2 3 directories, 0 files ``` The output: ``` Downloading: 8.61kB [00:00, 5.11MB/s] Downloading: 13.6kB [00:00, 7.89MB/s] Using custom data configuration default Downloading and preparing dataset natural_questions/default (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size, total: 134.92 GiB) to ./data/natural_questions/default/0.0.2/867dbbaf9137c1b8 3ecb19f5eb80559e1002ea26e702c6b919cfa81a17a8c531... Downloading: 100%|██████████████████████████████████████████████████| 13.6k/13.6k [00:00<00:00, 1.51MB/s] Downloading: 7%|███▎ | 6.70G/97.4G [03:46<1:37:05, 15.6MB/s] ``` ## Expected behaviour: The dataset "Natural Questions" should be downloaded to the directory "./data"
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/900/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/897/comments
https://api.github.com/repos/huggingface/datasets/issues/897/events
https://github.com/huggingface/datasets/issues/897
752,100,256
MDU6SXNzdWU3NTIxMDAyNTY=
897
Dataset viewer issues
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thanks for reporting !\r\ncc @srush for the empty feature list issue and the encoding issue\r\ncc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?", "Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewer`, let me know because I'll need to change our nginx config at the same time", "9", "‏⠀‏‏‏⠀‏‏‏⠀ ‏⠀ ", "‏⠀‏‏‏⠀‏‏‏⠀ ‏⠀ " ]
1,606,468,474,000
1,635,671,521,000
1,635,671,521,000
CONTRIBUTOR
null
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though: - the URL is still under `nlp`, perhaps an alias for `datasets` can be made - when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user ```bash IndexError: list index out of range Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 316, in <module> st.table(style) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta rv = marshall_element(msg.delta.new_element) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element return method(dg, element, *args, **kwargs) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table data_frame_proto.marshall_data_frame(data, element.table) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame _marshall_styles(proto_df.style, df, styler) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles translated_style = styler._translate() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate * (len(clabels[0]) - len(hidden_columns)) ``` - there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighteris used, and the special characters are coded correctly.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/897/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/888/comments
https://api.github.com/repos/huggingface/datasets/issues/888/events
https://github.com/huggingface/datasets/issues/888
750,944,422
MDU6SXNzdWU3NTA5NDQ0MjI=
888
Nested lists are zipped unexpectedly
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`.\r\nSee the [documentation](https://huggingface.co/docs/datasets/features.html?highlight=features) for more details", "Thanks.\r\nThis is a bit (very) confusing, but I guess if its intended, I'll just work with it as if its how my data was originally structured :) \r\n" ]
1,606,320,466,000
1,606,325,439,000
1,606,325,439,000
CONTRIBUTOR
null
I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ "middle": [ {"bottom": 1}, {"bottom": 2} ] }] } ``` I then load my dataset: ```python train = load_dataset("my dataset")["train"] ``` and expect to be able to access `data[0]["top"][0]["middle"][0]`. That is not the case. Here is `data[0]` as JSON: ```json {"top": {"middle": [{"bottom": [1, 2]}]}} ``` Clearly different than the thing I inputted. ```json {"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]} ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/888/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/887/comments
https://api.github.com/repos/huggingface/datasets/issues/887/events
https://github.com/huggingface/datasets/issues/887
750,868,831
MDU6SXNzdWU3NTA4Njg4MzE=
887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Yes right now `ArrayXD` can only be used as a column feature type, not a subtype.\r\nWith the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype=\"float32\")` for example since the [underlying arrow type](https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L236) allows dynamic sizes.\r\n\r\nFor now I'd suggest the use of nested `Sequence` types. Once we have the dynamic sizes you can update the dataset.\r\nWhat do you think ?", "> Yes right now ArrayXD can only be used as a column feature type, not a subtype. \r\n\r\nMeaning it can't be nested under `Sequence`?\r\nIf so, for now I'll just make it a python list and make it with the nested `Sequence` type you suggested.", "Yea unfortunately..\r\nThat's a current limitation with Arrow ExtensionTypes that can't be used in the default Arrow Array objects.\r\nWe already have an ExtensionArray that allows us to use them as column types but not for subtypes.\r\nMaybe we can extend it, I haven't experimented with that yet", "Cool\r\nSo please consider this issue as a feature request for:\r\n```\r\nArray3D(shape=(None, 137, 2), dtype=\"float32\")\r\n```\r\n\r\nits a way to represent videos, poses, and other cool sequences", "@lhoestq well, so sequence of sequences doesn't work either...\r\n\r\n```\r\npyarrow.lib.ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648\r\n```\r\n\r\n\r\n", "Working with Arrow can be quite fun sometimes.\r\nYou can fix this issue by trying to reduce the writer batch size (same trick than the one used to reduce the RAM usage in https://github.com/huggingface/datasets/issues/741).\r\n\r\nLet me know if it works.\r\nI haven't investigated yet on https://github.com/huggingface/datasets/issues/741 since I was preparing this week's sprint to add datasets but this is in my priority list for early next week.", "The batch size fix doesn't work... not for #741 and not for this dataset I'm trying (DGS corpus)\r\nLoading the DGS corpus takes 400GB of RAM, which is fine with me as my machine is large enough\r\n", "Sorry it doesn't work. Will let you know once I fixed it", "Hi @lhoestq , any update on dynamic sized arrays?\r\n(`Array3D(shape=(None, 137, 2), dtype=\"float32\")`)", "Not yet, I've been pretty busy with the dataset sprint lately but this is something that's been asked several times already. So I'll definitely work on this as soon as I'm done with the sprint and with the RAM issue you reported.", "Hi @lhoestq,\r\nAny chance you have some updates on the supporting `ArrayXD` as a subtype or support of dynamic sized arrays?\r\n\r\ne.g.:\r\n`datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype=\"float32\"))`\r\n`Array3D(shape=(None, 137, 2), dtype=\"float32\")`", "Hi ! We haven't worked in this lately and it's not in our very short-term roadmap since it requires a bit a work to make it work with arrow. Though this will definitely be added at one point.", "@lhoestq, thanks for the update.\r\n\r\nI actually tried to modify some piece of code to make it work. Can you please tell if I missing anything here?\r\nI think that for vast majority of cases it's enough to make first dimension of the array dynamic i.e. `shape=(None, 100, 100)`. 
For that, it's enough to modify class [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9ca24250ea44e7611c4dabd01ecf9415a7f0be6c/src/datasets/features.py#L397) to output list of arrays of different sizes instead of list of arrays of same sizes (current version)\r\nBelow are my modifications of this class.\r\n\r\n```\r\nclass ArrayExtensionArray(pa.ExtensionArray):\r\n def __array__(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n return self.to_numpy(zero_copy_only=zero_copy_only)\r\n\r\n def __getitem__(self, i):\r\n return self.storage[i]\r\n\r\n def to_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n size = 1\r\n for i in range(self.type.ndims):\r\n size *= self.type.shape[i]\r\n storage = storage.flatten()\r\n numpy_arr = storage.to_numpy(zero_copy_only=zero_copy_only)\r\n numpy_arr = numpy_arr.reshape(len(self), *self.type.shape)\r\n return numpy_arr\r\n\r\n def to_list_of_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n shape = self.type.shape\r\n arrays = []\r\n for dim in range(1, self.type.ndims):\r\n assert shape[dim] is not None, f\"Support only dynamic size on first dimension. Got: {shape}\"\r\n\r\n first_dim_offsets = np.array([off.as_py() for off in storage.offsets])\r\n for i in range(len(storage)):\r\n storage_el = storage[i:i+1]\r\n first_dim = first_dim_offsets[i+1] - first_dim_offsets[i]\r\n # flatten storage\r\n for dim in range(self.type.ndims):\r\n storage_el = storage_el.flatten()\r\n\r\n numpy_arr = storage_el.to_numpy(zero_copy_only=zero_copy_only)\r\n arrays.append(numpy_arr.reshape(first_dim, *shape[1:]))\r\n\r\n return arrays\r\n\r\n def to_pylist(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n if self.type.shape[0] is None:\r\n return self.to_list_of_numpy(zero_copy_only=zero_copy_only)\r\n else:\r\n return self.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n```\r\n\r\nI ran few tests and it works as expected. Let me know what you think.", "Thanks for diving into this !\r\n\r\nIndeed focusing on making the first dimensions dynamic make total sense (and users could still re-order their dimensions to match this constraint).\r\nYour code looks great :) I think it can even be extended to support several dynamic dimensions if we want to.\r\n\r\nFeel free to open a PR to include these changes, then we can update our test suite to make sure it works in all use cases.\r\nIn particular I think we might need a few tweaks to allow it to be converted to pandas (though I haven't tested yet):\r\n\r\n```python\r\nfrom datasets import Dataset, Features, Array3D\r\n\r\n# this works\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(1, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix]]})\r\nprint(d.to_pandas())\r\n\r\n# this should work as well\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(None, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix] * 2]})\r\nprint(d.to_pandas())\r\n```\r\n\r\nI'll be happy to help you on this :)" ]
1,606,314,741,000
1,631,207,020,000
null
CONTRIBUTOR
null
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/887/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/885/comments
https://api.github.com/repos/huggingface/datasets/issues/885/events
https://github.com/huggingface/datasets/issues/885
750,789,052
MDU6SXNzdWU3NTA3ODkwNTI=
885
Very slow cold-start
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Good point!", "Yes indeed. We can probably improve that by using lazy imports", "#1690 added fast start-up of the library " ]
1,606,308,478,000
1,610,537,485,000
1,610,537,485,000
CONTRIBUTOR
null
Hi, I expect when importing `datasets` that nothing major happens in the background, and so the import should be insignificant. When I load a metric, or a dataset, its fine that it takes time. The following ranges from 3 to 9 seconds: ``` python -m timeit -n 1 -r 1 'from datasets import load_dataset' ``` edit: sorry for the mis-tag, not sure how I added it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/885/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/885/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/883/comments
https://api.github.com/repos/huggingface/datasets/issues/883/events
https://github.com/huggingface/datasets/issues/883
749,750,801
MDU6SXNzdWU3NDk3NTA4MDE=
883
Downloading/caching only a part of a datasets' dataset.
{ "login": "SapirWeissbuch", "id": 44585792, "node_id": "MDQ6VXNlcjQ0NTg1Nzky", "avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SapirWeissbuch", "html_url": "https://github.com/SapirWeissbuch", "followers_url": "https://api.github.com/users/SapirWeissbuch/followers", "following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}", "gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}", "starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions", "organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs", "repos_url": "https://api.github.com/users/SapirWeissbuch/repos", "events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}", "received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
open
false
null
[]
null
[ "Not at the moment but we could likely support this feature.", "?", "I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger.\r\nThis makes the task impossible with limited memory resources." ]
1,606,227,918,000
1,606,485,115,000
null
NONE
null
Hi, I want to use the validation data *only* (of natural question). I don't want to have the whole dataset cached in my machine, just the dev set. Is this possible? I can't find a way to do it in the docs. Thank you, Sapir
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/883/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/880/comments
https://api.github.com/repos/huggingface/datasets/issues/880/events
https://github.com/huggingface/datasets/issues/880
748,949,606
MDU6SXNzdWU3NDg5NDk2MDY=
880
Add SQA
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "I’ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq ", "@thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinates` and `answer_texts` columns into true Python lists of tuples/strings:\r\n\r\n```\r\nimport pandas as pd\r\nimport ast\r\n\r\ndata = pd.read_csv(\"/content/sqa_data/random-split-1-dev.tsv\", sep='\\t')\r\n\r\ndef _parse_answer_coordinates(answer_coordinate_str):\r\n \"\"\"Parses the answer_coordinates of a question.\r\n Args:\r\n answer_coordinate_str: A string representation of a Python list of tuple\r\n strings.\r\n For example: \"['(1, 4)','(1, 3)', ...]\"\r\n \"\"\"\r\n\r\n try:\r\n answer_coordinates = []\r\n # make a list of strings\r\n coords = ast.literal_eval(answer_coordinate_str)\r\n # parse each string as a tuple\r\n for row_index, column_index in sorted(\r\n ast.literal_eval(coord) for coord in coords):\r\n answer_coordinates.append((row_index, column_index))\r\n except SyntaxError:\r\n raise ValueError('Unable to evaluate %s' % answer_coordinate_str)\r\n \r\n return answer_coordinates\r\n\r\n\r\ndef _parse_answer_text(answer_text):\r\n \"\"\"Populates the answer_texts field of `answer` by parsing `answer_text`.\r\n Args:\r\n answer_text: A string representation of a Python list of strings.\r\n For example: \"[u'test', u'hello', ...]\"\r\n \"\"\"\r\n try:\r\n answer = []\r\n for value in ast.literal_eval(answer_text):\r\n answer.append(value)\r\n except SyntaxError:\r\n raise ValueError('Unable to evaluate %s' % answer_text)\r\n\r\n return answer\r\n\r\ndata['answer_coordinates'] = data['answer_coordinates'].apply(lambda coords_str: _parse_answer_coordinates(coords_str))\r\ndata['answer_text'] = data['answer_text'].apply(lambda txt: _parse_answer_text(txt))\r\n```\r\n\r\nHere I'm using Pandas to read in one of the TSV files (the dev set). \r\n\r\n", "Closing since SQA was added in #1566 " ]
1,606,149,115,000
1,608,731,904,000
1,608,731,903,000
NONE
null
## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/ - **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253 - **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71). Note 1: this dataset actually consists of 2 types of files: 1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test) 2) a folder of csv files, which contain the actual tabular data Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub. Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/880/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/879/comments
https://api.github.com/repos/huggingface/datasets/issues/879/events
https://github.com/huggingface/datasets/issues/879
748,848,847
MDU6SXNzdWU3NDg4NDg4NDc=
879
boolq does not load
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "Hi ! It runs on my side without issues. I tried\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"boolq\")\r\n```\r\n\r\nWhat version of datasets and tensorflow are your runnning ?\r\nAlso if you manage to get a minimal reproducible script (on google colab for example) that would be useful.", "hey\ni do the exact same commands. for me it fails i guess might be issues with\ncaching maybe?\nthanks\nbest\nrabeeh\n\nOn Tue, Nov 24, 2020, 10:24 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Hi ! It runs on my side without issues. I tried\n>\n> from datasets import load_datasetload_dataset(\"boolq\")\n>\n> What version of datasets and tensorflow are your runnning ?\n> Also if you manage to get a minimal reproducible script (on google colab\n> for example) that would be useful.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/879#issuecomment-732769114>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGDR2FUMRKZTIY5CTSRN3VXANCNFSM4T7R3U6A>\n> .\n>\n", "Could you check if it works on the master branch ?\r\nYou can use `load_dataset(\"boolq\", script_version=\"master\")` to do so.\r\nWe did some changes recently in boolq to remove the TF dependency and we changed the way the data files are downloaded in https://github.com/huggingface/datasets/pull/881" ]
1,606,141,708,000
1,606,485,071,000
null
CONTRIBUTOR
null
Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been" FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/879/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/878/comments
https://api.github.com/repos/huggingface/datasets/issues/878/events
https://github.com/huggingface/datasets/issues/878
748,621,981
MDU6SXNzdWU3NDg2MjE5ODE=
878
Loading Data From S3 Path in Sagemaker
{ "login": "mahesh1amour", "id": 42795522, "node_id": "MDQ6VXNlcjQyNzk1NTIy", "avatar_url": "https://avatars.githubusercontent.com/u/42795522?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mahesh1amour", "html_url": "https://github.com/mahesh1amour", "followers_url": "https://api.github.com/users/mahesh1amour/followers", "following_url": "https://api.github.com/users/mahesh1amour/following{/other_user}", "gists_url": "https://api.github.com/users/mahesh1amour/gists{/gist_id}", "starred_url": "https://api.github.com/users/mahesh1amour/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mahesh1amour/subscriptions", "organizations_url": "https://api.github.com/users/mahesh1amour/orgs", "repos_url": "https://api.github.com/users/mahesh1amour/repos", "events_url": "https://api.github.com/users/mahesh1amour/events{/privacy}", "received_events_url": "https://api.github.com/users/mahesh1amour/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
open
false
null
[]
null
[ "This would be a neat feature", "> neat feature\r\n\r\nI dint get these clearly, can you please elaborate like how to work on these ", "It could maybe work almost out of the box just by using `cached_path` in the text/csv/json scripts, no?", "Thanks thomwolf and julien-c\r\n\r\nI'm still confusion on what you guys said, \r\n\r\nI have solved the problem as follows:\r\n\r\n1. read the csv file using pandas from s3 \r\n2. Convert to dictionary key as column name and values as list column data\r\n3. convert it to Dataset using \r\n`from datasets import Dataset`\r\n`train_dataset = Dataset.from_dict(train_dict)`", "We were brainstorming around your use-case.\r\n\r\nLet's keep the issue open for now, I think this is an interesting question to think about.", "> We were brainstorming around your use-case.\r\n> \r\n> Let's keep the issue open for now, I think this is an interesting question to think about.\r\n\r\nSure thomwolf, Thanks for your concern ", "I agree it would be cool to have that feature. Also that's good to know that pandas supports this.\r\nFor the moment I'd suggest to first download the files locally as thom suggested and then load the dataset by providing paths to the local files", "Don't get\n", "Any updates on this issue?\r\nI face a similar issue. I have many parquet files in S3 and I would like to train on them. \r\nTo be honest I even face issues with only getting the last layer embedding out of them.", "Hi dorlavie, \r\nYou can find one solution that i have mentioned above, that can help you. \r\nAnd there is one more solution also which is downloading files locally\r\n", "> Hi dorlavie,\r\n> You can find one solution that i have mentioned above, that can help you.\r\n> And there is one more solution also which is downloading files locally\r\n\r\nmahesh1amour, thanks for the fast reply\r\n\r\nUnfortunately, in my case I can not read with pandas. The dataset is too big (50GB). \r\nIn addition, due to security concerns I am not allowed to save the data locally", "@dorlavie could use `boto3` to download the data to your local machine and then load it with `dataset`\r\n\r\nboto3 example [documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-example-download-file.html)\r\n```python\r\nimport boto3\r\n\r\ns3 = boto3.client('s3')\r\ns3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME')\r\n```\r\n\r\ndatasets example [documentation](https://huggingface.co/docs/datasets/loading_datasets.html)\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files=['my_file_1.csv', 'my_file_2.csv', 'my_file_3.csv'])\r\n```\r\n", "Thanks @philschmid for the suggestion.\r\nAs I mentioned in the previous comment, due to security issues I can not save the data locally.\r\nI need to read it from S3 and process it directly.\r\n\r\nI guess that many other people try to train / fit those models on huge datasets (e.g entire Wiki), what is the best practice in those cases?", "If I understand correctly you're not allowed to write data on disk that you downloaded from S3 for example ?\r\nOr is it the use of the `boto3` library that is not allowed in your case ?", "@lhoestq yes you are correct.\r\nI am not allowed to save the \"raw text\" locally - The \"raw text\" must be saved only on S3.\r\nI am allowed to save the output of any model locally. \r\nIt doesn't matter how I do it boto3/pandas/pyarrow, it is forbidden", "@dorlavie are you using sagemaker for training too? 
Then you could use S3 URI, for example `s3://my-bucket/my-training-data` and pass it within the `.fit()` function when you start the sagemaker training job. Sagemaker would then download the data from s3 into the training runtime and you could load it from disk\r\n\r\n**sagemaker start training job**\r\n```python\r\npytorch_estimator.fit({'train':'s3://my-bucket/my-training-data','eval':'s3://my-bucket/my-evaluation-data'})\r\n```\r\n\r\n**in the train.py script**\r\n```python\r\nfrom datasets import load_from_disk\r\n\r\ntrain_dataset = load_from_disk(os.environ['SM_CHANNEL_TRAIN'])\r\n```\r\n\r\nI have created an example of how to use transformers and datasets with sagemaker. \r\nhttps://github.com/philschmid/huggingface-sagemaker-example/tree/main/03_huggingface_sagemaker_trainer_with_data_from_s3\r\n\r\nThe example contains a jupyter notebook `sagemaker-example.ipynb` and an `src/` folder. The sagemaker-example is a jupyter notebook that is used to create the training job on AWS Sagemaker. The `src/` folder contains the `train.py`, our training script, and `requirements.txt` for additional dependencies.\r\n\r\n" ]
1,606,123,042,000
1,608,717,188,000
null
NONE
null
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/878/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/878/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/877/comments
https://api.github.com/repos/huggingface/datasets/issues/877/events
https://github.com/huggingface/datasets/issues/877
748,234,438
MDU6SXNzdWU3NDgyMzQ0Mzg=
877
DataLoader(datasets) become more and more slowly within iterations
{ "login": "shexuan", "id": 25664170, "node_id": "MDQ6VXNlcjI1NjY0MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shexuan", "html_url": "https://github.com/shexuan", "followers_url": "https://api.github.com/users/shexuan/followers", "following_url": "https://api.github.com/users/shexuan/following{/other_user}", "gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shexuan/subscriptions", "organizations_url": "https://api.github.com/users/shexuan/orgs", "repos_url": "https://api.github.com/users/shexuan/repos", "events_url": "https://api.github.com/users/shexuan/events{/privacy}", "received_events_url": "https://api.github.com/users/shexuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\nDo you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\nIt would be nice to know whether it comes from the dataloader or not", "> Hi ! Thanks for reporting.\r\n> Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\n> It would be nice to know whether it comes from the dataloader or not\r\n\r\nI did not iter data from raw dataset, maybe I will test later. Now I iter all files directly from `open(file)`, around 20000it/s." ]
1,606,048,870,000
1,606,664,712,000
1,606,664,712,000
NONE
null
Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(lineloader): # do some thing for each line ``` In the begining, the loading speed is around 2000it/s, but after 1 minutes later, the speed is much slower, just around 800it/s. And when I set `num_workers=4` in DataLoader, the loading speed is much lower, just 130it/s. Could you please help me with this problem? Thanks a lot!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/877/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/876/comments
https://api.github.com/repos/huggingface/datasets/issues/876/events
https://github.com/huggingface/datasets/issues/876
748,195,104
MDU6SXNzdWU3NDgxOTUxMDQ=
876
imdb dataset cannot be loaded
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It looks like there was an issue while building the imdb dataset.\r\nCould you provide more information about your OS and the version of python and `datasets` ?\r\n\r\nAlso could you try again with \r\n```python\r\ndataset = datasets.load_dataset(\"imdb\", split=\"train\", download_mode=\"force_redownload\")\r\n```\r\nto make sure it's not a corrupted file issue ?", "I was using version 1.1.2 and this resolved with version 1.1.3, thanks. ", "Hello,\r\nI have the same pb with 1.8.0", "Hi ! I just tried in 1.8.0 and it worked fine. Can you try again ? Maybe the dataset host had some issues that are fixed now", "Hello,\r\nIt works fine now :) !\r\nThanks !" ]
1,606,033,483,000
1,637,924,836,000
1,608,831,527,000
CONTRIBUTOR
null
Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] >>> dataset = datasets.load_dataset("imdb", split="train") ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/876/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/875/comments
https://api.github.com/repos/huggingface/datasets/issues/875/events
https://github.com/huggingface/datasets/issues/875
748,194,311
MDU6SXNzdWU3NDgxOTQzMTE=
875
bug in boolq dataset loading
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I just opened a PR to fix this.\r\nThanks for reporting !" ]
1,606,033,114,000
1,606,212,753,000
1,606,212,753,000
CONTRIBUTOR
null
Hi I am trying to load boolq dataset: ``` import datasets datasets.load_dataset("boolq") ``` I am getting the following errors, thanks for your help ``` >>> import datasets 2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2020-11-22 09:16:30.070389: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. >>> datasets.load_dataset("boolq") cahce dir /idiap/temp/rkarimi/cache_home/datasets cahce dir /idiap/temp/rkarimi/cache_home/datasets Using custom data configuration default Downloading and preparing dataset boolq/default (download: 8.36 MiB, generated: 7.47 MiB, post-processed: Unknown size, total: 15.83 MiB) to /idiap/temp/rkarimi/cache_home/datasets/boolq/default/0.1.0/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11... cahce dir /idiap/temp/rkarimi/cache_home/datasets cahce dir /idiap/temp/rkarimi/cache_home/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom custom_download(url, path) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2 compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite) tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/875/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/874/comments
https://api.github.com/repos/huggingface/datasets/issues/874/events
https://github.com/huggingface/datasets/issues/874
748,193,140
MDU6SXNzdWU3NDgxOTMxNDA=
874
trec dataset unavailable
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This was fixed in #740 \r\nCould you try to update `datasets` and try again ?", "This has been fixed in datasets 1.1.3" ]
1,606,032,576,000
1,606,485,402,000
1,606,485,402,000
CONTRIBUTOR
null
Hi when I try to load the trec dataset I am getting these errors, thanks for your help `datasets.load_dataset("trec", split="train") ` ``` File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/874/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/873/comments
https://api.github.com/repos/huggingface/datasets/issues/873/events
https://github.com/huggingface/datasets/issues/873
747,959,523
MDU6SXNzdWU3NDc5NTk1MjM=
873
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
{ "login": "vishal-burman", "id": 19861874, "node_id": "MDQ6VXNlcjE5ODYxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishal-burman", "html_url": "https://github.com/vishal-burman", "followers_url": "https://api.github.com/users/vishal-burman/followers", "following_url": "https://api.github.com/users/vishal-burman/following{/other_user}", "gists_url": "https://api.github.com/users/vishal-burman/gists{/gist_id}", "starred_url": "https://api.github.com/users/vishal-burman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishal-burman/subscriptions", "organizations_url": "https://api.github.com/users/vishal-burman/orgs", "repos_url": "https://api.github.com/users/vishal-burman/repos", "events_url": "https://api.github.com/users/vishal-burman/events{/privacy}", "received_events_url": "https://api.github.com/users/vishal-burman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I get the same error. It was fixed some days ago, but again it appears", "Hi @mrm8488 it's working again today without any fix so I am closing this issue.", "I see the issue happening again today - \r\n\r\n[nltk_data] Downloading package stopwords to /root/nltk_data...\r\n[nltk_data] Package stopwords is already up-to-date!\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nNotADirectoryError Traceback (most recent call last)\r\n\r\n<ipython-input-9-cd4bf8bea840> in <module>()\r\n 22 \r\n 23 \r\n---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')\r\n 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')\r\n 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')\r\n\r\n5 frames\r\n\r\n/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)\r\n 132 else:\r\n 133 logging.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 134 files = sorted(os.listdir(top_dir))\r\n 135 \r\n 136 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n\r\nCan someone please take a look ?", "Sometimes happens. Try in a while", "It is working now, thank you. " ]
1,605,940,245,000
1,606,993,455,000
1,606,047,485,000
NONE
null
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/873/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/873/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/871/comments
https://api.github.com/repos/huggingface/datasets/issues/871/events
https://github.com/huggingface/datasets/issues/871
747,470,136
MDU6SXNzdWU3NDc0NzAxMzY=
871
terminate called after throwing an instance of 'google::protobuf::FatalException'
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. \r\nMaybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)", "closing now, figured out this is because the max length of decoder was set smaller than the input_dimensions. thanks " ]
1,605,876,984,000
1,607,807,792,000
1,607,807,792,000
CONTRIBUTOR
null
Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 63/63 [02:47<00:00, 2.18s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): run_t5_base_eval.sh: line 19: 5795 Aborted
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/871/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/870/comments
https://api.github.com/repos/huggingface/datasets/issues/870/events
https://github.com/huggingface/datasets/issues/870
747,021,996
MDU6SXNzdWU3NDcwMjE5OTY=
870
[Feature Request] Add optional parameter in text loading script to preserve linebreaks
{ "login": "jncasey", "id": 31020859, "node_id": "MDQ6VXNlcjMxMDIwODU5", "avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jncasey", "html_url": "https://github.com/jncasey", "followers_url": "https://api.github.com/users/jncasey/followers", "following_url": "https://api.github.com/users/jncasey/following{/other_user}", "gists_url": "https://api.github.com/users/jncasey/gists{/gist_id}", "starred_url": "https://api.github.com/users/jncasey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jncasey/subscriptions", "organizations_url": "https://api.github.com/users/jncasey/orgs", "repos_url": "https://api.github.com/users/jncasey/repos", "events_url": "https://api.github.com/users/jncasey/events{/privacy}", "received_events_url": "https://api.github.com/users/jncasey/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! Thanks for your message.\r\nIndeed it's a free feature we can add and that can be useful.\r\nIf you want to contribute, feel free to open a PR to add it to the text dataset script :)" ]
1,605,829,891,000
1,606,484,891,000
null
NONE
null
I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. But the first time I processed all of my data into a dataset, I hadn't realized the text loader script was processing the source files line-by-line and stripping off the newlines. Once I caught the issue, I made my own data loader by modifying one line in the default text loader (changing `batch = batch.splitlines()` to `batch = batch.splitlines(True)` inside `_generate_tables`). And so I'm all set as far as my project is concerned. But if my use case is more general, it seems like it'd be pretty trivial to add a kwarg to the default text loader called keeplinebreaks or something, which would default to False and get passed to `splitlines()`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/870/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/866/comments
https://api.github.com/repos/huggingface/datasets/issues/866/events
https://github.com/huggingface/datasets/issues/866
745,719,222
MDU6SXNzdWU3NDU3MTkyMjI=
866
OSCAR from Inria group
{ "login": "jchwenger", "id": 34098722, "node_id": "MDQ6VXNlcjM0MDk4NzIy", "avatar_url": "https://avatars.githubusercontent.com/u/34098722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jchwenger", "html_url": "https://github.com/jchwenger", "followers_url": "https://api.github.com/users/jchwenger/followers", "following_url": "https://api.github.com/users/jchwenger/following{/other_user}", "gists_url": "https://api.github.com/users/jchwenger/gists{/gist_id}", "starred_url": "https://api.github.com/users/jchwenger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jchwenger/subscriptions", "organizations_url": "https://api.github.com/users/jchwenger/orgs", "repos_url": "https://api.github.com/users/jchwenger/repos", "events_url": "https://api.github.com/users/jchwenger/events{/privacy}", "received_events_url": "https://api.github.com/users/jchwenger/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "PR is already open here : #348 \r\nThe only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled).\r\nAs soon as #863 is merged we can start computing them. This will take a bit of time though", "Grand, thanks for this!" ]
1,605,710,454,000
1,605,711,690,000
1,605,711,690,000
NONE
null
## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.* - **Paper:** *[here](https://hal.inria.fr/hal-02148693)* - **Data:** *[here](https://oscar-corpus.com/)* - **Motivation:** *useful for unsupervised tasks in separate languages. In an ideal world, your team would be able to obtain the unshuffled version, that could be used to train GPT-2-like models (the shuffled version, I suppose, could be used for translation).* I am aware that you do offer the "colossal" Common Crawl dataset already, but this has the advantage to be available in many subcorpora for different languages.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/866/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/865/comments
https://api.github.com/repos/huggingface/datasets/issues/865/events
https://github.com/huggingface/datasets/issues/865
745,430,497
MDU6SXNzdWU3NDU0MzA0OTc=
865
Have Trouble importing `datasets`
{ "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users/forest1988/followers", "following_url": "https://api.github.com/users/forest1988/following{/other_user}", "gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forest1988/subscriptions", "organizations_url": "https://api.github.com/users/forest1988/orgs", "repos_url": "https://api.github.com/users/forest1988/repos", "events_url": "https://api.github.com/users/forest1988/events{/privacy}", "received_events_url": "https://api.github.com/users/forest1988/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I'm sorry, this was a problem with my environment.\r\nNow that I have identified the cause of environmental dependency, I would like to fix it and try it.\r\nExcuse me for making a noise." ]
1,605,686,681,000
1,605,687,395,000
1,605,687,395,000
CONTRIBUTOR
null
I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets. I cloned the newest version of datasets (master branch), and do `pip install -e .`. Then, `import datasets` causes the error below. ``` ~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in <module> 116 sys.path.append(str(HF_MODULES_CACHE)) 117 --> 118 os.makedirs(HF_MODULES_CACHE, exist_ok=True) 119 if not os.path.exists(os.path.join(HF_MODULES_CACHE, "__init__.py")): 120 with open(os.path.join(HF_MODULES_CACHE, "__init__.py"), "w"): ~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/os.py in makedirs(name, mode, exist_ok) 221 return 222 try: --> 223 mkdir(name, mode) 224 except OSError: 225 # Cannot rely on checking for EEXIST, since the operating system FileNotFoundError: [Errno 2] No such file or directory: '<MY_HOME_DIRECTORY>/.cache/huggingface/modules' ``` The error occurs in `os.makedirs` in `file_utils.py`, even though `exist_ok = True` option is set. (I use Python 3.8, so `exist_ok` is expected to work.) I've checked some environment variables, and they are set as below. ``` *** NameError: name 'HF_MODULES_CACHE' is not defined *** NameError: name 'hf_cache_home' is not defined *** NameError: name 'XDG_CACHE_HOME' is not defined ``` Should I set some environment variables before using this library? And, do you have any idea why "No such file or directory" occurs even though the `exist_ok = True` option is set? Thank you in advance.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/865/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/864/comments
https://api.github.com/repos/huggingface/datasets/issues/864/events
https://github.com/huggingface/datasets/issues/864
745,322,357
MDU6SXNzdWU3NDUzMjIzNTc=
864
Unable to download cnn_dailymail dataset
{ "login": "rohitashwa1907", "id": 46031058, "node_id": "MDQ6VXNlcjQ2MDMxMDU4", "avatar_url": "https://avatars.githubusercontent.com/u/46031058?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohitashwa1907", "html_url": "https://github.com/rohitashwa1907", "followers_url": "https://api.github.com/users/rohitashwa1907/followers", "following_url": "https://api.github.com/users/rohitashwa1907/following{/other_user}", "gists_url": "https://api.github.com/users/rohitashwa1907/gists{/gist_id}", "starred_url": "https://api.github.com/users/rohitashwa1907/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohitashwa1907/subscriptions", "organizations_url": "https://api.github.com/users/rohitashwa1907/orgs", "repos_url": "https://api.github.com/users/rohitashwa1907/repos", "events_url": "https://api.github.com/users/rohitashwa1907/events{/privacy}", "received_events_url": "https://api.github.com/users/rohitashwa1907/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Same error here!\r\n", "Same here! My kaggle notebook stopped working like yesterday. It's strange because I have fixed version of datasets==1.1.2", "I'm looking at it right now", "I couldn't reproduce unfortunately. I tried\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"cnn_dailymail\", \"3.0.0\", download_mode=\"force_redownload\")\r\n```\r\nand it worked fine on both my env (python 3.7.2) and colab (python 3.6.9)\r\n\r\nMaybe there was an issue with the google drive download link of the dataset ?\r\nAre you still having the issue ? If so could your give me more info about your python and requests version ?", "No, It's working fine now. Very strange. Here are my python and request versions\r\n\r\nrequests 2.24.0\r\nPython 3.8.2", "It's working as expected. Closing the issue \r\n\r\nThanks everybody." ]
1,605,674,282,000
1,605,849,731,000
1,605,849,730,000
NONE
null
### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-8-47c39c228935> in <module>() 1 from datasets import load_dataset 2 ----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') 4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 469 if not downloaded_from_gcs: 470 self._download_and_prepare( --> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 472 ) 473 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 524 split_dict = SplitDict(dataset_name=self.name) 525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 527 528 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Thanks for any suggestions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/864/timeline
null
null
null
false
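The thread above suggests the cached Google Drive download was incomplete or corrupted; a small sketch of the maintainer's suggested check, forcing a fresh download, could look like this (the split sizes are just the ones from the report):

```python
from datasets import load_dataset

# Re-download the archives in case a previous partial download left an invalid
# .../cnn/stories directory in the cache.
train_dataset = load_dataset(
    "cnn_dailymail", "3.0.0", split="train[:10%]", download_mode="force_redownload"
)
valid_dataset = load_dataset("cnn_dailymail", "3.0.0", split="validation[:5%]")
print(train_dataset)
```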
https://api.github.com/repos/huggingface/datasets/issues/861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/861/comments
https://api.github.com/repos/huggingface/datasets/issues/861/events
https://github.com/huggingface/datasets/issues/861
744,753,458
MDU6SXNzdWU3NDQ3NTM0NTg=
861
Possible Bug: Small training/dataset file creates gigantic output
{ "login": "NebelAI", "id": 7240417, "node_id": "MDQ6VXNlcjcyNDA0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7240417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NebelAI", "html_url": "https://github.com/NebelAI", "followers_url": "https://api.github.com/users/NebelAI/followers", "following_url": "https://api.github.com/users/NebelAI/following{/other_user}", "gists_url": "https://api.github.com/users/NebelAI/gists{/gist_id}", "starred_url": "https://api.github.com/users/NebelAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NebelAI/subscriptions", "organizations_url": "https://api.github.com/users/NebelAI/orgs", "repos_url": "https://api.github.com/users/NebelAI/repos", "events_url": "https://api.github.com/users/NebelAI/events{/privacy}", "received_events_url": "https://api.github.com/users/NebelAI/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "The preprocessing tokenizes the input text. Tokenization outputs `input_ids`, `attention_mask`, `token_type_ids` and `special_tokens_mask`. All those are of length`max_seq_length` because of padding. Therefore for each sample it generate 4 *`max_seq_length` integers. Currently they're all saved as int64. This is why the tokenization takes so much space.\r\n\r\nI'm sure we can optimize that though\r\nWhat do you think @sgugger ?", "First I think we should disable padding in the dataset processing and let the data collator do it.\r\n\r\nThen I'm wondering if you need attention_mask and token_type_ids at this point ?\r\n\r\nFinally we can also specify the output feature types at this line https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py#L280 to use more optimized integer precisions for the output. Maybe something like:\r\n- input_ids: uint16 or uint32\r\n- token_type_ids: uint8 or bool\r\n- attention_mask: bool\r\n- special_tokens_mask: bool\r\n\r\nAlso IMO these changes are all on the `transformers` side. Maybe we should discuss on the `transformers` repo", "> First I think we should disable padding in the dataset processing and let the data collator do it.\r\n\r\nNo, you can't do that on TPUs as dynamic shapes will result in a very slow training. The script can however be tweaked to use the `PaddingDataCollator` with a fixed max length instead of dynamic batching.\r\n\r\nFor the other optimizations, they can be done by changing the script directly for each user's use case. Not sure we can find something that is general enough to be in transformers or the examples script.", "Oh yes right..\r\nDo you think that a lazy map feature on the `datasets` side could help to avoid storing padded tokenized texts then ?", "I think I can do the tweak mentioned above with the data collator as short fix (but fully focused on v4 right now so that will be for later this week, beginning of next week :-) ).\r\nIf it doesn't hurt performance to tokenize on the fly, that would clearly be the long-term solution however!", "> Hey guys,\r\n> \r\n> I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely.\r\n> \r\n> I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug?\r\n> \r\n> I've used the following CMD:\r\n> `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`\r\n\r\nIt's actually because of the parameter 'preprocessing_num_worker' when using TPU. \r\nI am also planning to have my model trained on the google TPU with a 11gb text corpus. With x8 cores enabled, each TPU core has its own dataset. 
When not using distributed training, the preprocessed file is about 77gb. On the opposite, if enable xla, the file produced will easily consume all my free space(more than 220gb, I think it will be, in the end, around 600gb ). \r\nSo I think that's maybe where the problem came from. \r\n\r\nIs there any possibility that all of the cores share the same preprocess dataset?\r\n\r\n@sgugger @RammMaschine ", "Hi @NebelAI, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs." ]
1,605,620,939,000
1,617,113,044,000
1,616,414,695,000
NONE
null
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file, but I can't even finish the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB during processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/861/timeline
null
null
null
false
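One of the optimizations discussed above is to store the tokenized columns with smaller integer types instead of the default int64. This is a sketch of that idea using the `features` argument of `map`, not the actual change to `run_mlm.py`; the tiny in-memory dataset and the tokenizer name are only there to make it self-contained:

```python
from datasets import Dataset, Features, Sequence, Value
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # example tokenizer
raw_dataset = Dataset.from_dict({"text": ["a tiny example sentence", "another one"]})

def tokenize_function(batch):
    # Fixed-length padding as in the TPU script, plus the special tokens mask.
    return tokenizer(
        batch["text"],
        padding="max_length",
        truncation=True,
        max_length=128,
        return_special_tokens_mask=True,
    )

# Declare narrower dtypes for the output columns; int64 is what makes the cache
# grow to roughly 4 * max_seq_length * 8 bytes per example.
small_features = Features(
    {
        "text": Value("string"),
        "input_ids": Sequence(Value("int32")),
        "token_type_ids": Sequence(Value("int8")),
        "attention_mask": Sequence(Value("int8")),
        "special_tokens_mask": Sequence(Value("int8")),
    }
)

tokenized = raw_dataset.map(tokenize_function, batched=True, features=small_features)
print(tokenized.features)
```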
https://api.github.com/repos/huggingface/datasets/issues/860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/860/comments
https://api.github.com/repos/huggingface/datasets/issues/860/events
https://github.com/huggingface/datasets/issues/860
744,750,691
MDU6SXNzdWU3NDQ3NTA2OTE=
860
wmt16 cs-en does not download
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[]
1,605,620,735,000
1,606,484,824,000
null
CONTRIBUTOR
null
Hi I am trying with wmt16, cs-en pair, thanks for the help, perhaps similar to the ro-en issue. thanks split="train", n_obs=data_args.n_train) for task in data_args.task} File "finetune_t5_trainer.py", line 109, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/home/rabeeh/internship/seq2seq/tasks/tasks.py", line 82, in get_dataset dataset = load_dataset("wmt16", self.pair, split=split) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/rabeeh/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/860/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/854/comments
https://api.github.com/repos/huggingface/datasets/issues/854/events
https://github.com/huggingface/datasets/issues/854
743,675,376
MDU6SXNzdWU3NDM2NzUzNzY=
854
wmt16 does not download
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "Hi,I also posted it to the forum, but this is a bug, perhaps it needs to be reported here? thanks ", "It looks like the official OPUS server for WMT16 doesn't provide the data files anymore (503 error).\r\nI searched a bit and couldn't find a mirror except maybe http://nlp.ffzg.hr/resources/corpora/setimes/ (the data are a cleaned version of the original ones though)\r\nShould we consider replacing the old urls with these ones even though it's not the exact same data ?", "The data storage is down at the moment. Sorry. Hopefully, it will come back soon. Apologies for the inconvenience ...", "Dear great huggingface team, this is not working yet, I really appreciate some temporary fix on this, I need this for my project and this is time sensitive and I will be grateful for your help on this. ", "We have reached out to the OPUS team which is currently working on making the data available again. Cc @jorgtied ", "thank you @thomwolf and HuggingFace team for the help. ", "OPUS is still down - hopefully back tomorrow.", "Hi, this is still down, I would be really grateful if you could ping them one more time. thank you so much. ", "Hi\r\nI am trying with multiple setting of wmt datasets and all failed so far, I need to have at least one dataset working for testing somecodes, and this is really time sensitive, I greatly appreciate letting me know of one translation datasets currently working. thanks ", "It is still down, unfortunately. I'm sorry for that. It should come up again later today or tomorrow at the latest if no additional complications will happen.", "Hi all, \r\nI pulled a request that fix this issue by replacing urls. \r\n\r\nhttps://github.com/huggingface/datasets/pull/1901\r\n\r\nThanks!\r\n", "It's still down for the wmt." ]
1,605,519,111,000
1,614,222,909,000
null
CONTRIBUTOR
null
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/854/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/853/comments
https://api.github.com/repos/huggingface/datasets/issues/853/events
https://github.com/huggingface/datasets/issues/853
743,426,583
MDU6SXNzdWU3NDM0MjY1ODM=
853
concatenate_datasets support axis=0 or 1 ?
{ "login": "renqingcolin", "id": 12437751, "node_id": "MDQ6VXNlcjEyNDM3NzUx", "avatar_url": "https://avatars.githubusercontent.com/u/12437751?v=4", "gravatar_id": "", "url": "https://api.github.com/users/renqingcolin", "html_url": "https://github.com/renqingcolin", "followers_url": "https://api.github.com/users/renqingcolin/followers", "following_url": "https://api.github.com/users/renqingcolin/following{/other_user}", "gists_url": "https://api.github.com/users/renqingcolin/gists{/gist_id}", "starred_url": "https://api.github.com/users/renqingcolin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/renqingcolin/subscriptions", "organizations_url": "https://api.github.com/users/renqingcolin/orgs", "repos_url": "https://api.github.com/users/renqingcolin/repos", "events_url": "https://api.github.com/users/renqingcolin/events{/privacy}", "received_events_url": "https://api.github.com/users/renqingcolin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892884, "node_id": "MDU6TGFiZWwxOTM1ODkyODg0", "url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted", "name": "help wanted", "color": "008672", "default": true, "description": "Extra attention is needed" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate the columns.\r\nCurrently to add more columns to a dataset, one must use `map`.\r\nWhat you can do is somehting like this:\r\n```python\r\n# suppose you have datasets d1, d2, d3\r\ndef add_columns(example, index):\r\n example.update(d2[index])\r\n example.update(d3[index])\r\n return example\r\n\r\nfull_dataset = d1.map(add_columns, with_indices=True)\r\n```", "Closing this one, feel free to re-open if you have other questions about this issue", "That's not really difficult to add, though, no?\r\nI think it can be done without copy.\r\nMaybe let's add it to the roadmap?", "Actually it's doable but requires to update the `Dataset._data_files` schema to support this.\r\nI'm re-opening this since we may want to add this in the future", "Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `concatenate_datasets` function in `arrow_dataset.py` and when that is set to 1 concatenate columns instead of rows. ", "Hi ! I would love to see this feature implemented as well :) Thank you for proposing your help !\r\n\r\nHere is a few things about the current implementation:\r\n- A dataset object is a wrapper of one `pyarrow.Table` that contains the data\r\n- Pyarrow offers an API that allows to transform Table objects. For example there are functions like `concat_tables`, `Table.rename_columns`, `Table.add_column` etc.\r\n\r\nTherefore adding columns from another dataset is possible thanks to the pyarrow API and in particular `Table.add_column` :) \r\n\r\nHowever this breaks some features we have regarding pickle. A dataset object can be pickled and unpickled without loading all the data in memory. It is useful for multiprocessing for example. Pickling a dataset object is possible thanks to the `Dataset._data_files` which defines the list of arrow files that will be used to form the final Table (basically all the data from each files are concatenated on axis 0).\r\n\r\nTherefore to be able to add columns to a Dataset and still be able to work with it in a multiprocessing setup, we need to extend this last aspect to be able to reconstruct a Table object from multiple arrow files that are combined in both axis 0 and 1. Currently this reconstruction mechanism only supports axis 0.\r\n\r\nI'm sure we can figure something out that enables users to add columns from another dataset while keeping the multiprocessing support.", "@lhoestq, we have two Pull Requests to implement:\r\n- Dataset.add_item: #1870\r\n- Dataset.add_column: #2145\r\nwhich add a single row or column, repectively.\r\n\r\nThe request here is to implement the concatenation of *multiple* rows/columns. Am I right?\r\n\r\nWe should agree on the API:\r\n- `concatenate_datasets` with `axis`?\r\n- other Dataset method name?", "For the API, I like `concatenate_datasets` with `axis` personally :)\r\nFrom a list of `Dataset` objects, it would concatenate them to a new `Dataset` object backed by a `ConcatenationTable`, that is the concatenation of the tables of each input dataset. 
The concatenation is either on axis=0 (append rows) or on axis=1 (append columns).\r\n\r\nRegarding what we need to implement:\r\nThe axis=0 is already supported and is the current behavior of `concatenate_datasets`.\r\nAlso `add_item` is not needed to implement axis=1 (though it's an awesome addition to this library).\r\n\r\nTo implement axis=1, we either need `add_column` or a `ConcatenationTable` constructor to concatenate tables horizontally.\r\nI have a preference for using a `ConcatenationTable` constructor because this way we can end up with a `ConcatenationTable` with only 1 additional block per table, while `add_column` would add 1 block per new column.\r\n\r\nMaybe we can simply have an equivalent of `ConcatenationTable.from_tables` but for axis=1 ?\r\n`axis` could also be an argument of `ConcatenationTable.from_tables`", "@lhoestq I think I guessed your suggestions in advance... 😉 #2151", "Cool ! Sorry I missed this one ^^\r\nI'm taking a look ;)" ]
1,605,494,783,000
1,618,848,438,000
1,618,848,438,000
NONE
null
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/853/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/853/timeline
null
null
null
false
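For reference, the column-wise concatenation requested above is available in recent versions of `datasets` via the `axis` argument, and the `map`-based workaround quoted in the thread still works on older versions. A small sketch:

```python
from datasets import Dataset, concatenate_datasets

d1 = Dataset.from_dict({"a": [1, 2, 3]})
d2 = Dataset.from_dict({"b": ["x", "y", "z"]})

# Column-wise concatenation (requires a datasets version that supports axis=1).
wide = concatenate_datasets([d1, d2], axis=1)
print(wide.column_names)  # ['a', 'b']

# Workaround from the thread for versions without axis support: add columns via map.
def add_columns(example, index):
    example.update(d2[index])
    return example

wide_old = d1.map(add_columns, with_indices=True)
print(wide_old.column_names)  # ['a', 'b']
```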
https://api.github.com/repos/huggingface/datasets/issues/852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/852/comments
https://api.github.com/repos/huggingface/datasets/issues/852/events
https://github.com/huggingface/datasets/issues/852
743,396,240
MDU6SXNzdWU3NDMzOTYyNDA=
852
wmt cannot be downloaded
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,605,488,681,000
1,605,519,118,000
1,605,519,118,000
CONTRIBUTOR
null
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/852/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/851
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/851/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/851/comments
https://api.github.com/repos/huggingface/datasets/issues/851/events
https://github.com/huggingface/datasets/issues/851
743,343,278
MDU6SXNzdWU3NDMzNDMyNzg=
851
Add support for other languages for rouge
{ "login": "alexyalunin", "id": 23011284, "node_id": "MDQ6VXNlcjIzMDExMjg0", "avatar_url": "https://avatars.githubusercontent.com/u/23011284?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexyalunin", "html_url": "https://github.com/alexyalunin", "followers_url": "https://api.github.com/users/alexyalunin/followers", "following_url": "https://api.github.com/users/alexyalunin/following{/other_user}", "gists_url": "https://api.github.com/users/alexyalunin/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexyalunin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexyalunin/subscriptions", "organizations_url": "https://api.github.com/users/alexyalunin/orgs", "repos_url": "https://api.github.com/users/alexyalunin/repos", "events_url": "https://api.github.com/users/alexyalunin/events{/privacy}", "received_events_url": "https://api.github.com/users/alexyalunin/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400959, "node_id": "MDU6TGFiZWwyMDY3NDAwOTU5", "url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion", "name": "Metric discussion", "color": "d722e8", "default": false, "description": "Discussions on the metrics" } ]
open
false
null
[]
null
[ "@alexyalunin \r\n\r\nI did something similar for others languages.\r\n\r\n[Repo: rouge-metric](https://github.com/m3hrdadfi/rouge-metric)" ]
1,605,473,865,000
1,622,970,472,000
null
NONE
null
I calculate ROUGE with ``` from datasets import load_metric rouge = load_metric("rouge") rouge_output = rouge.compute(predictions=['тест тест привет'], references=['тест тест пока'], rouge_types=[ "rouge2"])["rouge2"].mid print(rouge_output) ``` The result is `Score(precision=0.0, recall=0.0, fmeasure=0.0)`. It seems like the `rouge_score` library that this metric uses filters out all characters that are not lowercase Latin letters or digits in `rouge_scorer/tokenize.py` with `text = re.sub(r"[^a-z0-9]+", " ", six.ensure_str(text))`. Please add support for other languages.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/851/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/851/timeline
null
null
null
false
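A minimal sketch of why the scores collapse to zero for non-Latin text, approximating the `rouge_score` tokenization step quoted in the issue:

```python
import re

def default_tokenize(text: str):
    # Approximation of rouge_scorer/tokenize.py: lowercase, then keep only
    # ASCII letters and digits, so non-Latin scripts are stripped entirely.
    text = re.sub(r"[^a-z0-9]+", " ", text.lower())
    return text.split()

print(default_tokenize("тест тест привет"))  # [] -> every ROUGE score becomes 0.0
print(default_tokenize("test test hello"))   # ['test', 'test', 'hello']
```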
https://api.github.com/repos/huggingface/datasets/issues/849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/849/comments
https://api.github.com/repos/huggingface/datasets/issues/849/events
https://github.com/huggingface/datasets/issues/849
742,263,333
MDU6SXNzdWU3NDIyNjMzMzM=
849
Load amazon dataset
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nWe plan to show information about the different configs of the datasets on the website, with the corresponding `load_dataset` calls.\r\n\r\nAlso I think the bullet points formatting has been fixed" ]
1,605,256,464,000
1,605,597,779,000
1,605,597,779,000
CONTRIBUTOR
null
Hi, I was going through the amazon_us_reviews dataset and found that the example API usage given on the website is different from the API usage actually needed to load the dataset. E.g. this is the API usage shown on the [website](https://huggingface.co/datasets/amazon_us_reviews) ``` from datasets import load_dataset dataset = load_dataset("amazon_us_reviews") ``` and this is what worked when I tried it (the generated error does point me in the right direction, though) ``` from datasets import load_dataset dataset = load_dataset("amazon_us_reviews", 'Books_v1_00') ``` Also, there is a formatting issue: the bullet list in the description is not rendered with new lines. Can I work on it?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/849/timeline
null
null
null
false
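On the discoverability point raised above, recent versions of `datasets` expose a helper that lists a dataset's configs, so the required second argument can be found without triggering the error first; whether it is available depends on the installed version:

```python
import datasets

# List the available configs for amazon_us_reviews (e.g. 'Books_v1_00').
configs = datasets.get_dataset_config_names("amazon_us_reviews")
print(configs[:5])

# Then load one of them explicitly, as in the issue above.
dataset = datasets.load_dataset("amazon_us_reviews", "Books_v1_00")
```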
https://api.github.com/repos/huggingface/datasets/issues/848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/848/comments
https://api.github.com/repos/huggingface/datasets/issues/848/events
https://github.com/huggingface/datasets/issues/848
742,240,942
MDU6SXNzdWU3NDIyNDA5NDI=
848
Error when calling concatenate_datasets
{ "login": "shexuan", "id": 25664170, "node_id": "MDQ6VXNlcjI1NjY0MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shexuan", "html_url": "https://github.com/shexuan", "followers_url": "https://api.github.com/users/shexuan/followers", "following_url": "https://api.github.com/users/shexuan/following{/other_user}", "gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shexuan/subscriptions", "organizations_url": "https://api.github.com/users/shexuan/orgs", "repos_url": "https://api.github.com/users/shexuan/repos", "events_url": "https://api.github.com/users/shexuan/events{/privacy}", "received_events_url": "https://api.github.com/users/shexuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.\r\n\r\nThe indices mapping correspond to a mapping on top of the data table that is used to re-order/select a sample of the original data table. For example if you do `dataset.train_test_split`, then the resulting train and test datasets will have both an indices mapping to tell which examples are in train and which ones in test.\r\n\r\nBefore saving your datasets on disk, you should call `dataset.flatten_indices()` to remove the indices mapping. It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets.\r\n", "> As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.\r\n> \r\n> The indices mapping correspond to a mapping on top of the data table that is used to re-order/select a sample of the original data table. For example if you do `dataset.train_test_split`, then the resulting train and test datasets will have both an indices mapping to tell which examples are in train and which ones in test.\r\n> \r\n> Before saving your datasets on disk, you should call `dataset.flatten_indices()` to remove the indices mapping. It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets.\r\n\r\n`dataset.flatten_indices()` solved my problem, thanks so much!", "@lhoestq we can add a mention of `dataset.flatten_indices()` in the error message (no rush, just put it on your TODO list or I can do it when I come at it)", "Yup I agree ! And in the docs as well" ]
1,605,254,162,000
1,605,289,259,000
1,605,282,910,000
NONE
null
Hello, when I concatenated two datasets loaded from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported the ValueError below: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-38-74fa525512ca> in <module> ----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset]) /opt/miniconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py in concatenate_datasets(dsets, info, split) 2547 "However datasets' indices {} come from memory and datasets' indices {} come from disk.".format( 2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]], -> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]], 2550 ) 2551 ) ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` But it's curious, since both of my datasets were loaded from disk, so I checked the source code in `arrow_dataset.py` around the error: ``` trn_dataset._data_files # output [{'filename': 'data/train_dataset/csv-train.arrow', 'skip': 0, 'take': 593264}] test_dataset._data_files # output [{'filename': 'data/test_dataset/csv-test.arrow', 'skip': 0, 'take': 424383}] print([not dset._data_files for dset in [trn_dataset, test_dataset]]) # [False, False] # And I tested the code the same as arrow_dataset, but nothing happened dsets = [trn_dataset, test_dataset] dsets_in_memory = [not dset._data_files for dset in dsets] if any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory): raise ValueError( "Datasets should ALL come from memory, or should ALL come from disk.\n" "However datasets {} come from memory and datasets {} come from disk.".format( [i for i in range(len(dsets)) if dsets_in_memory[i]], [i for i in range(len(dsets)) if not dsets_in_memory[i]], ) ) ``` Any suggestions would be greatly appreciated! Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/848/timeline
null
null
null
false
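A compact sketch of the fix suggested in the thread: call `flatten_indices()` before `save_to_disk()` so that the reloaded datasets carry no indices mapping and can be concatenated. The paths are hypothetical:

```python
from datasets import Dataset, concatenate_datasets, load_from_disk

ds = Dataset.from_dict({"x": list(range(10))})
splits = ds.train_test_split(test_size=0.3)  # train_test_split creates an indices mapping

# Materialize the indices mapping into a plain table before saving.
splits["train"].flatten_indices().save_to_disk("data/train_dataset")
splits["test"].flatten_indices().save_to_disk("data/test_dataset")

trn_dataset = load_from_disk("data/train_dataset")
test_dataset = load_from_disk("data/test_dataset")
full = concatenate_datasets([trn_dataset, test_dataset])
print(len(full))  # 10
```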
https://api.github.com/repos/huggingface/datasets/issues/847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/847/comments
https://api.github.com/repos/huggingface/datasets/issues/847/events
https://github.com/huggingface/datasets/issues/847
742,179,495
MDU6SXNzdWU3NDIxNzk0OTU=
847
multiprocessing in dataset map "can only test a child process"
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "It looks like an issue with wandb/tqdm here.\r\nWe're using the `multiprocess` library instead of the `multiprocessing` builtin python package to support various types of mapping functions. Maybe there's some sort of incompatibility.\r\n\r\nCould you make a minimal script to reproduce or a google colab ?", "hi facing the same issue here - \r\n\r\n`AssertionError: Caught AssertionError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 996, in emit\r\n stream.write(msg)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py\", line 723, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 158, in _publish_output\r\n self._publish(rec)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"/usr/lib/python3.6/multiprocessing/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"<ipython-input-8-a4d9a08d114e>\", line 20, in __getitem__\r\n return_token_type_ids=True\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\", line 2405, in encode_plus\r\n **kwargs,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\", line 2125, in _get_padding_truncation_strategies\r\n \"Truncation was not explicitly activated but `max_length` is provided a specific value, \"\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1320, in warning\r\n self._log(WARNING, msg, args, **kwargs)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1444, in _log\r\n self.handle(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1454, in handle\r\n self.callHandlers(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1516, in callHandlers\r\n hdlr.handle(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 865, in handle\r\n self.emit(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1000, in emit\r\n self.handleError(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 917, in handleError\r\n sys.stderr.write('--- Logging error ---\\n')\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py\", line 723, in _console_callback\r\n 
self._backend.interface.publish_output(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 158, in _publish_output\r\n self._publish(rec)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"/usr/lib/python3.6/multiprocessing/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process`\r\n", "It looks like this warning : \r\n\"Truncation was not explicitly activated but max_length is provided a specific value, \"\r\nis not handled well by wandb.\r\n\r\nThe error occurs when calling the tokenizer.\r\nMaybe you can try to specify `truncation=True` when calling the tokenizer to remove the warning ?\r\nOtherwise I don't know why wandb would fail on a warning. Maybe one of its logging handlers have some issues with the logging of tokenizers. Maybe @n1t0 knows more about this ?", "I'm having a similar issue but when I try to do multiprocessing with the `DataLoader`\r\n\r\nCode to reproduce:\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]')\r\nbook_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=5000)\r\nbook_corpus.set_format(type='torch', columns=['text', \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\nfrom transformers import DataCollatorForWholeWordMask\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\ndata_collator = DataCollatorForWholeWordMask(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./mobile_linear_att_8L_128_128_03layerdrop_shared\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=64,\r\n save_steps=50,\r\n save_total_limit=2,\r\n logging_first_step=True,\r\n warmup_steps=100,\r\n logging_steps=50,\r\n gradient_accumulation_steps=1,\r\n fp16=True,\r\n **dataloader_num_workers=10**,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=book_corpus,\r\n tokenizer=tokenizer)\r\n\r\ntrainer.train()\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<timed eval> in <module>\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path, trial)\r\n 869 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)\r\n 870 \r\n--> 871 for step, inputs in enumerate(epoch_iterator):\r\n 872 \r\n 873 # Skip past any already trained steps if resuming training\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)\r\n 433 if self._sampler_iter is None:\r\n 434 self._reset()\r\n--> 435 data = self._next_data()\r\n 436 self._num_yielded += 1\r\n 437 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _next_data(self)\r\n 1083 else:\r\n 1084 del self._task_info[idx]\r\n-> 1085 return self._process_data(data)\r\n 
1086 \r\n 1087 def _try_put_index(self):\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_data(self, data)\r\n 1109 self._try_put_index()\r\n 1110 if isinstance(data, ExceptionWrapper):\r\n-> 1111 data.reraise()\r\n 1112 return data\r\n 1113 \r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/_utils.py in reraise(self)\r\n 426 # have message field\r\n 427 raise self.exc_type(message=msg)\r\n--> 428 raise self.exc_type(msg)\r\n 429 \r\n 430 \r\n\r\nAssertionError: Caught AssertionError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1087, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1074, in _getitem\r\n format_kwargs=format_kwargs,\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 890, in _convert_outputs\r\n v = map_nested(command, v, **map_nested_kwargs)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/utils/py_utils.py\", line 225, in map_nested\r\n return function(data_struct)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 851, in command\r\n return torch.tensor(x, **format_kwargs)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py\", line 101, in _showwarnmsg\r\n _showwarnmsg_impl(msg)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py\", line 30, in _showwarnmsg_impl\r\n file.write(text)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/wandb_run.py\", line 723, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 158, in _publish_output\r\n self._publish(rec)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/multiprocessing/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process\r\n```\r\n\r\nAs a workaround I have commented line 456 and 457 in `/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py`", "Isn't it more the pytorch warning on the use of non-writable memory for tensor that trigger this here @lhoestq? 
(since it seems to be a warning triggered in `torch.tensor()`", "Yep this time this is a warning from pytorch that causes wandb to not work properly.\r\nCould this by a wandb issue ?", "Hi @timothyjlaurent @gaceladri \r\nIf you're running `transformers` from `master` you can try setting the env var `WAND_DISABLE=true` (from https://github.com/huggingface/transformers/pull/9896) and try again ?\r\nThis issue might be related to https://github.com/huggingface/transformers/issues/9623 ", "I have commented the lines that cause my code break. I'm now seeing my reports on Wandb and my code does not break. I am training now, so I will check probably in 6 hours. I suppose that setting wandb disable will work as well." ]
1,605,247,264,000
1,612,198,408,000
null
NONE
null
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ```
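The comments above point at two independent triggers: a tokenizer warning emitted inside the worker processes and wandb's stdout/stderr redirection, which asserts when touched from a child process. A minimal sketch combining the two suggested mitigations (silencing the warning with `truncation=True` and keeping wandb out of the workers) could look like the following; the env var name follows the current transformers convention rather than the thread's spelling, and "train.txt" is a placeholder, so treat both as assumptions.
```python
# Hedged workaround sketch for the wandb/multiprocessing crash above.
# Assumptions: the WANDB_DISABLED env var is honored by the installed
# transformers version, and "train.txt" stands in for the real text file.
import os

os.environ["WANDB_DISABLED"] = "true"  # keep wandb's console hooks out of child processes

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text_dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]

def tokenizer_fn(example):
    # truncation=True silences the tokenizer warning that wandb's redirect chokes on
    return tokenizer(example["text"], truncation=True, max_length=128)

ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=["text"])
```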
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/847/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/847/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/846/comments
https://api.github.com/repos/huggingface/datasets/issues/846/events
https://github.com/huggingface/datasets/issues/846
741,885,174
MDU6SXNzdWU3NDE4ODUxNzQ=
846
Add HoVer multi-hop fact verification dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Hi @yjernite I'm new but wanted to contribute. Has anyone already taken this problem and do you think it is suitable for newbies?", "Hi @tenjjin! This dataset is still up for grabs! Here's the link with the guide to add it. You should play around with the library first (download and look at a few datasets), then follow the steps here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md", "Closed by #1399 " ]
1,605,210,946,000
1,607,636,853,000
1,607,636,853,000
MEMBER
null
## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction benchmarks (HotpotQA, which this dataset was based on, notwithstanding). Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
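For readers landing here after the request above was resolved (see #1399 in the comments), loading the dataset should then be a one-liner; the "hover" dataset id is an assumption based on the linked PR, to be verified against the hub.
```python
# Hypothetical loading snippet for the requested dataset; the "hover" id is an
# assumption, check the hub before relying on it.
from datasets import load_dataset

hover = load_dataset("hover")
print(hover["train"][0])
```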
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/846/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/843/comments
https://api.github.com/repos/huggingface/datasets/issues/843/events
https://github.com/huggingface/datasets/issues/843
741,531,121
MDU6SXNzdWU3NDE1MzExMjE=
843
use_custom_baseline still produces errors for bertscore
{ "login": "penatbater", "id": 37921244, "node_id": "MDQ6VXNlcjM3OTIxMjQ0", "avatar_url": "https://avatars.githubusercontent.com/u/37921244?v=4", "gravatar_id": "", "url": "https://api.github.com/users/penatbater", "html_url": "https://github.com/penatbater", "followers_url": "https://api.github.com/users/penatbater/followers", "following_url": "https://api.github.com/users/penatbater/following{/other_user}", "gists_url": "https://api.github.com/users/penatbater/gists{/gist_id}", "starred_url": "https://api.github.com/users/penatbater/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/penatbater/subscriptions", "organizations_url": "https://api.github.com/users/penatbater/orgs", "repos_url": "https://api.github.com/users/penatbater/repos", "events_url": "https://api.github.com/users/penatbater/events{/privacy}", "received_events_url": "https://api.github.com/users/penatbater/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
null
[]
null
[ "Thanks for reporting ! That's a bug indeed\r\nIf you want to contribute, feel free to fix this issue and open a PR :)", "This error is because of a mismatch between `datasets` and `bert_score`. With `datasets=1.1.2` and `bert_score>=0.3.6` it works ok. So `pip install -U bert_score` should fix the problem. ", "Thanks for the heads up @pvl and for the PR as well :)", "Hello everyone,\r\n\r\nI think the problem is not solved: \r\n\r\n```\r\nfrom datasets import load_metric\r\nmetric=load_metric('bertscore')\r\nmetric.compute(\r\n predictions=predictions,\r\n references=references,\r\n lang='fr',\r\n rescale_with_baseline=True\r\n)\r\nTypeError: get_hash() missing 2 required positional arguments: 'use_custom_baseline' and 'use_fast_tokenizer'\r\n```\r\nThis code is produced using `Python 3.6.9 datasets==1.1.2 and bert_score==0.3.10`", "Hi ! This has been fixed by https://github.com/huggingface/datasets/pull/2770, we'll do a new release soon to make the fix available :)\r\n\r\nIn the meantime please use an older version of `bert_score`" ]
1,605,181,472,000
1,630,404,404,000
1,612,880,508,000
NONE
null
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'` Adding 'use_custom_baseline = False' as an argument produces this error `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'` This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2
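As the comments note, the traceback comes from a `datasets`/`bert_score` version mismatch rather than from the arguments themselves; with matching versions the original call is expected to work unchanged. A sketch of the intended usage:
```python
# Minimal sketch, assuming `datasets` and `bert_score` releases that agree on
# the get_hash() signature (see the version notes in the comments above).
from datasets import load_metric

metric = load_metric("bertscore")
predictions = ["random sentences"]
references = ["random sentences"]

results = metric.compute(predictions=predictions, references=references, lang="en")
print(results["f1"])  # list with one F1 score per prediction/reference pair
```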
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/843/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/842/comments
https://api.github.com/repos/huggingface/datasets/issues/842/events
https://github.com/huggingface/datasets/issues/842
741,208,428
MDU6SXNzdWU3NDEyMDg0Mjg=
842
How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
{ "login": "shangw-nvidia", "id": 66387198, "node_id": "MDQ6VXNlcjY2Mzg3MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shangw-nvidia", "html_url": "https://github.com/shangw-nvidia", "followers_url": "https://api.github.com/users/shangw-nvidia/followers", "following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}", "gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}", "starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions", "organizations_url": "https://api.github.com/users/shangw-nvidia/orgs", "repos_url": "https://api.github.com/users/shangw-nvidia/repos", "events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}", "received_events_url": "https://api.github.com/users/shangw-nvidia/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Right now multiprocessing only runs on single node.\r\n\r\nHowever it's probably possible to extend it to support multi nodes. Indeed we're using the `multiprocess` library from the `pathos` project to do multiprocessing in `datasets`, and `pathos` is made to support parallelism on several nodes. More info about pathos [on the pathos repo](https://github.com/uqfoundation/pathos).\r\n\r\nIf you're familiar with pathos or if you want to give it a try, it could be a nice addition to the library :)" ]
1,605,146,678,000
1,605,223,707,000
null
NONE
null
Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training (since more than one node would be available), I'm wondering if it's possible to extend the parallel processing across nodes, instead of only one node running the `.map()` while the other nodes wait for it to finish? Thanks!
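Since the built-in `num_proc` only parallelises within one node (see the reply above), a common user-side pattern is to shard the dataset by node rank and let each node `.map()` its own shard. This is a sketch of that workaround, not a built-in multi-node feature, and it assumes RANK/WORLD_SIZE are provided by the launcher.
```python
# User-side sketch: split the work across nodes by sharding on the node rank.
# Assumption: RANK and WORLD_SIZE are set by the launcher (e.g. torch.distributed).
import os
from datasets import load_dataset

rank = int(os.environ.get("RANK", "0"))
world_size = int(os.environ.get("WORLD_SIZE", "1"))

dataset = load_dataset("bookcorpus", split="train")
shard = dataset.shard(num_shards=world_size, index=rank)  # this node's slice only

# Each node then runs the usual single-node multiprocessing on its shard.
shard = shard.map(lambda ex: {"n_chars": len(ex["text"])}, num_proc=4)
```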
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/842/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/841/comments
https://api.github.com/repos/huggingface/datasets/issues/841/events
https://github.com/huggingface/datasets/issues/841
740,737,448
MDU6SXNzdWU3NDA3Mzc0NDg=
841
Can not reuse datasets already downloaded
{ "login": "jc-hou", "id": 30210529, "node_id": "MDQ6VXNlcjMwMjEwNTI5", "avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jc-hou", "html_url": "https://github.com/jc-hou", "followers_url": "https://api.github.com/users/jc-hou/followers", "following_url": "https://api.github.com/users/jc-hou/following{/other_user}", "gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}", "starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions", "organizations_url": "https://api.github.com/users/jc-hou/orgs", "repos_url": "https://api.github.com/users/jc-hou/repos", "events_url": "https://api.github.com/users/jc-hou/events{/privacy}", "received_events_url": "https://api.github.com/users/jc-hou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It seems the process needs '/datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py'\r\nWhere and how to assign this ```wikipedia.py``` after I manually download it ?", "\r\ndownload the ```wikipedia.py``` at the working directory and go with ```dataset = load_dataset('wikipedia.py', '20200501.en')``` works." ]
1,605,098,535,000
1,605,118,636,000
1,605,118,636,000
NONE
null
Hello, I need to connect to a frontal node (with http proxy, no gpu) before connecting to a gpu node (but no http proxy, so can not use wget so on). I successfully downloaded and reuse the wikipedia datasets in a frontal node. When I connect to the gpu node, I supposed to use the downloaded datasets from cache, but failed and end with time out error. On frontal node: ``` >>> from datasets import load_dataset >>> dataset = load_dataset('wikipedia', '20200501.en') Reusing dataset wikipedia (/linkhome/rech/genini01/uua34ms/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/f92599dfccab29832c442b82870fa8f6983e5b4ebbf5e6e2dcbe894e325339cd) /linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.) return torch._C._cuda_getDeviceCount() > 0 ``` On gpu node: ``` >>> from datasets import load_dataset >>> dataset = load_dataset('wikipedia', '20200501.en') Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 160, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 84, in create_connection raise err File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 74, in create_connection sock.connect(sa) TimeoutError: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen chunked=chunked, File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request self._validate_conn(conn) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn conn.connect() File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 309, in connect conn = self._new_conn() File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 172, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 727, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File 
"/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/retry.py", line 446, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 590, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 264, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return requests.head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',)) ``` Any advice?Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/841/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/839/comments
https://api.github.com/repos/huggingface/datasets/issues/839/events
https://github.com/huggingface/datasets/issues/839
740,355,270
MDU6SXNzdWU3NDAzNTUyNzA=
839
XSum dataset missing spaces between sentences
{ "login": "loganlebanoff", "id": 10007282, "node_id": "MDQ6VXNlcjEwMDA3Mjgy", "avatar_url": "https://avatars.githubusercontent.com/u/10007282?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loganlebanoff", "html_url": "https://github.com/loganlebanoff", "followers_url": "https://api.github.com/users/loganlebanoff/followers", "following_url": "https://api.github.com/users/loganlebanoff/following{/other_user}", "gists_url": "https://api.github.com/users/loganlebanoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/loganlebanoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loganlebanoff/subscriptions", "organizations_url": "https://api.github.com/users/loganlebanoff/orgs", "repos_url": "https://api.github.com/users/loganlebanoff/repos", "events_url": "https://api.github.com/users/loganlebanoff/events{/privacy}", "received_events_url": "https://api.github.com/users/loganlebanoff/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,605,054,883,000
1,605,054,883,000
null
NONE
null
I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set): `The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like this morning 'Oh I think you're nominated'", said Dappy."And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!"Bandmate Fazer added: "We thought it's best of us to come down and mingle with everyone and say hello to the cameras. And now we find we've got four nominations."The band have two shots at the best song prize, getting the nod for their Tynchy Stryder collaboration Number One, and single Strong Again.Their album Uncle B will also go up against records by the likes of Beyonce and Kanye West.N-Dubz picked up the best newcomer Mobo in 2007, but female member Tulisa said they wouldn't be too disappointed if they didn't win this time around."At the end of the day we're grateful to be where we are in our careers."If it don't happen then it don't happen - live to fight another day and keep on making albums and hits for the fans."Dappy also revealed they could be performing live several times on the night.The group will be doing Number One and also a possible rendition of the War Child single, I Got Soul.The charity song is a re-working of The Killers' All These Things That I've Done and is set to feature artists like Chipmunk, Ironik and Pixie Lott.This year's Mobos will be held outside of London for the first time, in Glasgow on 30 September.N-Dubz said they were looking forward to performing for their Scottish fans and boasted about their recent shows north of the border."We just done Edinburgh the other day," said Dappy."We smashed up an N-Dubz show over there. We done Aberdeen about three or four months ago - we smashed up that show over there! Everywhere we go we smash it up!"`
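Until the underlying XSum files are fixed, a rough user-side clean-up is to re-insert a space wherever sentence-final punctuation is glued to the next capitalised word. This heuristic sketch will occasionally split things it shouldn't (abbreviations, quoted fragments), so treat it as a band-aid rather than a fix.
```python
# Heuristic band-aid: add a space after ., !, ? or a closing quote when the
# next character is an uppercase letter. Not an official fix for the dataset.
import re

def restore_spaces(text):
    return re.sub(r'([.!?"])(?=[A-Z])', r"\1 ", text)

sample = 'The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like this morning'
print(restore_spaces(sample))
```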
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/839/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/836/comments
https://api.github.com/repos/huggingface/datasets/issues/836/events
https://github.com/huggingface/datasets/issues/836
740,187,613
MDU6SXNzdWU3NDAxODc2MTM=
836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
{ "login": "randubin", "id": 8919490, "node_id": "MDQ6VXNlcjg5MTk0OTA=", "avatar_url": "https://avatars.githubusercontent.com/u/8919490?v=4", "gravatar_id": "", "url": "https://api.github.com/users/randubin", "html_url": "https://github.com/randubin", "followers_url": "https://api.github.com/users/randubin/followers", "following_url": "https://api.github.com/users/randubin/following{/other_user}", "gists_url": "https://api.github.com/users/randubin/gists{/gist_id}", "starred_url": "https://api.github.com/users/randubin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/randubin/subscriptions", "organizations_url": "https://api.github.com/users/randubin/orgs", "repos_url": "https://api.github.com/users/randubin/repos", "events_url": "https://api.github.com/users/randubin/events{/privacy}", "received_events_url": "https://api.github.com/users/randubin/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Which version of pyarrow do you have ? Could you try to update pyarrow and try again ?", "Thanks for the fast response. I have the latest version '2.0.0' (I tried to update)\r\nI am working with Python 3.8.5", "I think that the issue is similar to this one:https://issues.apache.org/jira/browse/ARROW-9612\r\nThe problem is in arrow when the column data contains long strings.\r\nAny ideas on how to bypass this?", "We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py).\r\n\r\n\r\nIn the meantime you can specify yourself the `ReadOptions` config like this:\r\n```python\r\nimport pyarrow.csv as pac # PyArrow is installed with `datasets`\r\n\r\nread_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case\r\ndataset = load_dataset('csv', data_files=files, read_options=read_options)\r\n```\r\n", "This did help to load the data. But the problem now is that I get:\r\nArrowInvalid: CSV parse error: Expected 5 columns, got 187\r\n\r\nIt seems that this change the parsing so I changed the table to tab-separated and tried to load it directly from pyarrow\r\nBut I got a similar error, again it loaded fine in pandas so I am not sure what to do.\r\n\r\n\r\n\r\n", "Got almost the same error loading a ~5GB TSV file, first got the same error as OP, then tried giving it my own ReadOptions and also got the same CSV parse error.", "> We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py).\r\n> \r\n> In the meantime you can specify yourself the `ReadOptions` config like this:\r\n> \r\n> ```python\r\n> import pyarrow.csv as pac # PyArrow is installed with `datasets`\r\n> \r\n> read_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case\r\n> dataset = load_dataset('csv', data_files=files, read_options=read_options)\r\n> ```\r\n\r\nThis did not work for me, I got\r\n`TypeError: __init__() got an unexpected keyword argument 'read_options'`", "Hi ! Yes because of issues with PyArrow's CSV reader we switched to using the Pandas CSV reader. In particular the `read_options` argument is not supported anymore, but you can pass any parameter of Pandas' `read_csv` function (see the list here in [Pandas documentation](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html))" ]
1,605,036,940,000
1,637,773,159,000
1,605,807,338,000
NONE
null
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas?
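Pulling together the two suggestions from the thread: on the Arrow-based csv script (older `datasets` releases) the block size can be raised so that rows with very long text fields fit into a single block, while newer releases forward keyword arguments to `pandas.read_csv` instead. A hedged sketch, where the 1e9 block size and the file names are placeholders to tune:
```python
# Older datasets releases (Arrow-based csv script): enlarge the parser block
# so long text fields no longer straddle two block boundaries.
import pyarrow.csv as pac
from datasets import load_dataset

read_options = pac.ReadOptions(block_size=int(1e9))  # tune per file
dataset = load_dataset("csv", data_files="large_file.csv", read_options=read_options)

# Newer releases wrap pandas.read_csv instead, so its keyword arguments
# (sep, quoting, dtype, ...) can be passed straight through, e.g. for a TSV:
# dataset = load_dataset("csv", data_files="large_file.tsv", sep="\t")
```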
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/836/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/835/comments
https://api.github.com/repos/huggingface/datasets/issues/835/events
https://github.com/huggingface/datasets/issues/835
740,102,210
MDU6SXNzdWU3NDAxMDIyMTA=
835
Wikipedia postprocessing
{ "login": "bminixhofer", "id": 13353204, "node_id": "MDQ6VXNlcjEzMzUzMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/13353204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bminixhofer", "html_url": "https://github.com/bminixhofer", "followers_url": "https://api.github.com/users/bminixhofer/followers", "following_url": "https://api.github.com/users/bminixhofer/following{/other_user}", "gists_url": "https://api.github.com/users/bminixhofer/gists{/gist_id}", "starred_url": "https://api.github.com/users/bminixhofer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bminixhofer/subscriptions", "organizations_url": "https://api.github.com/users/bminixhofer/orgs", "repos_url": "https://api.github.com/users/bminixhofer/repos", "events_url": "https://api.github.com/users/bminixhofer/events{/privacy}", "received_events_url": "https://api.github.com/users/bminixhofer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @bminixhofer ! Parsing WikiMedia is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell) which is pretty good but not perfect.\r\n\r\nAs an alternative, you can also use the Wiki40b dataset which was pre-processed using an un-released Google internal tool", "Ok, thanks! I'll try the Wiki40b dataset.", "If anyone else is concerned about this, `wiki40b` does indeed seem very well cleaned." ]
1,605,029,198,000
1,605,032,600,000
1,605,030,561,000
NONE
null
Hi, thanks for this library! Running this code: ```py import datasets wikipedia = datasets.load_dataset("wikipedia", "20200501.de") print(wikipedia['train']['text'][0]) ``` I get: ``` mini|Ricardo Flores Magón mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfirio Diaz, Ausschnitt des Gemälde „Tierra y Libertad“ von Idelfonso Carrara (?) von 1930. Ricardo Flores Magón (* 16. September 1874 in San Antonio Eloxochitlán im mexikanischen Bundesstaat Oaxaca; † 22. November 1922 im Bundesgefängnis Leavenworth im US-amerikanischen Bundesstaat Kansas) war als Journalist, Gewerkschafter und Literat ein führender anarchistischer Theoretiker und Aktivist, der die revolutionäre mexikanische Bewegung radikal beeinflusste. Magón war Gründer der Partido Liberal Mexicano und Mitglied der Industrial Workers of the World. Politische Biografie Journalistisch und politisch kämpfte er und sein Bruder sehr kompromisslos gegen die Diktatur Porfirio Diaz. Philosophisch und politisch orientiert an radikal anarchistischen Idealen und den Erfahrungen seiner indigenen Vorfahren bei der gemeinschaftlichen Bewirtschaftung des Gemeindelandes, machte er die Forderung „Land und Freiheit“ (Tierra y Libertad) populär. Besonders Francisco Villa und Emiliano Zapata griffen die Forderung Land und Freiheit auf. Seine Philosophie hatte großen Einfluss auf die Landarbeiter. 1904 floh er in die USA und gründete 1906 die Partido Liberal Mexicano. Im Exil lernte er u. a. Emma Goldman kennen. Er verbrachte die meiste Zeit seines Lebens in Gefängnissen und im Exil und wurde 1918 in den USA wegen „Behinderung der Kriegsanstrengungen“ zu zwanzig Jahren Gefängnis verurteilt. Zu seinem Tod gibt es drei verschiedene Theorien. Offiziell starb er an Herzversagen. Librado Rivera, der die Leiche mit eigenen Augen gesehen hat, geht davon aus, dass Magón von einem Mitgefangenen erdrosselt wurde. Die staatstreue Gewerkschaftszeitung CROM veröffentlichte 1923 einen Beitrag, nachdem Magón von einem Gefängniswärter erschlagen wurde. mini|Die Brüder Ricardo (links) und Enrique Flores Magón (rechts) vor dem Los Angeles County Jail, 1917 [...] ``` so some Markup like `mini|` is still left. Should I run another parser on this text before feeding it to an ML model or is this a known imperfection of parsing Wiki markup? Apologies if this has been asked before.
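As suggested in the comments above, the pre-cleaned Wiki40B dump sidesteps the `mini|` thumbnail residue entirely. A hedged sketch follows; the "de" config name should be double-checked against the dataset card, and some environments may additionally need an Apache Beam runner for the preparation step.
```python
# Sketch of the cleaner alternative mentioned in the comments. The "de" config
# name is an assumption; check the wiki40b dataset card before relying on it.
from datasets import load_dataset

wiki = load_dataset("wiki40b", "de", split="train")
print(wiki[0]["text"][:500])
```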
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/835/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/834/comments
https://api.github.com/repos/huggingface/datasets/issues/834/events
https://github.com/huggingface/datasets/issues/834
740,082,890
MDU6SXNzdWU3NDAwODI4OTA=
834
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Hey @yjernite. This is a very interesting dataset. Would love to work on adding it but I see that the link to the data is to a gdrive folder. Can I just confirm wether dlmanager can handle gdrive urls or would this have to be a manual dl?", "Hi @KMFODA ! A version of WikiLingua is actually already accessible in the [GEM dataset](https://huggingface.co/datasets/gem)\r\n\r\nYou can use it for example to load the French to English translation with:\r\n```python\r\nfrom datasets import load_dataset\r\nwikilingua = load_dataset(\"gem\", \"wiki_lingua_french_fr\")\r\n```\r\n\r\nClosed by https://github.com/huggingface/datasets/pull/1807" ]
1,605,027,643,000
1,618,488,249,000
1,618,488,098,000
MEMBER
null
## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** https://arxiv.org/pdf/2010.03093.pdf - **Data:** https://github.com/esdurmus/Wikilingua - **Motivation:** Included in the GEM shared task. Multilingual. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/834/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/833/comments
https://api.github.com/repos/huggingface/datasets/issues/833/events
https://github.com/huggingface/datasets/issues/833
740,079,692
MDU6SXNzdWU3NDAwNzk2OTI=
833
[GEM] add ASSET text simplification dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,605,027,390,000
1,607,002,695,000
1,607,002,695,000
MEMBER
null
## Adding a Dataset - **Name:** ASSET - **Description:** ASSET is a crowdsourced multi-reference corpus for assessing sentence simplification in English where each simplification was produced by executing several rewriting transformations. - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.424.pdf - **Data:** https://github.com/facebookresearch/asset - **Motivation:** Included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/833/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/832/comments
https://api.github.com/repos/huggingface/datasets/issues/832/events
https://github.com/huggingface/datasets/issues/832
740,077,228
MDU6SXNzdWU3NDAwNzcyMjg=
832
[GEM] add WikiAuto text simplification dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,605,027,203,000
1,607,002,688,000
1,607,002,688,000
MEMBER
null
## Adding a Dataset - **Name:** WikiAuto - **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing. - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.709.pdf - **Data:** https://github.com/chaojiang06/wiki-auto - **Motivation:** Included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/832/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/831/comments
https://api.github.com/repos/huggingface/datasets/issues/831/events
https://github.com/huggingface/datasets/issues/831
740,071,697
MDU6SXNzdWU3NDAwNzE2OTc=
831
[GEM] Add WebNLG dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,605,026,808,000
1,607,002,681,000
1,607,002,681,000
MEMBER
null
## Adding a Dataset - **Name:** WebNLG - **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian - **Paper:** https://www.aclweb.org/anthology/P17-1017.pdf - **Data:** https://webnlg-challenge.loria.fr/download/ - **Motivation:** Included in the GEM shared task, multilingual Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/831/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/830/comments
https://api.github.com/repos/huggingface/datasets/issues/830/events
https://github.com/huggingface/datasets/issues/830
740,065,376
MDU6SXNzdWU3NDAwNjUzNzY=
830
[GEM] add ToTTo Table-to-text dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "closed via #1098 " ]
1,605,026,314,000
1,607,605,562,000
1,607,605,561,000
MEMBER
null
## Adding a Dataset - **Name:** ToTTo - **Description:** ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. - **Paper:** https://arxiv.org/abs/2004.14373 - **Data:** https://github.com/google-research-datasets/totto - **Motivation:** Included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/830/timeline
null
null
null
false