Column schema:
url: stringlengths 58–61
repository_url: stringclasses (1 value)
labels_url: stringlengths 72–75
comments_url: stringlengths 67–70
events_url: stringlengths 65–68
html_url: stringlengths 48–51
id: int64 600M–1.69B
node_id: stringlengths 18–24
number: int64 2–5.8k
title: stringlengths 1–290
user: dict
labels: listlengths 0–4
state: stringclasses (2 values)
locked: bool (1 class)
assignee: dict
assignees: listlengths 0–4
comments: sequencelengths 0–30
created_at: int64 1,587B–1,683B
updated_at: int64 1,588B–1,683B
closed_at: int64 1,588B–1,683B
author_association: stringclasses (3 values)
draft: float64
pull_request: dict
body: stringlengths 0–228k
reactions: dict
timeline_url: stringlengths 67–70
state_reason: stringclasses (3 values)
is_pull_request: bool (1 class)
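The listing above appears to be the column schema of a Hugging Face `datasets`-style export of GitHub issue records, with the rows following below. As a rough, non-authoritative sketch of how such a dump could be inspected, assuming it were available locally as JSON Lines (the `issues.jsonl` file name is a placeholder, not a path that appears anywhere in this section):

```python
from datasets import load_dataset

# Placeholder path: assumes the records below were exported to a local JSON Lines file.
issues = load_dataset("json", data_files="issues.jsonl", split="train")

print(issues.features)   # column names and types, matching the schema above
row = issues[0]          # one issue record as a plain Python dict
print(row["number"], row["title"], row["state"], row["is_pull_request"])
```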
https://api.github.com/repos/huggingface/datasets/issues/876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/876/comments
https://api.github.com/repos/huggingface/datasets/issues/876/events
https://github.com/huggingface/datasets/issues/876
748,195,104
MDU6SXNzdWU3NDgxOTUxMDQ=
876
imdb dataset cannot be loaded
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It looks like there was an issue while building the imdb dataset.\r\nCould you provide more information about your OS and the version of python and `datasets` ?\r\n\r\nAlso could you try again with \r\n```python\r\ndataset = datasets.load_dataset(\"imdb\", split=\"train\", download_mode=\"force_redownload\")\r\n```\r\nto make sure it's not a corrupted file issue ?", "I was using version 1.1.2 and this resolved with version 1.1.3, thanks. ", "Hello,\r\nI have the same pb with 1.8.0", "Hi ! I just tried in 1.8.0 and it worked fine. Can you try again ? Maybe the dataset host had some issues that are fixed now", "Hello,\r\nIt works fine now :) !\r\nThanks !" ]
1,606,033,483,000
1,637,924,836,000
1,608,831,527,000
CONTRIBUTOR
null
null
Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] >>> dataset = datasets.load_dataset("imdb", split="train") ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/876/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/875/comments
https://api.github.com/repos/huggingface/datasets/issues/875/events
https://github.com/huggingface/datasets/issues/875
748,194,311
MDU6SXNzdWU3NDgxOTQzMTE=
875
bug in boolq dataset loading
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I just opened a PR to fix this.\r\nThanks for reporting !" ]
1,606,033,114,000
1,606,212,753,000
1,606,212,753,000
CONTRIBUTOR
null
null
Hi I am trying to load boolq dataset: ``` import datasets datasets.load_dataset("boolq") ``` I am getting the following errors, thanks for your help ``` >>> import datasets 2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2020-11-22 09:16:30.070389: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. >>> datasets.load_dataset("boolq") cahce dir /idiap/temp/rkarimi/cache_home/datasets cahce dir /idiap/temp/rkarimi/cache_home/datasets Using custom data configuration default Downloading and preparing dataset boolq/default (download: 8.36 MiB, generated: 7.47 MiB, post-processed: Unknown size, total: 15.83 MiB) to /idiap/temp/rkarimi/cache_home/datasets/boolq/default/0.1.0/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11... cahce dir /idiap/temp/rkarimi/cache_home/datasets cahce dir /idiap/temp/rkarimi/cache_home/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom custom_download(url, path) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2 compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite) tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/875/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/874/comments
https://api.github.com/repos/huggingface/datasets/issues/874/events
https://github.com/huggingface/datasets/issues/874
748,193,140
MDU6SXNzdWU3NDgxOTMxNDA=
874
trec dataset unavailable
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This was fixed in #740 \r\nCould you try to update `datasets` and try again ?", "This has been fixed in datasets 1.1.3" ]
1,606,032,576,000
1,606,485,402,000
1,606,485,402,000
CONTRIBUTOR
null
null
Hi when I try to load the trec dataset I am getting these errors, thanks for your help `datasets.load_dataset("trec", split="train") ` ``` File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/874/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/873/comments
https://api.github.com/repos/huggingface/datasets/issues/873/events
https://github.com/huggingface/datasets/issues/873
747,959,523
MDU6SXNzdWU3NDc5NTk1MjM=
873
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
{ "login": "vishal-burman", "id": 19861874, "node_id": "MDQ6VXNlcjE5ODYxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishal-burman", "html_url": "https://github.com/vishal-burman", "followers_url": "https://api.github.com/users/vishal-burman/followers", "following_url": "https://api.github.com/users/vishal-burman/following{/other_user}", "gists_url": "https://api.github.com/users/vishal-burman/gists{/gist_id}", "starred_url": "https://api.github.com/users/vishal-burman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishal-burman/subscriptions", "organizations_url": "https://api.github.com/users/vishal-burman/orgs", "repos_url": "https://api.github.com/users/vishal-burman/repos", "events_url": "https://api.github.com/users/vishal-burman/events{/privacy}", "received_events_url": "https://api.github.com/users/vishal-burman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I get the same error. It was fixed some days ago, but again it appears", "Hi @mrm8488 it's working again today without any fix so I am closing this issue.", "I see the issue happening again today - \r\n\r\n[nltk_data] Downloading package stopwords to /root/nltk_data...\r\n[nltk_data] Package stopwords is already up-to-date!\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nNotADirectoryError Traceback (most recent call last)\r\n\r\n<ipython-input-9-cd4bf8bea840> in <module>()\r\n 22 \r\n 23 \r\n---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')\r\n 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')\r\n 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')\r\n\r\n5 frames\r\n\r\n/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)\r\n 132 else:\r\n 133 logging.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 134 files = sorted(os.listdir(top_dir))\r\n 135 \r\n 136 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n\r\nCan someone please take a look ?", "Sometimes happens. Try in a while", "It is working now, thank you. ", "Has anyone solved this ? I still get this error ", "> atal(\"Unsupported publisher: %s\", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = []\r\n> \r\n> NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n> \r\n> Can someone please take a look ?\r\n\r\n2 short-term workarounds:\r\n\r\n1. Use this line instead `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`. [In a related issue](https://github.com/huggingface/datasets/issues/996#issuecomment-997343101), this person mentioned another data source copy that just works.\r\n2. Use the same data source, but edit the urls. Instead of google drive quota problems mentioned in #996, I was getting the \"can't scan this file for viruses\" problem, which results in that prompted html getting downloaded instead of the files. You can get around this by:\r\n 1. Look at the traceback and find out where `cnn_dailymail.py` is on your computer.\r\n 2. Edit the `cnn_stories` and `dm_stories` url's by adding the following to the end of them `&confirm=t`. This should be around line 67.\r\n 3. You may have to remove those confirmation html files in your download directory (`~/.cache/huggingface/datasets/downloads` for me) so that they don't get in the way of the new download attempts.\r\n\r\nEither method works for me. I would've made a PR, but not sure if they want to go with the new ccdv/cnn_dailymail source or not.", "experience the same problem, ccdv/cnn_dailymail not working either. \r\n\r\nSolve this problem by installing datasets library from the master branch:\r\npython -m pip install git+https://github.com/huggingface/datasets.git@master", "Seem to be getting this again even with 1.18.4. 
I believe it worked yesterday.", "Hitting this one as well.", ">Hitting this one as well.\r\n\r\nHas anyone solved this ? I still get this error", "@yoheimiyamoto The solution provided by @davidshinn (i.e. `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`) worked for me." ]
1,605,940,245,000
1,651,735,199,000
1,606,047,485,000
NONE
null
null
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/873/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/873/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/871/comments
https://api.github.com/repos/huggingface/datasets/issues/871/events
https://github.com/huggingface/datasets/issues/871
747,470,136
MDU6SXNzdWU3NDc0NzAxMzY=
871
terminate called after throwing an instance of 'google::protobuf::FatalException'
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. \r\nMaybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)", "closing now, figured out this is because the max length of decoder was set smaller than the input_dimensions. thanks " ]
1,605,876,984,000
1,607,807,792,000
1,607,807,792,000
CONTRIBUTOR
null
null
Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 63/63 [02:47<00:00, 2.18s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): run_t5_base_eval.sh: line 19: 5795 Aborted
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/871/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/870/comments
https://api.github.com/repos/huggingface/datasets/issues/870/events
https://github.com/huggingface/datasets/issues/870
747,021,996
MDU6SXNzdWU3NDcwMjE5OTY=
870
[Feature Request] Add optional parameter in text loading script to preserve linebreaks
{ "login": "jncasey", "id": 31020859, "node_id": "MDQ6VXNlcjMxMDIwODU5", "avatar_url": "https://avatars.githubusercontent.com/u/31020859?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jncasey", "html_url": "https://github.com/jncasey", "followers_url": "https://api.github.com/users/jncasey/followers", "following_url": "https://api.github.com/users/jncasey/following{/other_user}", "gists_url": "https://api.github.com/users/jncasey/gists{/gist_id}", "starred_url": "https://api.github.com/users/jncasey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jncasey/subscriptions", "organizations_url": "https://api.github.com/users/jncasey/orgs", "repos_url": "https://api.github.com/users/jncasey/repos", "events_url": "https://api.github.com/users/jncasey/events{/privacy}", "received_events_url": "https://api.github.com/users/jncasey/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
[ "Hi ! Thanks for your message.\r\nIndeed it's a free feature we can add and that can be useful.\r\nIf you want to contribute, feel free to open a PR to add it to the text dataset script :)", "Resolved via #1913." ]
1,605,829,891,000
1,654,097,153,000
1,654,097,152,000
NONE
null
null
I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. But the first time I processed all of my data into a dataset, I hadn't realized the text loader script was processing the source files line-by-line and stripping off the newlines. Once I caught the issue, I made my own data loader by modifying one line in the default text loader (changing `batch = batch.splitlines()` to `batch = batch.splitlines(True)` inside `_generate_tables`). And so I'm all set as far as my project is concerned. But if my use case is more general, it seems like it'd be pretty trivial to add a kwarg to the default text loader called keeplinebreaks or something, which would default to False and get passed to `splitlines()`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/870/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/866/comments
https://api.github.com/repos/huggingface/datasets/issues/866/events
https://github.com/huggingface/datasets/issues/866
745,719,222
MDU6SXNzdWU3NDU3MTkyMjI=
866
OSCAR from Inria group
{ "login": "jchwenger", "id": 34098722, "node_id": "MDQ6VXNlcjM0MDk4NzIy", "avatar_url": "https://avatars.githubusercontent.com/u/34098722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jchwenger", "html_url": "https://github.com/jchwenger", "followers_url": "https://api.github.com/users/jchwenger/followers", "following_url": "https://api.github.com/users/jchwenger/following{/other_user}", "gists_url": "https://api.github.com/users/jchwenger/gists{/gist_id}", "starred_url": "https://api.github.com/users/jchwenger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jchwenger/subscriptions", "organizations_url": "https://api.github.com/users/jchwenger/orgs", "repos_url": "https://api.github.com/users/jchwenger/repos", "events_url": "https://api.github.com/users/jchwenger/events{/privacy}", "received_events_url": "https://api.github.com/users/jchwenger/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "PR is already open here : #348 \r\nThe only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled).\r\nAs soon as #863 is merged we can start computing them. This will take a bit of time though", "Grand, thanks for this!" ]
1,605,710,454,000
1,605,711,690,000
1,605,711,690,000
NONE
null
null
## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.* - **Paper:** *[here](https://hal.inria.fr/hal-02148693)* - **Data:** *[here](https://oscar-corpus.com/)* - **Motivation:** *useful for unsupervised tasks in separate languages. In an ideal world, your team would be able to obtain the unshuffled version, that could be used to train GPT-2-like models (the shuffled version, I suppose, could be used for translation).* I am aware that you do offer the "colossal" Common Crawl dataset already, but this has the advantage to be available in many subcorpora for different languages.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/866/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/865/comments
https://api.github.com/repos/huggingface/datasets/issues/865/events
https://github.com/huggingface/datasets/issues/865
745,430,497
MDU6SXNzdWU3NDU0MzA0OTc=
865
Have Trouble importing `datasets`
{ "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users/forest1988/followers", "following_url": "https://api.github.com/users/forest1988/following{/other_user}", "gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forest1988/subscriptions", "organizations_url": "https://api.github.com/users/forest1988/orgs", "repos_url": "https://api.github.com/users/forest1988/repos", "events_url": "https://api.github.com/users/forest1988/events{/privacy}", "received_events_url": "https://api.github.com/users/forest1988/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm sorry, this was a problem with my environment.\r\nNow that I have identified the cause of environmental dependency, I would like to fix it and try it.\r\nExcuse me for making a noise." ]
1,605,686,681,000
1,605,687,395,000
1,605,687,395,000
CONTRIBUTOR
null
null
I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets. I cloned the newest version of datasets (master branch), and do `pip install -e .`. Then, `import datasets` causes the error below. ``` ~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in <module> 116 sys.path.append(str(HF_MODULES_CACHE)) 117 --> 118 os.makedirs(HF_MODULES_CACHE, exist_ok=True) 119 if not os.path.exists(os.path.join(HF_MODULES_CACHE, "__init__.py")): 120 with open(os.path.join(HF_MODULES_CACHE, "__init__.py"), "w"): ~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/os.py in makedirs(name, mode, exist_ok) 221 return 222 try: --> 223 mkdir(name, mode) 224 except OSError: 225 # Cannot rely on checking for EEXIST, since the operating system FileNotFoundError: [Errno 2] No such file or directory: '<MY_HOME_DIRECTORY>/.cache/huggingface/modules' ``` The error occurs in `os.makedirs` in `file_utils.py`, even though `exist_ok = True` option is set. (I use Python 3.8, so `exist_ok` is expected to work.) I've checked some environment variables, and they are set as below. ``` *** NameError: name 'HF_MODULES_CACHE' is not defined *** NameError: name 'hf_cache_home' is not defined *** NameError: name 'XDG_CACHE_HOME' is not defined ``` Should I set some environment variables before using this library? And, do you have any idea why "No such file or directory" occurs even though the `exist_ok = True` option is set? Thank you in advance.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/865/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/864/comments
https://api.github.com/repos/huggingface/datasets/issues/864/events
https://github.com/huggingface/datasets/issues/864
745,322,357
MDU6SXNzdWU3NDUzMjIzNTc=
864
Unable to download cnn_dailymail dataset
{ "login": "rohitashwa1907", "id": 46031058, "node_id": "MDQ6VXNlcjQ2MDMxMDU4", "avatar_url": "https://avatars.githubusercontent.com/u/46031058?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohitashwa1907", "html_url": "https://github.com/rohitashwa1907", "followers_url": "https://api.github.com/users/rohitashwa1907/followers", "following_url": "https://api.github.com/users/rohitashwa1907/following{/other_user}", "gists_url": "https://api.github.com/users/rohitashwa1907/gists{/gist_id}", "starred_url": "https://api.github.com/users/rohitashwa1907/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohitashwa1907/subscriptions", "organizations_url": "https://api.github.com/users/rohitashwa1907/orgs", "repos_url": "https://api.github.com/users/rohitashwa1907/repos", "events_url": "https://api.github.com/users/rohitashwa1907/events{/privacy}", "received_events_url": "https://api.github.com/users/rohitashwa1907/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "Same error here!\r\n", "Same here! My kaggle notebook stopped working like yesterday. It's strange because I have fixed version of datasets==1.1.2", "I'm looking at it right now", "I couldn't reproduce unfortunately. I tried\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"cnn_dailymail\", \"3.0.0\", download_mode=\"force_redownload\")\r\n```\r\nand it worked fine on both my env (python 3.7.2) and colab (python 3.6.9)\r\n\r\nMaybe there was an issue with the google drive download link of the dataset ?\r\nAre you still having the issue ? If so could your give me more info about your python and requests version ?", "No, It's working fine now. Very strange. Here are my python and request versions\r\n\r\nrequests 2.24.0\r\nPython 3.8.2", "It's working as expected. Closing the issue \r\n\r\nThanks everybody." ]
1,605,674,282,000
1,605,849,731,000
1,605,849,730,000
NONE
null
null
### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-8-47c39c228935> in <module>() 1 from datasets import load_dataset 2 ----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') 4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 469 if not downloaded_from_gcs: 470 self._download_and_prepare( --> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 472 ) 473 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 524 split_dict = SplitDict(dataset_name=self.name) 525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 527 528 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Thanks for any suggestions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/864/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/861/comments
https://api.github.com/repos/huggingface/datasets/issues/861/events
https://github.com/huggingface/datasets/issues/861
744,753,458
MDU6SXNzdWU3NDQ3NTM0NTg=
861
Possible Bug: Small training/dataset file creates gigantic output
{ "login": "NebelAI", "id": 7240417, "node_id": "MDQ6VXNlcjcyNDA0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7240417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NebelAI", "html_url": "https://github.com/NebelAI", "followers_url": "https://api.github.com/users/NebelAI/followers", "following_url": "https://api.github.com/users/NebelAI/following{/other_user}", "gists_url": "https://api.github.com/users/NebelAI/gists{/gist_id}", "starred_url": "https://api.github.com/users/NebelAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NebelAI/subscriptions", "organizations_url": "https://api.github.com/users/NebelAI/orgs", "repos_url": "https://api.github.com/users/NebelAI/repos", "events_url": "https://api.github.com/users/NebelAI/events{/privacy}", "received_events_url": "https://api.github.com/users/NebelAI/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[ "The preprocessing tokenizes the input text. Tokenization outputs `input_ids`, `attention_mask`, `token_type_ids` and `special_tokens_mask`. All those are of length`max_seq_length` because of padding. Therefore for each sample it generate 4 *`max_seq_length` integers. Currently they're all saved as int64. This is why the tokenization takes so much space.\r\n\r\nI'm sure we can optimize that though\r\nWhat do you think @sgugger ?", "First I think we should disable padding in the dataset processing and let the data collator do it.\r\n\r\nThen I'm wondering if you need attention_mask and token_type_ids at this point ?\r\n\r\nFinally we can also specify the output feature types at this line https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py#L280 to use more optimized integer precisions for the output. Maybe something like:\r\n- input_ids: uint16 or uint32\r\n- token_type_ids: uint8 or bool\r\n- attention_mask: bool\r\n- special_tokens_mask: bool\r\n\r\nAlso IMO these changes are all on the `transformers` side. Maybe we should discuss on the `transformers` repo", "> First I think we should disable padding in the dataset processing and let the data collator do it.\r\n\r\nNo, you can't do that on TPUs as dynamic shapes will result in a very slow training. The script can however be tweaked to use the `PaddingDataCollator` with a fixed max length instead of dynamic batching.\r\n\r\nFor the other optimizations, they can be done by changing the script directly for each user's use case. Not sure we can find something that is general enough to be in transformers or the examples script.", "Oh yes right..\r\nDo you think that a lazy map feature on the `datasets` side could help to avoid storing padded tokenized texts then ?", "I think I can do the tweak mentioned above with the data collator as short fix (but fully focused on v4 right now so that will be for later this week, beginning of next week :-) ).\r\nIf it doesn't hurt performance to tokenize on the fly, that would clearly be the long-term solution however!", "> Hey guys,\r\n> \r\n> I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely.\r\n> \r\n> I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug?\r\n> \r\n> I've used the following CMD:\r\n> `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`\r\n\r\nIt's actually because of the parameter 'preprocessing_num_worker' when using TPU. \r\nI am also planning to have my model trained on the google TPU with a 11gb text corpus. With x8 cores enabled, each TPU core has its own dataset. 
When not using distributed training, the preprocessed file is about 77gb. On the opposite, if enable xla, the file produced will easily consume all my free space(more than 220gb, I think it will be, in the end, around 600gb ). \r\nSo I think that's maybe where the problem came from. \r\n\r\nIs there any possibility that all of the cores share the same preprocess dataset?\r\n\r\n@sgugger @RammMaschine ", "Hi @NebelAI, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs." ]
1,605,620,939,000
1,617,113,044,000
1,616,414,695,000
NONE
null
null
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/861/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/860/comments
https://api.github.com/repos/huggingface/datasets/issues/860/events
https://github.com/huggingface/datasets/issues/860
744,750,691
MDU6SXNzdWU3NDQ3NTA2OTE=
860
wmt16 cs-en does not donwload
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
[ "We know host this file, so downloading should be more robust." ]
1,605,620,735,000
1,664,972,820,000
1,664,972,819,000
CONTRIBUTOR
null
null
Hi I am trying with wmt16, cs-en pair, thanks for the help, perhaps similar to the ro-en issue. thanks split="train", n_obs=data_args.n_train) for task in data_args.task} File "finetune_t5_trainer.py", line 109, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/home/rabeeh/internship/seq2seq/tasks/tasks.py", line 82, in get_dataset dataset = load_dataset("wmt16", self.pair, split=split) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/rabeeh/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/860/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/854/comments
https://api.github.com/repos/huggingface/datasets/issues/854/events
https://github.com/huggingface/datasets/issues/854
743,675,376
MDU6SXNzdWU3NDM2NzUzNzY=
854
wmt16 does not download
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
[ "Hi,I also posted it to the forum, but this is a bug, perhaps it needs to be reported here? thanks ", "It looks like the official OPUS server for WMT16 doesn't provide the data files anymore (503 error).\r\nI searched a bit and couldn't find a mirror except maybe http://nlp.ffzg.hr/resources/corpora/setimes/ (the data are a cleaned version of the original ones though)\r\nShould we consider replacing the old urls with these ones even though it's not the exact same data ?", "The data storage is down at the moment. Sorry. Hopefully, it will come back soon. Apologies for the inconvenience ...", "Dear great huggingface team, this is not working yet, I really appreciate some temporary fix on this, I need this for my project and this is time sensitive and I will be grateful for your help on this. ", "We have reached out to the OPUS team which is currently working on making the data available again. Cc @jorgtied ", "thank you @thomwolf and HuggingFace team for the help. ", "OPUS is still down - hopefully back tomorrow.", "Hi, this is still down, I would be really grateful if you could ping them one more time. thank you so much. ", "Hi\r\nI am trying with multiple setting of wmt datasets and all failed so far, I need to have at least one dataset working for testing somecodes, and this is really time sensitive, I greatly appreciate letting me know of one translation datasets currently working. thanks ", "It is still down, unfortunately. I'm sorry for that. It should come up again later today or tomorrow at the latest if no additional complications will happen.", "Hi all, \r\nI pulled a request that fix this issue by replacing urls. \r\n\r\nhttps://github.com/huggingface/datasets/pull/1901\r\n\r\nThanks!\r\n", "It's still down for the wmt." ]
1,605,519,111,000
1,664,972,862,000
1,664,972,862,000
CONTRIBUTOR
null
null
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/854/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/853
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/853/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/853/comments
https://api.github.com/repos/huggingface/datasets/issues/853/events
https://github.com/huggingface/datasets/issues/853
743,426,583
MDU6SXNzdWU3NDM0MjY1ODM=
853
concatenate_datasets support axis=0 or 1 ?
{ "login": "renqingcolin", "id": 12437751, "node_id": "MDQ6VXNlcjEyNDM3NzUx", "avatar_url": "https://avatars.githubusercontent.com/u/12437751?v=4", "gravatar_id": "", "url": "https://api.github.com/users/renqingcolin", "html_url": "https://github.com/renqingcolin", "followers_url": "https://api.github.com/users/renqingcolin/followers", "following_url": "https://api.github.com/users/renqingcolin/following{/other_user}", "gists_url": "https://api.github.com/users/renqingcolin/gists{/gist_id}", "starred_url": "https://api.github.com/users/renqingcolin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/renqingcolin/subscriptions", "organizations_url": "https://api.github.com/users/renqingcolin/orgs", "repos_url": "https://api.github.com/users/renqingcolin/repos", "events_url": "https://api.github.com/users/renqingcolin/events{/privacy}", "received_events_url": "https://api.github.com/users/renqingcolin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892884, "node_id": "MDU6TGFiZWwxOTM1ODkyODg0", "url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted", "name": "help wanted", "color": "008672", "default": true, "description": "Extra attention is needed" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[ "Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate the columns.\r\nCurrently to add more columns to a dataset, one must use `map`.\r\nWhat you can do is somehting like this:\r\n```python\r\n# suppose you have datasets d1, d2, d3\r\ndef add_columns(example, index):\r\n example.update(d2[index])\r\n example.update(d3[index])\r\n return example\r\n\r\nfull_dataset = d1.map(add_columns, with_indices=True)\r\n```", "Closing this one, feel free to re-open if you have other questions about this issue", "That's not really difficult to add, though, no?\r\nI think it can be done without copy.\r\nMaybe let's add it to the roadmap?", "Actually it's doable but requires to update the `Dataset._data_files` schema to support this.\r\nI'm re-opening this since we may want to add this in the future", "Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `concatenate_datasets` function in `arrow_dataset.py` and when that is set to 1 concatenate columns instead of rows. ", "Hi ! I would love to see this feature implemented as well :) Thank you for proposing your help !\r\n\r\nHere is a few things about the current implementation:\r\n- A dataset object is a wrapper of one `pyarrow.Table` that contains the data\r\n- Pyarrow offers an API that allows to transform Table objects. For example there are functions like `concat_tables`, `Table.rename_columns`, `Table.add_column` etc.\r\n\r\nTherefore adding columns from another dataset is possible thanks to the pyarrow API and in particular `Table.add_column` :) \r\n\r\nHowever this breaks some features we have regarding pickle. A dataset object can be pickled and unpickled without loading all the data in memory. It is useful for multiprocessing for example. Pickling a dataset object is possible thanks to the `Dataset._data_files` which defines the list of arrow files that will be used to form the final Table (basically all the data from each files are concatenated on axis 0).\r\n\r\nTherefore to be able to add columns to a Dataset and still be able to work with it in a multiprocessing setup, we need to extend this last aspect to be able to reconstruct a Table object from multiple arrow files that are combined in both axis 0 and 1. Currently this reconstruction mechanism only supports axis 0.\r\n\r\nI'm sure we can figure something out that enables users to add columns from another dataset while keeping the multiprocessing support.", "@lhoestq, we have two Pull Requests to implement:\r\n- Dataset.add_item: #1870\r\n- Dataset.add_column: #2145\r\nwhich add a single row or column, repectively.\r\n\r\nThe request here is to implement the concatenation of *multiple* rows/columns. Am I right?\r\n\r\nWe should agree on the API:\r\n- `concatenate_datasets` with `axis`?\r\n- other Dataset method name?", "For the API, I like `concatenate_datasets` with `axis` personally :)\r\nFrom a list of `Dataset` objects, it would concatenate them to a new `Dataset` object backed by a `ConcatenationTable`, that is the concatenation of the tables of each input dataset. 
The concatenation is either on axis=0 (append rows) or on axis=1 (append columns).\r\n\r\nRegarding what we need to implement:\r\nThe axis=0 is already supported and is the current behavior of `concatenate_datasets`.\r\nAlso `add_item` is not needed to implement axis=1 (though it's an awesome addition to this library).\r\n\r\nTo implement axis=1, we either need `add_column` or a `ConcatenationTable` constructor to concatenate tables horizontally.\r\nI have a preference for using a `ConcatenationTable` constructor because this way we can end up with a `ConcatenationTable` with only 1 additional block per table, while `add_column` would add 1 block per new column.\r\n\r\nMaybe we can simply have an equivalent of `ConcatenationTable.from_tables` but for axis=1 ?\r\n`axis` could also be an argument of `ConcatenationTable.from_tables`", "@lhoestq I think I guessed your suggestions in advance... 😉 #2151", "Cool ! Sorry I missed this one ^^\r\nI'm taking a look ;)" ]
1,605,494,783,000
1,618,848,438,000
1,618,848,438,000
NONE
null
null
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/853/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/853/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/852
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/852/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/852/comments
https://api.github.com/repos/huggingface/datasets/issues/852/events
https://github.com/huggingface/datasets/issues/852
743,396,240
MDU6SXNzdWU3NDMzOTYyNDA=
852
wmt cannot be downloaded
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[]
1,605,488,681,000
1,605,519,118,000
1,605,519,118,000
CONTRIBUTOR
null
null
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/852/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/849
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/849/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/849/comments
https://api.github.com/repos/huggingface/datasets/issues/849/events
https://github.com/huggingface/datasets/issues/849
742,263,333
MDU6SXNzdWU3NDIyNjMzMzM=
849
Load amazon dataset
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for reporting !\r\nWe plan to show information about the different configs of the datasets on the website, with the corresponding `load_dataset` calls.\r\n\r\nAlso I think the bullet points formatting has been fixed" ]
1,605,256,464,000
1,605,597,779,000
1,605,597,779,000
CONTRIBUTOR
null
null
Hi, I was going through amazon_us_reviews dataset and found that example API usage given on website is different from the API usage while loading dataset. Eg. what API usage is on the [website](https://huggingface.co/datasets/amazon_us_reviews) ``` from datasets import load_dataset dataset = load_dataset("amazon_us_reviews") ``` How it is when I tried (the error generated does point me to the right direction though) ``` from datasets import load_dataset dataset = load_dataset("amazon_us_reviews", 'Books_v1_00') ``` Also, there is some issue with formatting as it's not showing bullet list in description with new line. Can I work on it?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/849/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/848
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/848/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/848/comments
https://api.github.com/repos/huggingface/datasets/issues/848/events
https://github.com/huggingface/datasets/issues/848
742,240,942
MDU6SXNzdWU3NDIyNDA5NDI=
848
Error when concatenate_datasets
{ "login": "shexuan", "id": 25664170, "node_id": "MDQ6VXNlcjI1NjY0MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shexuan", "html_url": "https://github.com/shexuan", "followers_url": "https://api.github.com/users/shexuan/followers", "following_url": "https://api.github.com/users/shexuan/following{/other_user}", "gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shexuan/subscriptions", "organizations_url": "https://api.github.com/users/shexuan/orgs", "repos_url": "https://api.github.com/users/shexuan/repos", "events_url": "https://api.github.com/users/shexuan/events{/privacy}", "received_events_url": "https://api.github.com/users/shexuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.\r\n\r\nThe indices mapping correspond to a mapping on top of the data table that is used to re-order/select a sample of the original data table. For example if you do `dataset.train_test_split`, then the resulting train and test datasets will have both an indices mapping to tell which examples are in train and which ones in test.\r\n\r\nBefore saving your datasets on disk, you should call `dataset.flatten_indices()` to remove the indices mapping. It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets.\r\n", "> As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.\r\n> \r\n> The indices mapping correspond to a mapping on top of the data table that is used to re-order/select a sample of the original data table. For example if you do `dataset.train_test_split`, then the resulting train and test datasets will have both an indices mapping to tell which examples are in train and which ones in test.\r\n> \r\n> Before saving your datasets on disk, you should call `dataset.flatten_indices()` to remove the indices mapping. It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets.\r\n\r\n`dataset.flatten_indices()` solved my problem, thanks so much!", "@lhoestq we can add a mention of `dataset.flatten_indices()` in the error message (no rush, just put it on your TODO list or I can do it when I come at it)", "Yup I agree ! And in the docs as well" ]
1,605,254,162,000
1,605,289,259,000
1,605,282,910,000
NONE
null
null
Hello, when I concatenate two dataset loading from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported ValueError blow: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-38-74fa525512ca> in <module> ----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset]) /opt/miniconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py in concatenate_datasets(dsets, info, split) 2547 "However datasets' indices {} come from memory and datasets' indices {} come from disk.".format( 2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]], -> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]], 2550 ) 2551 ) ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` But it's curious both of my datasets loading from disk, so I check the source code in `arrow_dataset.py` about the Error: ``` trn_dataset._data_files # output [{'filename': 'data/train_dataset/csv-train.arrow', 'skip': 0, 'take': 593264}] test_dataset._data_files # output [{'filename': 'data/test_dataset/csv-test.arrow', 'skip': 0, 'take': 424383}] print([not dset._data_files for dset in [trn_dataset, test_dataset]]) # [False, False] # And I tested the code the same as arrow_dataset, but nothing happened dsets = [trn_dataset, test_dataset] dsets_in_memory = [not dset._data_files for dset in dsets] if any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory): raise ValueError( "Datasets should ALL come from memory, or should ALL come from disk.\n" "However datasets {} come from memory and datasets {} come from disk.".format( [i for i in range(len(dsets)) if dsets_in_memory[i]], [i for i in range(len(dsets)) if not dsets_in_memory[i]], ) ) ``` Any suggestions would be greatly appreciated! Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/848/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/847/comments
https://api.github.com/repos/huggingface/datasets/issues/847/events
https://github.com/huggingface/datasets/issues/847
742,179,495
MDU6SXNzdWU3NDIxNzk0OTU=
847
multiprocessing in dataset map "can only test a child process"
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It looks like an issue with wandb/tqdm here.\r\nWe're using the `multiprocess` library instead of the `multiprocessing` builtin python package to support various types of mapping functions. Maybe there's some sort of incompatibility.\r\n\r\nCould you make a minimal script to reproduce or a google colab ?", "hi facing the same issue here - \r\n\r\n`AssertionError: Caught AssertionError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 996, in emit\r\n stream.write(msg)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py\", line 723, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 158, in _publish_output\r\n self._publish(rec)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"/usr/lib/python3.6/multiprocessing/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"<ipython-input-8-a4d9a08d114e>\", line 20, in __getitem__\r\n return_token_type_ids=True\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\", line 2405, in encode_plus\r\n **kwargs,\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py\", line 2125, in _get_padding_truncation_strategies\r\n \"Truncation was not explicitly activated but `max_length` is provided a specific value, \"\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1320, in warning\r\n self._log(WARNING, msg, args, **kwargs)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1444, in _log\r\n self.handle(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1454, in handle\r\n self.callHandlers(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1516, in callHandlers\r\n hdlr.handle(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 865, in handle\r\n self.emit(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 1000, in emit\r\n self.handleError(record)\r\n File \"/usr/lib/python3.6/logging/__init__.py\", line 917, in handleError\r\n sys.stderr.write('--- Logging error ---\\n')\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py\", line 723, in _console_callback\r\n 
self._backend.interface.publish_output(name, data)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 158, in _publish_output\r\n self._publish(rec)\r\n File \"/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"/usr/lib/python3.6/multiprocessing/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process`\r\n", "It looks like this warning : \r\n\"Truncation was not explicitly activated but max_length is provided a specific value, \"\r\nis not handled well by wandb.\r\n\r\nThe error occurs when calling the tokenizer.\r\nMaybe you can try to specify `truncation=True` when calling the tokenizer to remove the warning ?\r\nOtherwise I don't know why wandb would fail on a warning. Maybe one of its logging handlers have some issues with the logging of tokenizers. Maybe @n1t0 knows more about this ?", "I'm having a similar issue but when I try to do multiprocessing with the `DataLoader`\r\n\r\nCode to reproduce:\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]')\r\nbook_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=5000)\r\nbook_corpus.set_format(type='torch', columns=['text', \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\nfrom transformers import DataCollatorForWholeWordMask\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\ndata_collator = DataCollatorForWholeWordMask(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./mobile_linear_att_8L_128_128_03layerdrop_shared\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=64,\r\n save_steps=50,\r\n save_total_limit=2,\r\n logging_first_step=True,\r\n warmup_steps=100,\r\n logging_steps=50,\r\n gradient_accumulation_steps=1,\r\n fp16=True,\r\n **dataloader_num_workers=10**,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=book_corpus,\r\n tokenizer=tokenizer)\r\n\r\ntrainer.train()\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<timed eval> in <module>\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path, trial)\r\n 869 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)\r\n 870 \r\n--> 871 for step, inputs in enumerate(epoch_iterator):\r\n 872 \r\n 873 # Skip past any already trained steps if resuming training\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)\r\n 433 if self._sampler_iter is None:\r\n 434 self._reset()\r\n--> 435 data = self._next_data()\r\n 436 self._num_yielded += 1\r\n 437 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _next_data(self)\r\n 1083 else:\r\n 1084 del self._task_info[idx]\r\n-> 1085 return self._process_data(data)\r\n 
1086 \r\n 1087 def _try_put_index(self):\r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_data(self, data)\r\n 1109 self._try_put_index()\r\n 1110 if isinstance(data, ExceptionWrapper):\r\n-> 1111 data.reraise()\r\n 1112 return data\r\n 1113 \r\n\r\n~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/_utils.py in reraise(self)\r\n 426 # have message field\r\n 427 raise self.exc_type(message=msg)\r\n--> 428 raise self.exc_type(msg)\r\n 429 \r\n 430 \r\n\r\nAssertionError: Caught AssertionError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1087, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 1074, in _getitem\r\n format_kwargs=format_kwargs,\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 890, in _convert_outputs\r\n v = map_nested(command, v, **map_nested_kwargs)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/utils/py_utils.py\", line 225, in map_nested\r\n return function(data_struct)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py\", line 851, in command\r\n return torch.tensor(x, **format_kwargs)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py\", line 101, in _showwarnmsg\r\n _showwarnmsg_impl(msg)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py\", line 30, in _showwarnmsg_impl\r\n file.write(text)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/wandb_run.py\", line 723, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 158, in _publish_output\r\n self._publish(rec)\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"/home/ad/anaconda3/envs/tfm/lib/python3.6/multiprocessing/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process\r\n```\r\n\r\nAs a workaround I have commented line 456 and 457 in `/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py`", "Isn't it more the pytorch warning on the use of non-writable memory for tensor that trigger this here @lhoestq? 
(since it seems to be a warning triggered in `torch.tensor()`", "Yep this time this is a warning from pytorch that causes wandb to not work properly.\r\nCould this by a wandb issue ?", "Hi @timothyjlaurent @gaceladri \r\nIf you're running `transformers` from `master` you can try setting the env var `WAND_DISABLE=true` (from https://github.com/huggingface/transformers/pull/9896) and try again ?\r\nThis issue might be related to https://github.com/huggingface/transformers/issues/9623 ", "I have commented the lines that cause my code break. I'm now seeing my reports on Wandb and my code does not break. I am training now, so I will check probably in 6 hours. I suppose that setting wandb disable will work as well.", "This seems to be a bug in `wandb` (see https://github.com/wandb/wandb/issues/1994)." ]
1,605,247,264,000
1,664,972,571,000
1,664,972,571,000
NONE
null
null
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/847/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/847/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/846/comments
https://api.github.com/repos/huggingface/datasets/issues/846/events
https://github.com/huggingface/datasets/issues/846
741,885,174
MDU6SXNzdWU3NDE4ODUxNzQ=
846
Add HoVer multi-hop fact verification dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "Hi @yjernite I'm new but wanted to contribute. Has anyone already taken this problem and do you think it is suitable for newbies?", "Hi @tenjjin! This dataset is still up for grabs! Here's the link with the guide to add it. You should play around with the library first (download and look at a few datasets), then follow the steps here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md", "Closed by #1399 " ]
1,605,210,946,000
1,607,636,853,000
1,607,636,853,000
MEMBER
null
null
## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction benchmarks (HotpotQA, which the dataset was based off, notwithstanding) Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/846/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/843/comments
https://api.github.com/repos/huggingface/datasets/issues/843/events
https://github.com/huggingface/datasets/issues/843
741,531,121
MDU6SXNzdWU3NDE1MzExMjE=
843
use_custom_baseline still produces errors for bertscore
{ "login": "penatbater", "id": 37921244, "node_id": "MDQ6VXNlcjM3OTIxMjQ0", "avatar_url": "https://avatars.githubusercontent.com/u/37921244?v=4", "gravatar_id": "", "url": "https://api.github.com/users/penatbater", "html_url": "https://github.com/penatbater", "followers_url": "https://api.github.com/users/penatbater/followers", "following_url": "https://api.github.com/users/penatbater/following{/other_user}", "gists_url": "https://api.github.com/users/penatbater/gists{/gist_id}", "starred_url": "https://api.github.com/users/penatbater/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/penatbater/subscriptions", "organizations_url": "https://api.github.com/users/penatbater/orgs", "repos_url": "https://api.github.com/users/penatbater/repos", "events_url": "https://api.github.com/users/penatbater/events{/privacy}", "received_events_url": "https://api.github.com/users/penatbater/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
null
[]
[ "Thanks for reporting ! That's a bug indeed\r\nIf you want to contribute, feel free to fix this issue and open a PR :)", "This error is because of a mismatch between `datasets` and `bert_score`. With `datasets=1.1.2` and `bert_score>=0.3.6` it works ok. So `pip install -U bert_score` should fix the problem. ", "Thanks for the heads up @pvl and for the PR as well :)", "Hello everyone,\r\n\r\nI think the problem is not solved: \r\n\r\n```\r\nfrom datasets import load_metric\r\nmetric=load_metric('bertscore')\r\nmetric.compute(\r\n predictions=predictions,\r\n references=references,\r\n lang='fr',\r\n rescale_with_baseline=True\r\n)\r\nTypeError: get_hash() missing 2 required positional arguments: 'use_custom_baseline' and 'use_fast_tokenizer'\r\n```\r\nThis code is produced using `Python 3.6.9 datasets==1.1.2 and bert_score==0.3.10`", "Hi ! This has been fixed by https://github.com/huggingface/datasets/pull/2770, we'll do a new release soon to make the fix available :)\r\n\r\nIn the meantime please use an older version of `bert_score`" ]
1,605,181,472,000
1,630,404,404,000
1,612,880,508,000
NONE
null
null
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'` Adding 'use_custom_baseline = False' as an argument produces this error `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'` This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/843/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/842/comments
https://api.github.com/repos/huggingface/datasets/issues/842/events
https://github.com/huggingface/datasets/issues/842
741,208,428
MDU6SXNzdWU3NDEyMDg0Mjg=
842
How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
{ "login": "shangw-nvidia", "id": 66387198, "node_id": "MDQ6VXNlcjY2Mzg3MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shangw-nvidia", "html_url": "https://github.com/shangw-nvidia", "followers_url": "https://api.github.com/users/shangw-nvidia/followers", "following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}", "gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}", "starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions", "organizations_url": "https://api.github.com/users/shangw-nvidia/orgs", "repos_url": "https://api.github.com/users/shangw-nvidia/repos", "events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}", "received_events_url": "https://api.github.com/users/shangw-nvidia/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Right now multiprocessing only runs on single node.\r\n\r\nHowever it's probably possible to extend it to support multi nodes. Indeed we're using the `multiprocess` library from the `pathos` project to do multiprocessing in `datasets`, and `pathos` is made to support parallelism on several nodes. More info about pathos [on the pathos repo](https://github.com/uqfoundation/pathos).\r\n\r\nIf you're familiar with pathos or if you want to give it a try, it could be a nice addition to the library :)", "Curious to hear if anything on that side changed or if you suggestions to do it changed @lhoestq :)\r\n\r\nFor our use-case, we are entering the regime where trading a few more instances to save a few days would be nice :)", "Currently for multi-node setups we're mostly going towards a nice integration with Dask. But I wouldn't exclude exploring `pathos` more at one point" ]
1,605,146,678,000
1,665,591,051,000
null
NONE
null
null
Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training (since more than one node would be available), I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other nodes wait for it to finish? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/842/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/841/comments
https://api.github.com/repos/huggingface/datasets/issues/841/events
https://github.com/huggingface/datasets/issues/841
740,737,448
MDU6SXNzdWU3NDA3Mzc0NDg=
841
Cannot reuse datasets already downloaded
{ "login": "jc-hou", "id": 30210529, "node_id": "MDQ6VXNlcjMwMjEwNTI5", "avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jc-hou", "html_url": "https://github.com/jc-hou", "followers_url": "https://api.github.com/users/jc-hou/followers", "following_url": "https://api.github.com/users/jc-hou/following{/other_user}", "gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}", "starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions", "organizations_url": "https://api.github.com/users/jc-hou/orgs", "repos_url": "https://api.github.com/users/jc-hou/repos", "events_url": "https://api.github.com/users/jc-hou/events{/privacy}", "received_events_url": "https://api.github.com/users/jc-hou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems the process needs '/datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py'\r\nWhere and how to assign this ```wikipedia.py``` after I manually download it ?", "\r\ndownload the ```wikipedia.py``` at the working directory and go with ```dataset = load_dataset('wikipedia.py', '20200501.en')``` works." ]
1,605,098,535,000
1,605,118,636,000
1,605,118,636,000
NONE
null
null
Hello, I need to connect to a frontal node (with http proxy, no gpu) before connecting to a gpu node (but no http proxy, so can not use wget so on). I successfully downloaded and reuse the wikipedia datasets in a frontal node. When I connect to the gpu node, I supposed to use the downloaded datasets from cache, but failed and end with time out error. On frontal node: ``` >>> from datasets import load_dataset >>> dataset = load_dataset('wikipedia', '20200501.en') Reusing dataset wikipedia (/linkhome/rech/genini01/uua34ms/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/f92599dfccab29832c442b82870fa8f6983e5b4ebbf5e6e2dcbe894e325339cd) /linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.) return torch._C._cuda_getDeviceCount() > 0 ``` On gpu node: ``` >>> from datasets import load_dataset >>> dataset = load_dataset('wikipedia', '20200501.en') Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 160, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 84, in create_connection raise err File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 74, in create_connection sock.connect(sa) TimeoutError: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen chunked=chunked, File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request self._validate_conn(conn) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn conn.connect() File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 309, in connect conn = self._new_conn() File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 172, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 727, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File 
"/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/retry.py", line 446, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 590, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 264, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return requests.head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',)) ``` Any advice?Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/841/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/839/comments
https://api.github.com/repos/huggingface/datasets/issues/839/events
https://github.com/huggingface/datasets/issues/839
740,355,270
MDU6SXNzdWU3NDAzNTUyNzA=
839
XSum dataset missing spaces between sentences
{ "login": "loganlebanoff", "id": 10007282, "node_id": "MDQ6VXNlcjEwMDA3Mjgy", "avatar_url": "https://avatars.githubusercontent.com/u/10007282?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loganlebanoff", "html_url": "https://github.com/loganlebanoff", "followers_url": "https://api.github.com/users/loganlebanoff/followers", "following_url": "https://api.github.com/users/loganlebanoff/following{/other_user}", "gists_url": "https://api.github.com/users/loganlebanoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/loganlebanoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loganlebanoff/subscriptions", "organizations_url": "https://api.github.com/users/loganlebanoff/orgs", "repos_url": "https://api.github.com/users/loganlebanoff/repos", "events_url": "https://api.github.com/users/loganlebanoff/events{/privacy}", "received_events_url": "https://api.github.com/users/loganlebanoff/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,605,054,883,000
1,605,054,883,000
null
NONE
null
null
I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set): `The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like this morning 'Oh I think you're nominated'", said Dappy."And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!"Bandmate Fazer added: "We thought it's best of us to come down and mingle with everyone and say hello to the cameras. And now we find we've got four nominations."The band have two shots at the best song prize, getting the nod for their Tynchy Stryder collaboration Number One, and single Strong Again.Their album Uncle B will also go up against records by the likes of Beyonce and Kanye West.N-Dubz picked up the best newcomer Mobo in 2007, but female member Tulisa said they wouldn't be too disappointed if they didn't win this time around."At the end of the day we're grateful to be where we are in our careers."If it don't happen then it don't happen - live to fight another day and keep on making albums and hits for the fans."Dappy also revealed they could be performing live several times on the night.The group will be doing Number One and also a possible rendition of the War Child single, I Got Soul.The charity song is a re-working of The Killers' All These Things That I've Done and is set to feature artists like Chipmunk, Ironik and Pixie Lott.This year's Mobos will be held outside of London for the first time, in Glasgow on 30 September.N-Dubz said they were looking forward to performing for their Scottish fans and boasted about their recent shows north of the border."We just done Edinburgh the other day," said Dappy."We smashed up an N-Dubz show over there. We done Aberdeen about three or four months ago - we smashed up that show over there! Everywhere we go we smash it up!"`
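A rough heuristic sketch for re-inserting the missing sentence boundaries before training; the regex is an assumption and may over-split around abbreviations or unusual quoting, so it is a stopgap rather than a proper fix of the dataset.

```python
import re

def add_sentence_spaces(text: str) -> str:
    # Insert a space when a sentence-ending mark (optionally followed by a closing
    # quote) is immediately followed by a capital letter starting the next sentence.
    return re.sub(r'([.!?]"?)([A-Z])', r'\1 \2', text)

print(add_sentence_spaces('category."We got told like this morning'))
# -> category." We got told like this morning
```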
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/839/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/836/comments
https://api.github.com/repos/huggingface/datasets/issues/836/events
https://github.com/huggingface/datasets/issues/836
740,187,613
MDU6SXNzdWU3NDAxODc2MTM=
836
load_dataset with 'csv' is not working, while the same file loads with 'text' mode or with pandas
{ "login": "randubin", "id": 8919490, "node_id": "MDQ6VXNlcjg5MTk0OTA=", "avatar_url": "https://avatars.githubusercontent.com/u/8919490?v=4", "gravatar_id": "", "url": "https://api.github.com/users/randubin", "html_url": "https://github.com/randubin", "followers_url": "https://api.github.com/users/randubin/followers", "following_url": "https://api.github.com/users/randubin/following{/other_user}", "gists_url": "https://api.github.com/users/randubin/gists{/gist_id}", "starred_url": "https://api.github.com/users/randubin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/randubin/subscriptions", "organizations_url": "https://api.github.com/users/randubin/orgs", "repos_url": "https://api.github.com/users/randubin/repos", "events_url": "https://api.github.com/users/randubin/events{/privacy}", "received_events_url": "https://api.github.com/users/randubin/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
[ "Which version of pyarrow do you have ? Could you try to update pyarrow and try again ?", "Thanks for the fast response. I have the latest version '2.0.0' (I tried to update)\r\nI am working with Python 3.8.5", "I think that the issue is similar to this one:https://issues.apache.org/jira/browse/ARROW-9612\r\nThe problem is in arrow when the column data contains long strings.\r\nAny ideas on how to bypass this?", "We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py).\r\n\r\n\r\nIn the meantime you can specify yourself the `ReadOptions` config like this:\r\n```python\r\nimport pyarrow.csv as pac # PyArrow is installed with `datasets`\r\n\r\nread_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case\r\ndataset = load_dataset('csv', data_files=files, read_options=read_options)\r\n```\r\n", "This did help to load the data. But the problem now is that I get:\r\nArrowInvalid: CSV parse error: Expected 5 columns, got 187\r\n\r\nIt seems that this change the parsing so I changed the table to tab-separated and tried to load it directly from pyarrow\r\nBut I got a similar error, again it loaded fine in pandas so I am not sure what to do.\r\n\r\n\r\n\r\n", "Got almost the same error loading a ~5GB TSV file, first got the same error as OP, then tried giving it my own ReadOptions and also got the same CSV parse error.", "> We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py).\r\n> \r\n> In the meantime you can specify yourself the `ReadOptions` config like this:\r\n> \r\n> ```python\r\n> import pyarrow.csv as pac # PyArrow is installed with `datasets`\r\n> \r\n> read_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case\r\n> dataset = load_dataset('csv', data_files=files, read_options=read_options)\r\n> ```\r\n\r\nThis did not work for me, I got\r\n`TypeError: __init__() got an unexpected keyword argument 'read_options'`", "Hi ! Yes because of issues with PyArrow's CSV reader we switched to using the Pandas CSV reader. In particular the `read_options` argument is not supported anymore, but you can pass any parameter of Pandas' `read_csv` function (see the list here in [Pandas documentation](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html))" ]
1,605,036,940,000
1,637,773,159,000
1,605,807,338,000
NONE
null
null
Hi all, I am trying to load a custom dataset, starting with a single file to make sure it loads correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to cache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with the 'text' parser I can see all the data, but it is not what I need. There is no issue reading the file with pandas. Any idea what could be the issue? When I run a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas?
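A hedged sketch of the two workarounds mentioned in the comments. The file path is a placeholder, and the `read_options` argument only applies to the older PyArrow-based csv script; later releases switched to the pandas CSV reader and accept `pandas.read_csv` keyword arguments instead.

```python
import pandas as pd
import pyarrow.csv as pac
from datasets import Dataset, load_dataset

files = ["my_large_file.csv"]  # placeholder path

# Workaround 1 (older PyArrow-based csv script): enlarge the Arrow block size so
# rows with very long string fields fit inside a single block.
read_options = pac.ReadOptions(block_size=1 << 30)  # 1 GiB, tune for the file
dataset = load_dataset("csv", data_files=files, read_options=read_options)

# Workaround 2: since pandas reads the file fine, load it with pandas and convert.
df = pd.read_csv(files[0])
dataset = Dataset.from_pandas(df)
```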
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/836/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/835/comments
https://api.github.com/repos/huggingface/datasets/issues/835/events
https://github.com/huggingface/datasets/issues/835
740,102,210
MDU6SXNzdWU3NDAxMDIyMTA=
835
Wikipedia postprocessing
{ "login": "bminixhofer", "id": 13353204, "node_id": "MDQ6VXNlcjEzMzUzMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/13353204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bminixhofer", "html_url": "https://github.com/bminixhofer", "followers_url": "https://api.github.com/users/bminixhofer/followers", "following_url": "https://api.github.com/users/bminixhofer/following{/other_user}", "gists_url": "https://api.github.com/users/bminixhofer/gists{/gist_id}", "starred_url": "https://api.github.com/users/bminixhofer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bminixhofer/subscriptions", "organizations_url": "https://api.github.com/users/bminixhofer/orgs", "repos_url": "https://api.github.com/users/bminixhofer/repos", "events_url": "https://api.github.com/users/bminixhofer/events{/privacy}", "received_events_url": "https://api.github.com/users/bminixhofer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @bminixhofer ! Parsing WikiMedia is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell) which is pretty good but not perfect.\r\n\r\nAs an alternative, you can also use the Wiki40b dataset which was pre-processed using an un-released Google internal tool", "Ok, thanks! I'll try the Wiki40b dataset.", "If anyone else is concerned about this, `wiki40b` does indeed seem very well cleaned." ]
1,605,029,198,000
1,605,032,600,000
1,605,030,561,000
NONE
null
null
Hi, thanks for this library! Running this code: ```py import datasets wikipedia = datasets.load_dataset("wikipedia", "20200501.de") print(wikipedia['train']['text'][0]) ``` I get: ``` mini|Ricardo Flores Magón mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfirio Diaz, Ausschnitt des Gemälde „Tierra y Libertad“ von Idelfonso Carrara (?) von 1930. Ricardo Flores Magón (* 16. September 1874 in San Antonio Eloxochitlán im mexikanischen Bundesstaat Oaxaca; † 22. November 1922 im Bundesgefängnis Leavenworth im US-amerikanischen Bundesstaat Kansas) war als Journalist, Gewerkschafter und Literat ein führender anarchistischer Theoretiker und Aktivist, der die revolutionäre mexikanische Bewegung radikal beeinflusste. Magón war Gründer der Partido Liberal Mexicano und Mitglied der Industrial Workers of the World. Politische Biografie Journalistisch und politisch kämpfte er und sein Bruder sehr kompromisslos gegen die Diktatur Porfirio Diaz. Philosophisch und politisch orientiert an radikal anarchistischen Idealen und den Erfahrungen seiner indigenen Vorfahren bei der gemeinschaftlichen Bewirtschaftung des Gemeindelandes, machte er die Forderung „Land und Freiheit“ (Tierra y Libertad) populär. Besonders Francisco Villa und Emiliano Zapata griffen die Forderung Land und Freiheit auf. Seine Philosophie hatte großen Einfluss auf die Landarbeiter. 1904 floh er in die USA und gründete 1906 die Partido Liberal Mexicano. Im Exil lernte er u. a. Emma Goldman kennen. Er verbrachte die meiste Zeit seines Lebens in Gefängnissen und im Exil und wurde 1918 in den USA wegen „Behinderung der Kriegsanstrengungen“ zu zwanzig Jahren Gefängnis verurteilt. Zu seinem Tod gibt es drei verschiedene Theorien. Offiziell starb er an Herzversagen. Librado Rivera, der die Leiche mit eigenen Augen gesehen hat, geht davon aus, dass Magón von einem Mitgefangenen erdrosselt wurde. Die staatstreue Gewerkschaftszeitung CROM veröffentlichte 1923 einen Beitrag, nachdem Magón von einem Gefängniswärter erschlagen wurde. mini|Die Brüder Ricardo (links) und Enrique Flores Magón (rechts) vor dem Los Angeles County Jail, 1917 [...] ``` so some Markup like `mini|` is still left. Should I run another parser on this text before feeding it to an ML model or is this a known imperfection of parsing Wiki markup? Apologies if this has been asked before.
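Two possible mitigations, sketched under the assumption that the leftover markup always appears as thumbnail captions on their own lines (as in the example above): filter those lines out with a small heuristic, or switch to the pre-cleaned wiki40b dataset mentioned in the comments.

```python
import re
from datasets import load_dataset

def strip_thumbnail_lines(text: str) -> str:
    # Drop caption lines left over from image markup, e.g. "mini|Ricardo Flores Magón"
    # ("mini"/"miniatur" in German wikis, "thumb" in English ones).
    return "\n".join(
        line for line in text.split("\n")
        if not re.match(r"^(mini|miniatur|thumb)\|", line)
    )

wikipedia = load_dataset("wikipedia", "20200501.de", split="train")
cleaned = wikipedia.map(lambda ex: {"text": strip_thumbnail_lines(ex["text"])})

# Alternative: wiki40b was cleaned with a separate tool and has much less residue.
# wiki40b = load_dataset("wiki40b", "de")
```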
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/835/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/834/comments
https://api.github.com/repos/huggingface/datasets/issues/834/events
https://github.com/huggingface/datasets/issues/834
740,082,890
MDU6SXNzdWU3NDAwODI4OTA=
834
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "Hey @yjernite. This is a very interesting dataset. Would love to work on adding it but I see that the link to the data is to a gdrive folder. Can I just confirm wether dlmanager can handle gdrive urls or would this have to be a manual dl?", "Hi @KMFODA ! A version of WikiLingua is actually already accessible in the [GEM dataset](https://huggingface.co/datasets/gem)\r\n\r\nYou can use it for example to load the French to English translation with:\r\n```python\r\nfrom datasets import load_dataset\r\nwikilingua = load_dataset(\"gem\", \"wiki_lingua_french_fr\")\r\n```\r\n\r\nClosed by https://github.com/huggingface/datasets/pull/1807" ]
1,605,027,643,000
1,618,488,249,000
1,618,488,098,000
MEMBER
null
null
## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** https://arxiv.org/pdf/2010.03093.pdf - **Data:** https://github.com/esdurmus/Wikilingua - **Motivation:** Included in the GEM shared task. Multilingual. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/834/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/833/comments
https://api.github.com/repos/huggingface/datasets/issues/833/events
https://github.com/huggingface/datasets/issues/833
740,079,692
MDU6SXNzdWU3NDAwNzk2OTI=
833
[GEM] add ASSET text simplification dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[]
1,605,027,390,000
1,607,002,695,000
1,607,002,695,000
MEMBER
null
null
## Adding a Dataset - **Name:** ASSET - **Description:** ASSET is a crowdsourced multi-reference corpus for assessing sentence simplification in English where each simplification was produced by executing several rewriting transformations. - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.424.pdf - **Data:** https://github.com/facebookresearch/asset - **Motivation:** Included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/833/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/832/comments
https://api.github.com/repos/huggingface/datasets/issues/832/events
https://github.com/huggingface/datasets/issues/832
740,077,228
MDU6SXNzdWU3NDAwNzcyMjg=
832
[GEM] add WikiAuto text simplification dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[]
1,605,027,203,000
1,607,002,688,000
1,607,002,688,000
MEMBER
null
null
## Adding a Dataset - **Name:** WikiAuto - **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing. - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.709.pdf - **Data:** https://github.com/chaojiang06/wiki-auto - **Motivation:** Included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/832/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/831/comments
https://api.github.com/repos/huggingface/datasets/issues/831/events
https://github.com/huggingface/datasets/issues/831
740,071,697
MDU6SXNzdWU3NDAwNzE2OTc=
831
[GEM] Add WebNLG dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[]
1,605,026,808,000
1,607,002,681,000
1,607,002,681,000
MEMBER
null
null
## Adding a Dataset - **Name:** WebNLG - **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian - **Paper:** https://www.aclweb.org/anthology/P17-1017.pdf - **Data:** https://webnlg-challenge.loria.fr/download/ - **Motivation:** Included in the GEM shared task, multilingual Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/831/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/830/comments
https://api.github.com/repos/huggingface/datasets/issues/830/events
https://github.com/huggingface/datasets/issues/830
740,065,376
MDU6SXNzdWU3NDAwNjUzNzY=
830
[GEM] add ToTTo Table-to-text dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "closed via #1098 " ]
1,605,026,314,000
1,607,605,562,000
1,607,605,561,000
MEMBER
null
null
## Adding a Dataset - **Name:** ToTTo - **Description:** ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. - **Paper:** https://arxiv.org/abs/2004.14373 - **Data:** https://github.com/google-research-datasets/totto - **Motivation:** Included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/830/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/829/comments
https://api.github.com/repos/huggingface/datasets/issues/829/events
https://github.com/huggingface/datasets/issues/829
740,061,699
MDU6SXNzdWU3NDAwNjE2OTk=
829
[GEM] add Schema-Guided Dialogue
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[]
1,605,026,024,000
1,607,002,670,000
1,607,002,670,000
MEMBER
null
null
## Adding a Dataset - **Name:** The Schema-Guided Dialogue Dataset - **Description:** The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains, ranging from banks and events to media, calendar, travel, and weather. - **Paper:** https://arxiv.org/pdf/2002.01359.pdf https://arxiv.org/pdf/2004.15006.pdf - **Data:** https://github.com/google-research-datasets/dstc8-schema-guided-dialogue - **Motivation:** Included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/829/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/827/comments
https://api.github.com/repos/huggingface/datasets/issues/827/events
https://github.com/huggingface/datasets/issues/827
739,983,024
MDU6SXNzdWU3Mzk5ODMwMjQ=
827
[GEM] MultiWOZ dialogue dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "Hi @yjernite can I help in adding this dataset? \r\n\r\nI am excited about this because this will be my first contribution to the datasets library as well as to hugginface.", "Resolved via https://github.com/huggingface/datasets/pull/979" ]
1,605,020,270,000
1,664,973,073,000
1,664,973,073,000
MEMBER
null
null
## Adding a Dataset - **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz) - **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user side. - **Paper:** https://arxiv.org/pdf/2007.12720.pdf - **Data:** https://github.com/budzianowski/multiwoz - **Motivation:** Will likely be part of the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/827/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/826/comments
https://api.github.com/repos/huggingface/datasets/issues/826/events
https://github.com/huggingface/datasets/issues/826
739,976,716
MDU6SXNzdWU3Mzk5NzY3MTY=
826
[GEM] Add E2E dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[]
1,605,019,840,000
1,607,002,677,000
1,607,002,677,000
MEMBER
null
null
## Adding a Dataset - **Name:** E2E NLG dataset (for end-to-end natural language generation) - **Description:** a dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain. The dataset consists of 5,751 dialogue-act Meaning Representations (structured data) and 8.1 reference free-text utterances per dialogue act on average. - **Paper:** https://arxiv.org/pdf/1706.09254.pdf https://arxiv.org/abs/1901.07931 - **Data:** http://www.macs.hw.ac.uk/InteractionLab/E2E/#data - **Motivation:** This dataset will likely be included in the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/826/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/824/comments
https://api.github.com/repos/huggingface/datasets/issues/824/events
https://github.com/huggingface/datasets/issues/824
739,896,526
MDU6SXNzdWU3Mzk4OTY1MjY=
824
Discussion using datasets in offline mode
{ "login": "mandubian", "id": 77193, "node_id": "MDQ6VXNlcjc3MTkz", "avatar_url": "https://avatars.githubusercontent.com/u/77193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mandubian", "html_url": "https://github.com/mandubian", "followers_url": "https://api.github.com/users/mandubian/followers", "following_url": "https://api.github.com/users/mandubian/following{/other_user}", "gists_url": "https://api.github.com/users/mandubian/gists{/gist_id}", "starred_url": "https://api.github.com/users/mandubian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mandubian/subscriptions", "organizations_url": "https://api.github.com/users/mandubian/orgs", "repos_url": "https://api.github.com/users/mandubian/repos", "events_url": "https://api.github.com/users/mandubian/events{/privacy}", "received_events_url": "https://api.github.com/users/mandubian/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
closed
false
null
[]
[ "No comments ?", "I think it would be very cool. I'm currently working on a cluster from Compute Canada, and I have internet access only when I'm not in the nodes where I run the scripts. So I was expecting to be able to use the wmt14 dataset until I realized I needed internet connection even if I downloaded the data already. I'm going to try option 2 you mention for now though! Thanks ;)", "Requiring online connection is a deal breaker in some cases unfortunately so it'd be great if offline mode is added similar to how `transformers` loads models offline fine.\r\n\r\n@mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on how that should look like?", "here is my way to load a dataset offline, but it **requires** an online machine\r\n1. (online machine)\r\n```\r\nimport datasets\r\ndata = datasets.load_dataset(...)\r\ndata.save_to_disk(/YOUR/DATASET/DIR)\r\n```\r\n2. copy the dir from online to the offline machine\r\n3. (offline machine)\r\n```\r\nimport datasets\r\ndata = datasets.load_from_disk(/SAVED/DATA/DIR)\r\n```\r\n\r\nHTH.", "> here is my way to load a dataset offline, but it **requires** an online machine\n> \n> 1. (online machine)\n> \n> ```\n> \n> import datasets\n> \n> data = datasets.load_dataset(...)\n> \n> data.save_to_disk(/YOUR/DATASET/DIR)\n> \n> ```\n> \n> 2. copy the dir from online to the offline machine\n> \n> 3. (offline machine)\n> \n> ```\n> \n> import datasets\n> \n> data = datasets.load_from_disk(/SAVED/DATA/DIR)\n> \n> ```\n> \n> \n> \n> HTH.\n\n", "I opened a PR that allows to reload modules that have already been loaded once even if there's no internet.\r\n\r\nLet me know if you know other ways that can make the offline mode experience better. I'd be happy to add them :) \r\n\r\nI already note the \"freeze\" modules option, to prevent local modules updates. It would be a cool feature.\r\n\r\n----------\r\n\r\n> @mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on how that should look like?\r\n\r\nIndeed `load_dataset` allows to load remote dataset script (squad, glue, etc.) but also you own local ones.\r\nFor example if you have a dataset script at `./my_dataset/my_dataset.py` then you can do\r\n```python\r\nload_dataset(\"./my_dataset\")\r\n```\r\nand the dataset script will generate your dataset once and for all.\r\n\r\n----------\r\n\r\nAbout I'm looking into having `csv`, `json`, `text`, `pandas` dataset builders already included in the `datasets` package, so that they are available offline by default, as opposed to the other datasets that require the script to be downloaded.\r\ncf #1724 ", "The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\r\nYou can now use them offline\r\n```python\r\ndatasets = load_dataset('text', data_files=data_files)\r\n```\r\n\r\nWe'll do a new release soon", "Already fixed by:\r\n- #1726" ]
1,605,013,851,000
1,644,921,156,000
1,644,921,156,000
NONE
null
null
`datasets.load_dataset("csv", ...)` breaks if you have no connection (there is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I am creating this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open the discussion: - if you want to prepare your code/datasets on your machine (having an internet connection) but run it on another, offline machine (not having an internet connection), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run the same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and executing it on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, but downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once, so that you can review it if you want and then be sure you use this one everywhere and not a version downloaded from the internet. WDYT? (thanks)
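A minimal sketch of the two workarounds that already exist, assuming a directory shared between the online and offline machines; the paths are placeholders.

```python
from datasets import load_dataset, load_from_disk

# On the machine with internet access: materialize the dataset once.
squad = load_dataset("squad")
squad.save_to_disk("/shared/datasets/squad")      # placeholder path

# On the offline machine: reload it without any network call.
squad = load_from_disk("/shared/datasets/squad")

# A local dataset script also avoids downloading code at run time:
# dataset = load_dataset("./my_dataset/my_dataset.py")
```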
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/824/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/824/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/823
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/823/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/823/comments
https://api.github.com/repos/huggingface/datasets/issues/823/events
https://github.com/huggingface/datasets/issues/823
739,815,763
MDU6SXNzdWU3Mzk4MTU3NjM=
823
How batch processing works in datasets
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "Hi I don’t think this is a request for a dataset like you labeled it.\r\n\r\nI also think this would be better suited for the forum at https://discuss.huggingface.co. we try to keep the issue for the repo for bug reports and new features/dataset requests and have usage questions discussed on the forum. Thanks.", "Hi Thomas,\nwhat I do not get from documentation is that why when you set batched=True,\nthis is processed in batch, while data is not divided to batched\nbeforehand, basically this is a question on the documentation and I do not\nget the batched=True, but sure, if you think this is more appropriate in\nforum I will post it there.\nthanks\nBest\nRabeeh\n\nOn Tue, Nov 10, 2020 at 12:21 PM Thomas Wolf <notifications@github.com>\nwrote:\n\n> Hi I don’t think this is a request for a dataset like you labeled it.\n>\n> I also think this would be better suited for the forum at\n> https://discuss.huggingface.co. we try to keep the issue for the repo for\n> bug reports and new features/dataset requests and have usage questions\n> discussed on the forum. Thanks.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/823#issuecomment-724639476>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHH4FIPFHVVUHANAE4F3SPEO2JANCNFSM4TQQVEXQ>\n> .\n>\n", "Yes the forum is perfect for that. You can post in the `datasets` section.\r\nThanks a lot!" ]
1,605,006,677,000
1,605,013,870,000
1,605,013,869,000
NONE
null
null
Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented max_source_length: str = NotImplemented max_target_length: str = NotImplemented # TODO: should not be a task item, but cannot see other ways. tpu_num_cores: int = None # The arguments set are for all tasks and needs to be kept common. def __init__(self, config): self.max_source_length = config['max_source_length'] self.max_target_length = config['max_target_length'] self.tokenizer = config['tokenizer'] self.tpu_num_cores = config['tpu_num_cores'] def _encode(self, batch) -> Dict[str, torch.Tensor]: batch_encoding = self.tokenizer.prepare_seq2seq_batch( [x["src_texts"] for x in batch], tgt_texts=[x["tgt_texts"] for x in batch], max_length=self.max_source_length, max_target_length=self.max_target_length, padding="max_length" if self.tpu_num_cores is not None else "longest", # TPU hack return_tensors="pt" ) return batch_encoding.data def data_split(self, split): return self.split_to_data_split[split] def get_dataset(self, split, n_obs=None): split = self.data_split(split) if n_obs is not None: split = split+"[:{}]".format(n_obs) dataset = load_dataset(self.task_name, split=split) dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names) dataset = dataset.map(lambda batch: self._encode(batch), batched=True) dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label']) return dataset ``` I call it like `AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train) ` This gives the following error, to me because the data inside the dataset = dataset.map(lambda batch: self._encode(batch), batched=True) is not processed in batch, could you tell me how I can process dataset in batch inside my function? thanks File "finetune_multitask_trainer.py", line 192, in main if training_args.do_train else None File "finetune_multitask_trainer.py", line 191, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda> dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode [x["src_texts"] for x in batch], File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp> [x["src_texts"] for x in batch], TypeError: string indices must be integers
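A short, self-contained sketch of the point the traceback hinges on: with `batched=True`, the mapped function receives a dict of lists (one entry per column), not a list of row dicts, so the columns should be indexed directly instead of iterating over `batch` as rows. The dataset and derived column below are placeholders.

```python
from datasets import load_dataset

dataset = load_dataset("squad", split="train[:100]")

def encode(batch):
    # batch is {"question": [...], "context": [...], ...} -- one list per column.
    return {"question_len": [len(q) for q in batch["question"]]}

dataset = dataset.map(encode, batched=True, batch_size=32)
# In the snippet above, _encode would index batch["src_texts"] and batch["tgt_texts"]
# directly rather than doing [x["src_texts"] for x in batch].
```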
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/823/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/822
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/822/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/822/comments
https://api.github.com/repos/huggingface/datasets/issues/822/events
https://github.com/huggingface/datasets/issues/822
739,579,314
MDU6SXNzdWU3Mzk1NzkzMTQ=
822
datasets freezes
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
[ "Pytorch is unable to convert strings to tensors unfortunately.\r\nYou can use `set_format(type=\"torch\")` on columns that can be converted to tensors, such as token ids.\r\n\r\nThis makes me think that we should probably raise an error or at least a warning when one tries to create pytorch tensors out of text columns" ]
1,604,985,019,000
1,605,223,383,000
null
NONE
null
null
Hi, I want to load these two datasets and convert them to Dataset format in torch, but the code freezes for me. Could you have a look please? Thanks. dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) dataset2 = load_dataset("imdb", split="train[:10]") dataset2 = dataset2.set_format(type="torch", columns=["text", "label"]) print(len(dataset1))
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/822/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/821/comments
https://api.github.com/repos/huggingface/datasets/issues/821/events
https://github.com/huggingface/datasets/issues/821
739,506,859
MDU6SXNzdWU3Mzk1MDY4NTk=
821
`kor_nli` dataset isn't being loaded properly
{ "login": "sackoh", "id": 30492059, "node_id": "MDQ6VXNlcjMwNDkyMDU5", "avatar_url": "https://avatars.githubusercontent.com/u/30492059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sackoh", "html_url": "https://github.com/sackoh", "followers_url": "https://api.github.com/users/sackoh/followers", "following_url": "https://api.github.com/users/sackoh/following{/other_user}", "gists_url": "https://api.github.com/users/sackoh/gists{/gist_id}", "starred_url": "https://api.github.com/users/sackoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sackoh/subscriptions", "organizations_url": "https://api.github.com/users/sackoh/orgs", "repos_url": "https://api.github.com/users/sackoh/repos", "events_url": "https://api.github.com/users/sackoh/events{/privacy}", "received_events_url": "https://api.github.com/users/sackoh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604,973,852,000
1,605,535,152,000
1,605,535,152,000
NONE
null
null
There are two issues from `kor_nli` dataset 1. csv.DictReader failed to split features by tab - Should not exist `None` value in label feature, but there it is. ```python kor_nli_train['train'].unique('gold_label') # ['neutral', 'entailment', 'contradiction', None] ``` - I found a reason why there is `None` values in label feature as following code ```python from datasets import load_dataset kor_nli_train = load_dataset('kor_nli', 'multi_nli') for idx, example in enumerate(kor_nli_train['train']): if example['gold_label'] is None: print(idx, example) break # 16835 {'gold_label': None, 'sentence1': '그는 전쟁 전에 가벼운 벅스킨 암말을 가지고 달리기 위해 우유처럼 하얀 스터드를 넣었다.\t전쟁 전에 다인종 여성들과 함께 있는 백인 남자가 있었다.\tentailment\n슬림은 재빨리 옷을 입었고, 순간적으로 미지근한 물을 뿌릴 수 있는 아침 세탁물을 기꺼이 가두었다.\t슬림은 직장에 늦었다.\tneutral\n뉴욕에서 그 식사를 해봤는데, 거기서 소고기의 멋진 소고기 부분을 요리하고 바베큐로 만든 널빤지 같은 걸 가져왔는데, 정말 대단해.\t그들이 거기서 요리하는 쇠고기는 역겹다. 거기서 절대 먹지 마라.\tcontradiction\n판매원의 죽음에서 브라이언 데네히... 크리스 켈리\t크리스 켈리는 세일즈맨의 죽음을 언급하지 않는다.\tcontradiction\n그러는 동안 요리사는 그냥 화가 났어.\t스튜가 끓는 동안 요리사는 화가 났다.\tneutral\n마지막 로마의 맹공격 전날 밤, 900명 이상의 유대인 수비수들이 로마인들에게 그들을 사로잡는 승리를 주기 보다는 대량 자살을 저질렀다.\t로마인들이 그들의 포획에 승리하도록 내버려두기 보다는 900명의 유대인 수비수들이 자살했다.\tentailment\n앞으로 발사하라.\t발사.\tneutral\n그리고 당신은 우리 땅이 에이커에 있다는 것을 알고 있다. 우리 사람들은 어떤 것이 얼마나 많은지 이해하지 못할 것이다.\t모든 사람들은 우리의 측정 시스템이 어떻게 작동하는지 알고 이해합니다.\tcontradiction\n주미게스\tJumiyges는 도시의 이름이다.\tneutral\n사람은 자기 민족을 돌봐야 한다...\t사람은 조국에 공감해야 한다.\tentailment\n또한 PDD 63은 정부와 업계가 컴퓨터 기반 공격에 대해 경고하고 방어할 준비를 더 잘할 수 있도록 시스템 취약성, 위협, 침입 및 이상에 대한 정보를 공유하는 메커니즘을 수립하는 것이 중요하다는 것을 인식했습니다.\t정보 전송 프로토콜을 만드는 것은 중요하다.\tentailment\n카페 링 피아자 델라 레퓌블리카 바로 남쪽에는 피렌체가 알려진 짚 제품 때문에 한때 스트로 마켓이라고 불렸던 16세기 로지아인 메르카토 누오보(Mercato Nuovo)가 있다.\t피아자 델라 레퓌블리카에는 카페가 많이 있다.\tentailment\n우리가 여기 있는 한 트린판이 뭘 주웠는지 살펴봐야겠어\t우리는 트린판이 무엇을 주웠는지 보는 데 시간을 낭비하지 않을 것이다.\tcontradiction\n그러나 켈트족의 문화적 기반을 가진 아일랜드 교회는 유럽의 신흥 기독교 세계와는 다르게 발전했고 결국 로마와 중앙집권적 행정으로 대체되었다.\t아일랜드 교회에는 켈트족의 기지가 있었다.\tentailment\n글쎄, 넌 선택의 여지가 없어\t글쎄, 너에겐 많은 선택권이 있어.\tcontradiction\n사실, 공식적인 보장은 없다.\t내가 산 물건에 대한 보증이 없었다.\tneutral\n덜 활기차긴 하지만, 안시와 르 부르젯의 사랑스러운 호수에서도 삶은 똑같이 상쾌하다.\t안시와 르 부르겟에서는 호수에서의 활동이 서두르고 바쁜 분위기를 연출한다.\tcontradiction\n그의 여행 소식이 이미 퍼졌다면 공격 소식도 퍼졌을 테지만 마을에서는 전혀 공황의 기미가 보이지 않았다.\t그는 왜 마을이 당황하지 않았는지 알 수 없었다.\tneutral\n과거에는 죽음의 위협이 토지의 판매를 막는 데 거의 도움이 되지 않았다.\t토지 판매는 어떠한 위협도 교환하지 않고 이루어진다.\tcontradiction\n어느 시점에 이르러 나는 지금 다가오는 새로운 것들과 나오는 많은 새로운 것들이 내가 늙어가고 있다고 말하는 시대로 접어들고 있다.\t나는 여전히 내가 보는 모든 새로운 것을 사랑한다.\tcontradiction\n뉴스위크는 물리학자들이 경기장 행사에서 고속도로의 자동차 교통과 보행자 교통을 개선하기 위해 새떼의 움직임을 연구하고 있다고 말한다.\t고속도로의 자동차 교통 흐름을 개선하는 것은 물리학자들이 새떼를 연구하는 이유 중 하나이다.\tentailment\n얼마나 다른가? 그는 잠시 말을 멈추었다가 말을 이었다.\t그는 그 소녀가 어디에 있는지 알고 싶었다.\tentailment\n글쎄, 그에게 너무 많은 것을 주지마.\t그는 훨씬 더 많은 것을 요구할 것이다.\tneutral\n아무리 그의 창작물이 완벽해 보인다고 해도, 그들을 믿는 것은 아마도 좋은 생각이 아닐 것이다.\'\t도자기를 잘 만든다고 해서 누군가를 믿는 것은 아마 좋지 않을 것이다.\tneutral\n버스틀링 그란 비아(Bustling Gran Via)는 호텔, 상점, 극장, 나이트클럽, 카페 등이 어우러져 산책과 창가를 볼 수 있다.\tGran Via는 호텔, 상점, 극장, 나이트클럽, 카페의 번화한 조합이다.\tentailment\n정부 인쇄소\t그 사무실은 워싱턴에 위치해 있다.\tneutral\n실제 문화 전쟁이 어디 있는지 알고 싶다면 학원을 잊어버리고 실리콘 밸리와 레드몬드를 생각해 보라.\t실제 문화 전쟁은 레드몬드에서 일어난다.\tentailment\n그리고 페니실린을 주지 않기 위해 침대 위에 올려놨어\t그녀의 방에는 페니실린이 없다는 징후가 전혀 없었다.\tcontradiction\nL.A.의 야외 시장을 활보하는 것은 맛있고 저렴한 그루브를 잡고, 끝이 없는 햇빛을 즐기고, 신선한 농산물, 꽃, 향, 그리고 가젯 갈로어를 구입하면서 현지인들과 어울릴 수 있는 훌륭한 방법이다.\tLA의 야외 시장을 돌아다니는 것은 시간 낭비다.\tcontradiction\n안나는 밖으로 나와 안도의 한숨을 내쉬었다. 
단 한 번, 그리고 마리후아쉬 맛의 술로 끝내자는 결심이 뒤섞여 있었다.\t안나는 안심하고 마리후아쉬 맛의 술을 다 마시기로 결심했다.\tentailment\n5 월에 Vajpayee는 핵 실험의 성공적인 완료를 발표했는데, 인도인들은 주권의 표시로 선전했지만 이웃 국가와 서구와의 인도 관계를 복잡하게 만들 수 있습니다.\t인도는 성공적인 핵실험을 한 적이 없다.\tcontradiction\n플라노 원에서 보통 얼마나 많은 것을 가지고 있는가?\t저 사람들 중에 플라노 원에 가본 사람 있어?\tcontradiction\n그것의 전체적인 형태의 우아함은 운하 건너편에서 가장 잘 볼 수 있다. 왜냐하면, 로마에 있는 성 베드로처럼, 돔은 길쭉한 본당 뒤로 더 가까운 곳에 사라지기 때문이다.\t성 베드로의 길쭉한 본당은 돔을 가린다.\tentailment\n당신은 수틴이 살에 강박적인 기쁨을 가지고 누드를 그릴 것이라고 생각하겠지만, 아니오; 그는 그의 모든 경력에서 단 한 점만을 그렸고, 그것은 사소한 그림이다.\t그는 그것이 그를 불편하게 만들었기 때문에 하나만 그렸다.\tneutral\n이 인상적인 풍경은 원래 나포 레온이 루브르 박물관의 침실에서 볼 수 있도록 계획되었는데, 그 당시 궁전이었습니다.\t나폴레옹은 그의 모든 궁전에 있는 그의 침실에서 보는 경치에 많은 관심을 가졌다.\tneutral\n그는 우리에게 문 열쇠를 건네주고는 급히 떠났다.\t그는 긴장해서 우리에게 열쇠를 빨리 주었다.\tneutral\n위원회는 또한 최종 규칙을 OMB에 제출했다.\t위원회는 또한 이 규칙을 다른 그룹에 제출했지만 최종 규칙은 OMB가 평가하기 위한 것이 었습니다.\tneutral\n정원가게에 가보면 올리비아의 복제 화합물 같은 유쾌한 이름을 가진 제품들을 찾을 수 있을 겁니다.이 제품이 뿌리를 내리도록 돕기 위해 촬영의 절단된 끝에 덩크슛을 하는 호르몬의 혼합물이죠.\t정원 가꾸기 가게의 제품들은 종종 그들의 목적을 설명하기 위해 기술적으로나 과학적으로 파생된 이름(올리비아의 복제 화합물처럼)을 부여받는다.\tneutral\n스타는 스틸 자신이나 왜 그녀의 이야기를 바꾸었는지에 훨씬 더 관심이 있을 것이다.\t스틸의 이야기는 조금도 변하지 않았다.\tcontradiction\n남편과의 마지막 대결로 맥티어는 노라의 변신을 너무나 능숙하게 예고해 왔기 때문에, 그녀에게는 당황스러울 정도로 갑작스러운 것처럼 보이지만, 우리에게는 감정적으로 불가피해 보인다.\t노라의 변신은 분명하고 필연적이었다.\tcontradiction\n이집트 최남단 도시인 아스완은 오랜 역사를 통해 중요한 역할을 해왔다.\t아스완은 이집트 국경 바로 위에 위치해 있습니다.\tneutral\n그러나 훨씬 더 우아한 건축적 터치는 신성한 춤인 Bharatanatyam에서 수행된 108 가지 기본 포즈를 시바 패널에서 볼 수 있습니다.\t패널에 대한 시바의 묘사는 일반적인 모티브다.\tneutral\n호화롭게 심어진 계단식 정원은 이탈리아 형식의 가장 훌륭한 앙상블 중 하나입니다.\t아름다운 정원과 희귀한 꽃꽂이 모두 이탈리아의 형식적인 스타일을 보여준다.\tneutral\n음, 그랬으면 좋았을 텐데\t나는 그것을 다르게 할 기회를 몹시 갈망한다.\tentailment\n폐허가 된 성의 기슭에 자리잡고 있는 예쁜 중세 도시 케이서스버그는 노벨 평화상 수상자 알버트 슈바이처(1875년)의 출생지로 널리 알려져 있다.\t알버트 슈바이처는 둘 다 케이서스버그 마을에 있었다.\tentailment\n고감도는 문제가 있는 대부분의 환자들이 발견될 것을 보장한다.\t장비 민감도는 문제 탐지와 관련이 없습니다.\tcontradiction\n오늘은 확실히 반바지 같은 날이었어\t오늘 사무실에 있는 모든 사람들은 반바지를 입었다.\tneutral\n못생긴 턱시도를 입고.\t그것은 분홍색과 주황색입니다.\tneutral\n이주 노동 수용소 오 마이 갓 그들은 판지 상자에 산다.\t노동 수용소에는 판지 상자에 사는 이주 노동자들의 사진이 있다.\tneutral\n그래, 그가 전 세계를 여행한 후에 그런 거야\t그것은 사람들의 세계 여행을 따른다.\tentailment\n건너편에 크고 큰 참나무 몇 그루가 있다.\t우리는 여기 오크나 어떤 종류의 미국 나무도 없다.\tcontradiction\nFort-de-France에서 출발하는 자동차나 여객선으로, 당신은 안세 ? 바다 포도가 그늘을 제공하는 쾌적한 갈색 모래 해변과 피크닉 테이블, 어린이 미끄럼틀, 식당이 있는 안느에 도착할 수 있다.\t프랑스 요새에서 자동차나 페리를 타고 안세로 갈 수 있다.\tentailment\n그리고 그것은 앨라배마주가 예상했던 대로 예산에서 50만 달러를 삭감하지 않을 것이라는 것을 의미한다.\t앨라배마 주는 예산 삭감을 하지 않았다. 왜냐하면 그렇게 하는 것에 대한 초기 정당성이 정밀 조사에 맞서지 않았기 때문이다.\tneutral\n알았어 먼저 어 .. 어 .. 노인이나 가족을 요양원에 보내는 것에 대해 어떻게 생각하니?\t가족을 요양원에 보내서 사는 것에 대해 어떻게 생각하는지 알 필요가 없다.\tcontradiction\n나머지는 너에게 달렸어.\t나머지는 너에게 달렸지만 시간이 많지 않다.\tneutral\n음-흠, 3월에 햇볕에 타는 것에 대해 걱정하면 안 된다는 것을 알고 있는 3월이야.\t3월은 그렇게 덥지 않다.\tneutral\n그리고 어, 그런 작은 것들로 다시 시작해봐. 아직 훨씬 싸. 어, 그 특별한 모델 차는 150달러야.\t그 모형차는 4천 달러가 든다.\tcontradiction\n내일 돌아가야 한다면, 칼이 말했다.\t돌아갈 수 없어. 오늘은 안 돼. 내일은 안 돼. 절대 안 돼." 칼이 말했다.', 'sentence2': 'contradiction'} ``` 2. (Optional) Preferred to change the name of the features for the compatibility with `run_glue.py` in 🤗 Transformers - `kor_nli` dataset has same data structure of multi_nli, xnli - Changing the name of features and the feature type of 'gold_label' to ClassLabel might be helpful ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, features=datasets.Features( { "premise": datasets.Value("string"), "hypothesis": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["entailment", "neutral", "contradiction"]), } ), ``` If you don't mind, I would like to fix this. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/821/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/817/comments
https://api.github.com/repos/huggingface/datasets/issues/817/events
https://github.com/huggingface/datasets/issues/817
739,145,369
MDU6SXNzdWU3MzkxNDUzNjk=
817
Add MRQA dataset
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "Done! cf #1117 and #1022" ]
1,604,937,139,000
1,607,096,682,000
1,607,096,681,000
MEMBER
null
null
## Adding a Dataset - **Name:** MRQA - **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. This dataset was collected as part of MRQA 2019's shared task - **Paper:** https://arxiv.org/abs/1910.09753 - **Data:** https://github.com/mrqa/MRQA-Shared-Task-2019 - **Motivation:** Out-of-domain generalization is becoming (has become) a de facto evaluation for NLU systems Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/817/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/817/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/816/comments
https://api.github.com/repos/huggingface/datasets/issues/816/events
https://github.com/huggingface/datasets/issues/816
739,102,686
MDU6SXNzdWU3MzkxMDI2ODY=
816
[Caching] Dill globalvars() output order is not deterministic and can cause cache issues.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "To show the issue:\r\n```\r\npython -c \"from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))\"\r\n```\r\ndoesn't always return the same ouput since `globs` is a dictionary with \"a\" and \"len\" as keys but sometimes not in the same order" ]
1,604,934,080,000
1,605,108,050,000
1,605,108,050,000
MEMBER
null
null
Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However, the order of the keys in this dict is not deterministic and can cause caching issues. To fix that, one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the globals keys before dumping a function.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/816/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/815/comments
https://api.github.com/repos/huggingface/datasets/issues/815/events
https://github.com/huggingface/datasets/issues/815
738,842,092
MDU6SXNzdWU3Mzg4NDIwOTI=
815
Is dataset iterative or not?
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "Hello !\r\nCould you give more details ?\r\n\r\nIf you mean iter through one dataset then yes, `Dataset` object does implement the `__iter__` method so you can use \r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\n\r\nIf you want to iter through several datasets you can first concatenate them\r\n```python\r\nfrom datasets import concatenate_datasets\r\n\r\nnew_dataset = concatenate_datasets([dataset1, dataset2])\r\n```\r\nLet me know if this helps !", "Hi Huggingface/Datasets team,\nI want to use the datasets inside Seq2SeqDataset here\nhttps://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py\nand there I need to return back each line from the datasets and I am not\nsure how to access each line and implement this?\nIt seems it also has get_item attribute? so I was not sure if this is\niterative dataset? or if this is non-iterable datasets?\nthanks.\n\n\n\nOn Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Hello !\n> Could you give more details ?\n>\n> If you mean iter through one dataset then yes, Dataset object does\n> implement the __iter__ method so you can use\n>\n> for example in dataset:\n> # do something\n>\n> If you want to iter through several datasets you can first concatenate them\n>\n> from datasets import concatenate_datasets\n> new_dataset = concatenate_datasets([dataset1, dataset2])\n>\n> Let me know if this helps !\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA>\n> .\n>\n", "could you tell me please if datasets also has __getitem__ any idea on how\nto integrate it with Seq2SeqDataset is appreciated thanks\n\nOn Mon, Nov 9, 2020 at 10:22 AM Rabeeh Karimi Mahabadi <rabeeh@google.com>\nwrote:\n\n> Hi Huggingface/Datasets team,\n> I want to use the datasets inside Seq2SeqDataset here\n> https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py\n> and there I need to return back each line from the datasets and I am not\n> sure how to access each line and implement this?\n> It seems it also has get_item attribute? so I was not sure if this is\n> iterative dataset? or if this is non-iterable datasets?\n> thanks.\n>\n>\n>\n> On Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com>\n> wrote:\n>\n>> Hello !\n>> Could you give more details ?\n>>\n>> If you mean iter through one dataset then yes, Dataset object does\n>> implement the __iter__ method so you can use\n>>\n>> for example in dataset:\n>> # do something\n>>\n>> If you want to iter through several datasets you can first concatenate\n>> them\n>>\n>> from datasets import concatenate_datasets\n>> new_dataset = concatenate_datasets([dataset1, dataset2])\n>>\n>> Let me know if this helps !\n>>\n>> —\n>> You are receiving this because you authored the thread.\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA>\n>> .\n>>\n>\n", "`datasets.Dataset` objects implement indeed `__getitem__`. It returns a dictionary with one field per column.\r\n\r\nWe've not added the integration of the datasets library for the seq2seq utilities yet. 
The current seq2seq utilities are based on text files.\r\n\r\nHowever as soon as you have a `datasets.Dataset` with columns \"tgt_texts\" (str), \"src_texts\" (str), and \"id\" (int) you should be able to implement your own Seq2SeqDataset class that wraps your dataset object. Does that make sense to you ?", "Hi\nI am sorry for asking it multiple times but I am not getting the dataloader\ntype, could you confirm if the dataset library returns back an iterable\ntype dataloader or a mapping type one where one has access to __getitem__,\nin the former case, one can iterate with __iter__, and how I can configure\nit to return the data back as the iterative type? I am dealing with\nlarge-scale datasets and I do not want to bring all in memory\nthanks for your help\nBest regards\nRabeeh\n\nOn Mon, Nov 9, 2020 at 11:17 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> datasets.Dataset objects implement indeed __getitem__. It returns a\n> dictionary with one field per column.\n>\n> We've not added the integration of the datasets library for the seq2seq\n> utilities yet. The current seq2seq utilities are based on text files.\n>\n> However as soon as you have a datasets.Dataset with columns \"tgt_texts\"\n> (str), \"src_texts\" (str), and \"id\" (int) you should be able to implement\n> your own Seq2SeqDataset class that wraps your dataset object. Does that\n> make sense ?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/815#issuecomment-723915556>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHHYOC22EM7F666BZSOTSO66R3ANCNFSM4TPB7OWA>\n> .\n>\n", "`datasets.Dataset` objects are both iterative and mapping types: it has both `__iter__` and `__getitem__`\r\nFor example you can do\r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\nor\r\n```python\r\nfor i in range(len(dataset)):\r\n example = dataset[i]\r\n # do something\r\n```\r\nWhen you do that, one and only one example is loaded into memory at a time.", "Hi there, \r\nHere is what I am trying, this is not working for me in map-style datasets, could you please tell me how to use datasets with being able to access ___getitem__ ? could you assist me please correcting this example? I need map-style datasets which is formed from concatenation of two datasets from your library. thanks \r\n\r\n\r\n```\r\nimport datasets\r\ndataset1 = load_dataset(\"squad\", split=\"train[:10]\")\r\ndataset1 = dataset1.map(lambda example: {\"src_texts\": \"question: {0} context: {1} \".format(\r\n example[\"question\"], example[\"context\"]),\r\n \"tgt_texts\": example[\"answers\"][\"text\"][0]}, remove_columns=dataset1.column_names)\r\ndataset2 = load_dataset(\"imdb\", split=\"train[:10]\")\r\ndataset2 = dataset2.map(lambda example: {\"src_texts\": \"imdb: \" + example[\"text\"],\r\n \"tgt_texts\": str(example[\"label\"])}, remove_columns=dataset2.column_names)\r\ntrain_dataset = datasets.concatenate_datasets([dataset1, dataset2])\r\ntrain_dataset.set_format(type='torch', columns=['src_texts', 'tgt_texts'])\r\ndataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32)\r\nfor id, batch in enumerate(dataloader):\r\n print(batch)\r\n\r\n```", "closed since I found this response on the issue https://github.com/huggingface/datasets/issues/469" ]
1,604,913,108,000
1,605,005,403,000
1,605,005,403,000
NONE
null
null
Hi, I want to use your library for large-scale training, but I am not sure whether this is implemented as iterative datasets or not. Could you provide me with an example of how I can use datasets as iterative datasets? Thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/815/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/814/comments
https://api.github.com/repos/huggingface/datasets/issues/814/events
https://github.com/huggingface/datasets/issues/814
738,500,443
MDU6SXNzdWU3Mzg1MDA0NDM=
814
Joining multiple datasets
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "found a solution here https://discuss.pytorch.org/t/train-simultaneously-on-two-datasets/649/35, closed for now, thanks " ]
1,604,852,370,000
1,604,864,328,000
1,604,864,328,000
NONE
null
null
Hi, I have multiple iterative datasets from your library with different sizes, and I want to join them in a way that each dataset is sampled equally, so smaller datasets are sampled more often and larger ones less. Could you tell me how to implement this in pytorch? Thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/814/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/813/comments
https://api.github.com/repos/huggingface/datasets/issues/813/events
https://github.com/huggingface/datasets/issues/813
738,489,852
MDU6SXNzdWU3Mzg0ODk4NTI=
813
How to implement DistributedSampler with datasets
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks. ", "Hey @rabeehkarimimahabadi I'm actually looking for the same feature. Did you manage to get somewhere?", "@rabeehkarimimahabadi need the same feature", "Hi! I think you can use the `accelerate` library for that, which implements distributed sampling." ]
1,604,849,231,000
1,664,974,463,000
1,664,974,463,000
NONE
null
null
Hi, I am using your datasets to define my dataloaders, and I am training finetune_trainer.py from the huggingface repo on them. I need a DistributedSampler to be able to train the models on TPUs and distribute the load across the TPU cores. Could you tell me how I can implement the distributed sampler when using datasets, given that the datasets are iterative? To give you more context, I have multiple datasets and I need to write a sampler for this case. Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/813/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/812
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/812/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/812/comments
https://api.github.com/repos/huggingface/datasets/issues/812/events
https://github.com/huggingface/datasets/issues/812
738,340,217
MDU6SXNzdWU3MzgzNDAyMTc=
812
Too much logging
{ "login": "dspoka", "id": 6183050, "node_id": "MDQ6VXNlcjYxODMwNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dspoka", "html_url": "https://github.com/dspoka", "followers_url": "https://api.github.com/users/dspoka/followers", "following_url": "https://api.github.com/users/dspoka/following{/other_user}", "gists_url": "https://api.github.com/users/dspoka/gists{/gist_id}", "starred_url": "https://api.github.com/users/dspoka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dspoka/subscriptions", "organizations_url": "https://api.github.com/users/dspoka/orgs", "repos_url": "https://api.github.com/users/dspoka/repos", "events_url": "https://api.github.com/users/dspoka/events{/privacy}", "received_events_url": "https://api.github.com/users/dspoka/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! Thanks for reporting :) \r\nI agree these one should be hidden when the logging level is warning, we'll fix that", "+1, the amount of logging is excessive.\r\n\r\nMost of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these messages were logged after I already called `datasets.logging.set_verbosity_error()`)\r\n\r\n```\r\nI1109 21:26:01.742688 139785006901056 filelock.py:318] Lock 139778216292192 released on /home/kitaev/.cache/huggingface/datasets/9ed4f2e133395826175a892c70611f68522c7bc61a35476e8b51a31afb76e4bf.e6f3e3f3e3875a07469d1cfd32e16e1d06b149616b11eef2d081c43d515b492d.py.lock\r\nI1109 21:26:01.747898 139785006901056 filelock.py:274] Lock 139778216290176 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748258 139785006901056 filelock.py:318] Lock 139778216290176 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748412 139785006901056 filelock.py:274] Lock 139778215853024 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748497 139785006901056 filelock.py:318] Lock 139778215853024 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:07:17.029001 140301730502464 filelock.py:274] Lock 140289479304360 acquired on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.029341 140301730502464 filelock.py:318] Lock 140289479304360 released on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.058964 140301730502464 filelock.py:274] Lock 140251889388120 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.060933 140301730502464 filelock.py:318] Lock 140251889388120 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.061067 140301730502464 filelock.py:274] Lock 140296072521488 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.069736 140301730502464 metric.py:400] Removing /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow\r\nI1109 21:07:17.069949 140301730502464 filelock.py:318] Lock 140296072521488 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\n```", "So how to solve this problem?", "In the latest version of the lib the logs about locks are at the DEBUG level so you won't see them by default.\r\nAlso `set_verbosity_warning` does take into account these logs now.\r\nCan you try to update the lib ?\r\n```\r\npip install --upgrade datasets\r\n```", "Thanks. For some reason I have to use the older version. 
Is that possible I can fix this by some surface-level trick?\r\n\r\nI'm still using 1.13 version datasets.", "On older versions you can use\r\n```python\r\nimport logging\r\n\r\nlogging.getLogger(\"filelock\").setLevel(logging.WARNING)\r\n```", "Whoa Thank you! It works!" ]
1,604,793,390,000
1,611,671,494,000
1,605,546,402,000
NONE
null
null
I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock [2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock using datasets version = 1.1.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/812/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/811
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/811/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/811/comments
https://api.github.com/repos/huggingface/datasets/issues/811/events
https://github.com/huggingface/datasets/issues/811
738,280,132
MDU6SXNzdWU3MzgyODAxMzI=
811
nlp viewer error
{ "login": "jc-hou", "id": 30210529, "node_id": "MDQ6VXNlcjMwMjEwNTI5", "avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jc-hou", "html_url": "https://github.com/jc-hou", "followers_url": "https://api.github.com/users/jc-hou/followers", "following_url": "https://api.github.com/users/jc-hou/following{/other_user}", "gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}", "starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions", "organizations_url": "https://api.github.com/users/jc-hou/orgs", "repos_url": "https://api.github.com/users/jc-hou/repos", "events_url": "https://api.github.com/users/jc-hou/events{/privacy}", "received_events_url": "https://api.github.com/users/jc-hou/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
[ "and also for 'blog_authorship_corpus'\r\nhttps://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus\r\n![image](https://user-images.githubusercontent.com/30210529/98557329-5c182800-22a4-11eb-9b01-5b910fb8fcd4.png)\r\n", "Is this the problem of my local computer or ??", "Related to:\r\n- #673" ]
1,604,768,938,000
1,644,922,304,000
1,644,852,260,000
NONE
null
null
Hello, when I select amazon_us_reviews in the nlp viewer, it shows an error. https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews ![image](https://user-images.githubusercontent.com/30210529/98447334-4aa81200-2124-11eb-9dca-82c3ab34ccc2.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/811/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/809
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/809/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/809/comments
https://api.github.com/repos/huggingface/datasets/issues/809/events
https://github.com/huggingface/datasets/issues/809
737,832,701
MDU6SXNzdWU3Mzc4MzI3MDE=
809
Add Google Taskmaster dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "Hey @yjernite. Was going to start working on this but found taskmaster 1,2 & 3 in the datasets library already so think this can be closed now?", "You are absolutely right :) \r\n\r\nClosed by https://github.com/huggingface/datasets/pull/1193 https://github.com/huggingface/datasets/pull/1197 https://github.com/huggingface/datasets/pull/1213" ]
1,604,675,441,000
1,618,924,166,000
1,618,924,166,000
MEMBER
null
null
## Adding a Dataset - **Name:** Taskmaster - **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations) - **Paper:** https://arxiv.org/abs/1909.05358 - **Data:** https://github.com/google-research-datasets/Taskmaster - **Motivation:** One of the few annotated datasets of this size for goal-oriented dialogue Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/809/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/807
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/807/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/807/comments
https://api.github.com/repos/huggingface/datasets/issues/807/events
https://github.com/huggingface/datasets/issues/807
737,509,954
MDU6SXNzdWU3Mzc1MDk5NTQ=
807
load_dataset for LOCAL CSV files reports CONNECTION ERROR
{ "login": "shexuan", "id": 25664170, "node_id": "MDQ6VXNlcjI1NjY0MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shexuan", "html_url": "https://github.com/shexuan", "followers_url": "https://api.github.com/users/shexuan/followers", "following_url": "https://api.github.com/users/shexuan/following{/other_user}", "gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shexuan/subscriptions", "organizations_url": "https://api.github.com/users/shexuan/orgs", "repos_url": "https://api.github.com/users/shexuan/repos", "events_url": "https://api.github.com/users/shexuan/events{/privacy}", "received_events_url": "https://api.github.com/users/shexuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi !\r\nThe url works on my side.\r\n\r\nIs the url working in your navigator ?\r\nAre you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?", "> Hi !\r\n> The url works on my side.\r\n> \r\n> Is the url working in your navigator ?\r\n> Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n\r\nI tried another server, it's working now. Thanks a lot.\r\n\r\nAnd I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?", "It seems my network frequently crashed so most time it cannot work.", "\r\n\r\n\r\n> > Hi !\r\n> > The url works on my side.\r\n> > Is the url working in your navigator ?\r\n> > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> \r\n> I tried another server, it's working now. Thanks a lot.\r\n> \r\n> And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n\r\nI download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`? \r\n\r\nThanks :D", "hello, how did you solve this problems?\r\n\r\n> > > Hi !\r\n> > > The url works on my side.\r\n> > > Is the url working in your navigator ?\r\n> > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > \r\n> > \r\n> > I tried another server, it's working now. Thanks a lot.\r\n> > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> \r\n> I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> \r\n> Thanks :D\r\n\r\nhello, I tried this. but it still failed. how do you fix this error?", "> hello, how did you solve this problems?\r\n> \r\n> > > > Hi !\r\n> > > > The url works on my side.\r\n> > > > Is the url working in your navigator ?\r\n> > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > \r\n> > > \r\n> > > I tried another server, it's working now. Thanks a lot.\r\n> > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > \r\n> > \r\n> > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> > Thanks :D\r\n> \r\n> hello, I tried this. but it still failed. how do you fix this error?\r\n\r\n你把那个脚本下载到你本地安装目录下,然后 `load_dataset(csv_script_path, data_fiels)`\r\n\r\n", "> > hello, how did you solve this problems?\r\n> > > > > Hi !\r\n> > > > > The url works on my side.\r\n> > > > > Is the url working in your navigator ?\r\n> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried another server, it's working now. 
Thanks a lot.\r\n> > > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > > \r\n> > > \r\n> > > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> > > Thanks :D\r\n> > \r\n> > \r\n> > hello, I tried this. but it still failed. how do you fix this error?\r\n> \r\n> 你把那个脚本下载到你本地安装目录下,然后 `load_dataset(csv_script_path, data_fiels)`\r\n\r\n好的好的!解决了,感谢感谢!!!", "> \r\n> \r\n> > hello, how did you solve this problems?\r\n> > > > > Hi !\r\n> > > > > The url works on my side.\r\n> > > > > Is the url working in your navigator ?\r\n> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried another server, it's working now. Thanks a lot.\r\n> > > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > > \r\n> > > \r\n> > > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> > > Thanks :D\r\n> > \r\n> > \r\n> > hello, I tried this. but it still failed. how do you fix this error?\r\n> \r\n> 你把那个脚本下载到你本地安装目录下,然后 `load_dataset(csv_script_path, data_fiels)`\r\n\r\n我照着做了,然后报错。\r\nValueError: unable to parse C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\\dataset_infos.json as a URL or as a local path\r\n\r\n`---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-5-fd2106a3f053> in <module>\r\n----> 1 dataset = load_dataset('C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets/csv.py', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)\r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 588 # Download/copy dataset processing script\r\n 589 module_path, hash = prepare_module(\r\n--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n 591 )\r\n 592 \r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)\r\n 296 local_dataset_infos_path = cached_path(\r\n 297 dataset_infos,\r\n--> 298 download_config=download_config,\r\n 299 )\r\n 300 except (FileNotFoundError, ConnectionError):\r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\utils\\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 316 else:\r\n 317 # Something unknown\r\n--> 318 raise ValueError(\"unable to parse {} as a URL or as a local path\".format(url_or_filename))\r\n 319 \r\n 320 if download_config.extract_compressed_file and output_path is not None:\r\n\r\nValueError: unable to parse 
C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\\dataset_infos.json as a URL or as a local path\r\n\r\n`", "I also experienced this issue this morning. Looks like something specific to windows.\r\nI'm working on a fix", "I opened a PR @wn1652400018", "> \r\n> \r\n> I opened a PR @wn1652400018\r\n\r\nThanks you!, It works very well." ]
1,604,644,384,000
1,610,328,627,000
1,605,331,834,000
NONE
null
null
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 
sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ```
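A possible stopgap while the script download is unreachable: since the file is already local, it can be loaded without any network call by going through pandas and `Dataset.from_pandas`. This is a minimal sketch, not the library's recommended fix, and it assumes the demo file `test.csv` created in the snippet above.

```python
import pandas as pd
from datasets import Dataset

# Read the local demo file directly; nothing is fetched from GitHub.
df = pd.read_csv("test.csv", header=None)
df.columns = [str(c) for c in df.columns]  # Arrow expects string column names

# Wrap the DataFrame in an Arrow-backed Dataset entirely offline.
dataset = Dataset.from_pandas(df)
print(dataset)
```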
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/807/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/806/comments
https://api.github.com/repos/huggingface/datasets/issues/806/events
https://github.com/huggingface/datasets/issues/806
737,215,430
MDU6SXNzdWU3MzcyMTU0MzA=
806
Quail dataset urls are out of date
{ "login": "ngdodd", "id": 4889636, "node_id": "MDQ6VXNlcjQ4ODk2MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ngdodd", "html_url": "https://github.com/ngdodd", "followers_url": "https://api.github.com/users/ngdodd/followers", "following_url": "https://api.github.com/users/ngdodd/following{/other_user}", "gists_url": "https://api.github.com/users/ngdodd/gists{/gist_id}", "starred_url": "https://api.github.com/users/ngdodd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ngdodd/subscriptions", "organizations_url": "https://api.github.com/users/ngdodd/orgs", "repos_url": "https://api.github.com/users/ngdodd/repos", "events_url": "https://api.github.com/users/ngdodd/events{/privacy}", "received_events_url": "https://api.github.com/users/ngdodd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! Thanks for reporting.\r\nWe should fix the urls and use quail 1.3.\r\nIf you want to contribute feel free to fix the urls and open a PR :) ", "Done! PR [https://github.com/huggingface/datasets/pull/820](https://github.com/huggingface/datasets/pull/820)\r\n\r\nUpdated links and also regenerated the metadata and dummy data for v1.3 in order to pass verifications as described here: [https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset](https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset). ", "Closing since #820 is merged.\r\nThanks again for fixing the urls :)" ]
1,604,605,219,000
1,605,016,971,000
1,605,016,971,000
CONTRIBUTOR
null
null
<h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering has changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore.
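A quick way to confirm which release paths are still live before patching the script; this is a diagnostic sketch only, and the v1.3 path below is a guess based on the commit linked above rather than a verified URL.

```python
import requests

BASE = "https://raw.githubusercontent.com/text-machine-lab/quail/master"
candidates = [
    "quail_v1.2/xml/ordered/quail_1.2_train.xml",  # old path referenced by the script
    "quail_v1.3/xml/ordered/quail_1.3_train.xml",  # assumed new path, to be verified
]
for path in candidates:
    status = requests.head(f"{BASE}/{path}", allow_redirects=True).status_code
    print(status, path)
```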
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/806/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/805
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/805/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/805/comments
https://api.github.com/repos/huggingface/datasets/issues/805/events
https://github.com/huggingface/datasets/issues/805
737,019,360
MDU6SXNzdWU3MzcwMTkzNjA=
805
On loading a metric from datasets, I get the following error
{ "login": "laibamehnaz", "id": 36405283, "node_id": "MDQ6VXNlcjM2NDA1Mjgz", "avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laibamehnaz", "html_url": "https://github.com/laibamehnaz", "followers_url": "https://api.github.com/users/laibamehnaz/followers", "following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}", "gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions", "organizations_url": "https://api.github.com/users/laibamehnaz/orgs", "repos_url": "https://api.github.com/users/laibamehnaz/repos", "events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}", "received_events_url": "https://api.github.com/users/laibamehnaz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object.\r\nCould you update pyarrow and try again ?\r\n```\r\npip install --upgrade pyarrow\r\n```" ]
1,604,589,278,000
1,644,852,779,000
1,644,852,779,000
NONE
null
null
`from datasets import load_metric` `metric = load_metric('bleurt')` Traceback: 210 class _ArrayXDExtensionType(pa.PyExtensionType): 211 212 ndims: int = None AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' Any help will be appreciated. Thank you.
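A small check, following the reply above, to confirm whether the installed pyarrow is recent enough; a sketch assuming the `> 0.17.1` requirement stated in the comment.

```python
import pyarrow as pa

print("pyarrow version:", pa.__version__)
# The extension-type machinery the metrics code relies on; missing on old pyarrow builds.
print("has PyExtensionType:", hasattr(pa, "PyExtensionType"))
# If this prints False, upgrading should resolve the error:
#   pip install --upgrade pyarrow
```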
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/805/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/805/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/804/comments
https://api.github.com/repos/huggingface/datasets/issues/804/events
https://github.com/huggingface/datasets/issues/804
736,858,507
MDU6SXNzdWU3MzY4NTg1MDc=
804
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @yjernite is this expected ?", "Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208)\r\n\r\nFor the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here:\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md", "Oh ok, I guess I read the paper too fast 😅, thank you for your answer!" ]
1,604,576,281,000
1,604,931,299,000
1,604,931,298,000
CONTRIBUTOR
null
null
# The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ? # How to reproduce ```py from datasets import load_dataset kilt_tasks = load_dataset("kilt_tasks") trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext') # both in "kilt_tasks" In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']]) Out[18]: False # and "trivia_qa" In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']]) Out[13]: True # appears to be fine on the train and validation sets. In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']]) Out[14]: False In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']]) Out[15]: False In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']]) Out[16]: True In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']]) Out[17]: True ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/804/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/801
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/801/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/801/comments
https://api.github.com/repos/huggingface/datasets/issues/801/events
https://github.com/huggingface/datasets/issues/801
735,790,876
MDU6SXNzdWU3MzU3OTA4NzY=
801
How to join two datasets?
{ "login": "shangw-nvidia", "id": 66387198, "node_id": "MDQ6VXNlcjY2Mzg3MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shangw-nvidia", "html_url": "https://github.com/shangw-nvidia", "followers_url": "https://api.github.com/users/shangw-nvidia/followers", "following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}", "gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}", "starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions", "organizations_url": "https://api.github.com/users/shangw-nvidia/orgs", "repos_url": "https://api.github.com/users/shangw-nvidia/repos", "events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}", "received_events_url": "https://api.github.com/users/shangw-nvidia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi this is also my question. thanks ", "Hi ! Currently the only way to add new fields to a dataset is by using `.map` and picking items from the other dataset\r\n", "Closing this one. Feel free to re-open if you have other questions about this issue.\r\n\r\nAlso linking another discussion about joining datasets: #853 " ]
1,604,461,991,000
1,608,732,178,000
1,608,732,178,000
NONE
null
null
Hi, I'm wondering whether it's possible to join two (preprocessed) datasets that have the same number of rows but different labels. I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a sentence pair using `.map()` where the second sentence is **not** the sentence immediately following the first (i.e., it comes from a different article). Thanks!
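A minimal sketch of the `.map`-based approach the maintainers suggest below, assuming two equally sized in-memory datasets; the column and variable names here are made up for illustration.

```python
from datasets import Dataset

left = Dataset.from_dict({"sentence_a": ["first A", "second A", "third A"]})
right = Dataset.from_dict({"sentence_b": ["first B", "second B", "third B"]})

# Pick the matching row from `right` by index and attach it as a new column of `left`.
# Any index mapping (e.g. a shuffled permutation) can replace `idx` to pair sentences
# coming from different articles.
paired = left.map(
    lambda example, idx: {"sentence_b": right[idx]["sentence_b"]},
    with_indices=True,
)
print(paired[0])  # {'sentence_a': 'first A', 'sentence_b': 'first B'}
```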
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/801/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/798
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/798/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/798/comments
https://api.github.com/repos/huggingface/datasets/issues/798/events
https://github.com/huggingface/datasets/issues/798
735,518,805
MDU6SXNzdWU3MzU1MTg4MDU=
798
Cannot load TREC dataset: ConnectionError
{ "login": "kaletap", "id": 25740957, "node_id": "MDQ6VXNlcjI1NzQwOTU3", "avatar_url": "https://avatars.githubusercontent.com/u/25740957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kaletap", "html_url": "https://github.com/kaletap", "followers_url": "https://api.github.com/users/kaletap/followers", "following_url": "https://api.github.com/users/kaletap/following{/other_user}", "gists_url": "https://api.github.com/users/kaletap/gists{/gist_id}", "starred_url": "https://api.github.com/users/kaletap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kaletap/subscriptions", "organizations_url": "https://api.github.com/users/kaletap/orgs", "repos_url": "https://api.github.com/users/kaletap/repos", "events_url": "https://api.github.com/users/kaletap/events{/privacy}", "received_events_url": "https://api.github.com/users/kaletap/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
[ "Hi ! Indeed there's an issue with those links.\r\nWe should probably use the target urls of the redirections instead", "Hi, the same issue here, could you tell me how to download it through datasets? thanks ", "Same issue. ", "Actually it's already fixed on the master branch since #740 \r\nI'll do the 1.1.3 release soon", "Hi\nthanks, but I did tried to install from the pip install git+... and it does\nnot work for me,. thanks for the help. I have the same issue with wmt16,\n\"ro-en\"\nthanks.\nBest\nRabeeh\n\nOn Mon, Nov 16, 2020 at 10:29 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Actually it's already fixed on the master branch since #740\n> <https://github.com/huggingface/datasets/pull/740>\n> I'll do the 1.1.3 release soon\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/798#issuecomment-727854736>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCEUBJKPOCLABXCKMPDSQDWH3ANCNFSM4TJBUKSA>\n> .\n>\n", "I just tested on google colab using\r\n```python\r\n!pip install git+https://github.com/huggingface/datasets.git\r\nfrom datasets import load_dataset\r\nload_dataset(\"trec\")\r\n```\r\nand it works.\r\nCan you detail how you got the issue even when using the latest version on master ?\r\n\r\nAlso about wmt we'll look into it, thanks for reporting !", "I think the new url with .edu is also broken:\r\n```\r\nConnectionError: Couldn't reach https://cogcomp.seas.upenn.edu/Data/QA/QC/train_5500.label\r\n```\r\nCant download the dataset anymore.", "Hi ! The URL seems to work fine on my side, can you try again ?", "Forgot to update, i wrote an email to the webmaster of seas.upenn.edu because i couldnt reach the url on any machine. This was the answer:\r\n```\r\nThank you for your report. The server was offline for maintenance and is now available again.\r\n```\r\nGuess all back to normal now 🙂 " ]
1,604,425,522,000
1,644,852,862,000
1,644,852,862,000
NONE
null
null
## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here.
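A short diagnostic sketch for seeing where the old URL now redirects, consistent with the 302/TooManyRedirects behaviour described above; purely for debugging, not a fix.

```python
import requests

url = "http://cogcomp.org/Data/QA/QC/train_5500.label"
response = requests.head(url, allow_redirects=False)
# A 3xx status plus a Location header shows the dump has moved rather than disappeared.
print(response.status_code, response.headers.get("Location"))
```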
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/798/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/798/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/797
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/797/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/797/comments
https://api.github.com/repos/huggingface/datasets/issues/797/events
https://github.com/huggingface/datasets/issues/797
735,420,332
MDU6SXNzdWU3MzU0MjAzMzI=
797
Token classification labels are strings and we don't have the list of labels
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067401494, "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion", "name": "Dataset discussion", "color": "72f99f", "default": false, "description": "Discussions on the datasets" } ]
closed
false
null
[]
[ "Indeed. Pinging @stefan-it here if he want to give an expert opinion :)", "Related is https://github.com/huggingface/datasets/pull/636", "Should definitely be a ClassLabel 👍 ", "Already done." ]
1,604,417,610,000
1,644,853,314,000
1,644,853,313,000
MEMBER
null
null
Not sure if this is an issue we want to fix or not; putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the like are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel`, or some type that gives easy access to the underlying labels. The main problem for preprocessing those datasets is that the list of possible labels is not stored inside the `Dataset` object, which makes converting the labels to IDs quite difficult (you either have to know the list of labels in advance or run a full pass through the dataset to collect them, the `unique` method being useless with the type `Sequence[str]`).
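A rough sketch of what the requested typing could look like on a toy split, built with one full pass to collect the label inventory (exactly the workaround the issue complains about); the column names and the use of `map(features=...)` are illustrative assumptions, not the library's current schema for these datasets.

```python
from datasets import ClassLabel, Dataset, Features, Sequence, Value

# Toy token-classification data with string tags, standing in for a real NER split.
ds = Dataset.from_dict({
    "tokens": [["John", "lives", "in", "Paris"]],
    "ner_tags": [["B-PER", "O", "O", "B-LOC"]],
})

# One pass over the data to recover the label list, since `unique` doesn't help here.
names = sorted({tag for tags in ds["ner_tags"] for tag in tags})

features = Features({
    "tokens": Sequence(Value("string")),
    "ner_tags": Sequence(ClassLabel(names=names)),
})

# Encode the string tags as ids and attach the ClassLabel feature in the same step.
ds = ds.map(
    lambda ex: {"ner_tags": [names.index(tag) for tag in ex["ner_tags"]]},
    features=features,
)
print(ds.features["ner_tags"].feature.names)
```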
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/797/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/795
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/795/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/795/comments
https://api.github.com/repos/huggingface/datasets/issues/795/events
https://github.com/huggingface/datasets/issues/795
735,198,265
MDU6SXNzdWU3MzUxOTgyNjU=
795
Descriptions of raw and processed versions of wikitext are inverted
{ "login": "fraboniface", "id": 16835358, "node_id": "MDQ6VXNlcjE2ODM1MzU4", "avatar_url": "https://avatars.githubusercontent.com/u/16835358?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fraboniface", "html_url": "https://github.com/fraboniface", "followers_url": "https://api.github.com/users/fraboniface/followers", "following_url": "https://api.github.com/users/fraboniface/following{/other_user}", "gists_url": "https://api.github.com/users/fraboniface/gists{/gist_id}", "starred_url": "https://api.github.com/users/fraboniface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fraboniface/subscriptions", "organizations_url": "https://api.github.com/users/fraboniface/orgs", "repos_url": "https://api.github.com/users/fraboniface/repos", "events_url": "https://api.github.com/users/fraboniface/events{/privacy}", "received_events_url": "https://api.github.com/users/fraboniface/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
[ "Yes indeed ! Thanks for reporting", "Fixed by:\r\n- #3241" ]
1,604,399,091,000
1,644,853,581,000
1,644,853,581,000
NONE
null
null
Nothing of importance, but it looks like the descriptions of wikitext-n-v1 and wikitext-n-raw-v1 are inverted for both n=2 and n=103. I just verified by loading them and the `<unk>` tokens are present in the non-raw versions, which confirms that it's a mere inversion of the descriptions and not of the datasets themselves. Also it would be nice if those descriptions appeared in the dataset explorer. https://github.com/huggingface/datasets/blob/87bd0864845ea0a1dd7167918dc5f341bf807bd3/datasets/wikitext/wikitext.py#L52
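The check described above can be reproduced in a few lines; a sketch that assumes the standard `wikitext-2-v1` / `wikitext-2-raw-v1` config names and only inspects a slice of the training split.

```python
from datasets import load_dataset

tokenized = load_dataset("wikitext", "wikitext-2-v1", split="train")
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# The non-raw (tokenized) version replaces rare words with <unk>; the raw one should not.
print(any("<unk>" in line for line in tokenized["text"][:1000]))
print(any("<unk>" in line for line in raw["text"][:1000]))
```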
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/795/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/795/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/794
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/794/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/794/comments
https://api.github.com/repos/huggingface/datasets/issues/794/events
https://github.com/huggingface/datasets/issues/794
735,158,725
MDU6SXNzdWU3MzUxNTg3MjU=
794
self.options cannot be converted to a Python object for pickling
{ "login": "hzqjyyx", "id": 9635713, "node_id": "MDQ6VXNlcjk2MzU3MTM=", "avatar_url": "https://avatars.githubusercontent.com/u/9635713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hzqjyyx", "html_url": "https://github.com/hzqjyyx", "followers_url": "https://api.github.com/users/hzqjyyx/followers", "following_url": "https://api.github.com/users/hzqjyyx/following{/other_user}", "gists_url": "https://api.github.com/users/hzqjyyx/gists{/gist_id}", "starred_url": "https://api.github.com/users/hzqjyyx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hzqjyyx/subscriptions", "organizations_url": "https://api.github.com/users/hzqjyyx/orgs", "repos_url": "https://api.github.com/users/hzqjyyx/repos", "events_url": "https://api.github.com/users/hzqjyyx/events{/privacy}", "received_events_url": "https://api.github.com/users/hzqjyyx/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
[ "Hi ! Thanks for reporting that's a bug on master indeed.\r\nWe'll fix that soon" ]
1,604,395,654,000
1,605,807,338,000
1,605,807,338,000
NONE
null
null
Hi, Currently I am trying to load csv file with customized read_options. And the latest master seems broken if we pass the ReadOptions object. Here is a code snippet ```python from datasets import load_dataset from pyarrow.csv import ReadOptions load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024)) ``` error is `self.options cannot be converted to a Python object for pickling` Would you mind to take a look? Thanks! ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-28-ab83fec2ded4> in <module> ----> 1 load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024)) /tmp/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 602 hash=hash, 603 features=features, --> 604 **config_kwargs, 605 ) 606 /tmp/datasets/src/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs) 162 name, 163 custom_features=features, --> 164 **config_kwargs, 165 ) 166 /tmp/datasets/src/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs) 281 ) 282 else: --> 283 suffix = Hasher.hash(config_kwargs_to_add_to_suffix) 284 285 if builder_config.data_files is not None: /tmp/datasets/src/datasets/fingerprint.py in hash(cls, value) 51 return cls.dispatch[type(value)](cls, value) 52 else: ---> 53 return cls.hash_default(value) 54 55 def update(self, value): /tmp/datasets/src/datasets/fingerprint.py in hash_default(cls, value) 44 @classmethod 45 def hash_default(cls, value): ---> 46 return cls.hash_bytes(dumps(value)) 47 48 @classmethod /tmp/datasets/src/datasets/utils/py_utils.py in dumps(obj) 365 file = StringIO() 366 with _no_cache_fields(obj): --> 367 dump(obj, file) 368 return file.getvalue() 369 /tmp/datasets/src/datasets/utils/py_utils.py in dump(obj, file) 337 def dump(obj, file): 338 """pickle an object to a file""" --> 339 Pickler(file, recurse=True).dump(obj) 340 return 341 ~/.local/lib/python3.6/site-packages/dill/_dill.py in dump(self, obj) 444 raise PicklingError(msg) 445 else: --> 446 StockPickler.dump(self, obj) 447 stack.clear() # clear record of 'recursion-sensitive' pickled objects 448 return /usr/lib/python3.6/pickle.py in dump(self, obj) 407 if self.proto >= 4: 408 self.framer.start_framing() --> 409 self.save(obj) 410 self.write(STOP) 411 self.framer.end_framing() /usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id) 474 f = self.dispatch.get(t) 475 if f is not None: --> 476 f(self, obj) # Call unbound method with explicit self 477 return 478 ~/.local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 931 # we only care about session the first pass thru 932 pickler._session = False --> 933 StockPickler.save_dict(pickler, obj) 934 log.info("# D2") 935 return /usr/lib/python3.6/pickle.py in save_dict(self, obj) 819 820 self.memoize(obj) --> 821 self._batch_setitems(obj.items()) 822 823 dispatch[dict] = save_dict /usr/lib/python3.6/pickle.py in _batch_setitems(self, items) 850 k, v = tmp[0] 851 save(k) --> 852 save(v) 853 write(SETITEM) 854 # else tmp is empty, and we're done /usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id) 494 reduce = getattr(obj, "__reduce_ex__", None) 495 if reduce is not None: --> 496 rv = reduce(self.proto) 497 else: 498 reduce = getattr(obj, 
"__reduce__", None) ~/.local/lib/python3.6/site-packages/pyarrow/_csv.cpython-36m-x86_64-linux-gnu.so in pyarrow._csv.ReadOptions.__reduce_cython__() TypeError: self.options cannot be converted to a Python object for pickling ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/794/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/792
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/792/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/792/comments
https://api.github.com/repos/huggingface/datasets/issues/792/events
https://github.com/huggingface/datasets/issues/792
734,693,652
MDU6SXNzdWU3MzQ2OTM2NTI=
792
KILT dataset: empty string in triviaqa input field
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md\r\n(Not very clear in https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md which is dead, closing the issue though :))" ]
1,604,338,434,000
1,604,572,499,000
1,604,572,499,000
CONTRIBUTOR
null
null
# What happened Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark) # Versions KILT version is `1.0.0` `datasets` version is `1.1.2` [more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1) # How to reproduce ```py In [1]: from datasets import load_dataset In [4]: dataset = load_dataset("kilt_tasks") # everything works fine, removed output for a better readibility Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data. # empty string in triviaqa input field In [36]: dataset['train_triviaqa'][0] Out[36]: {'id': 'dpql_5197', 'input': '', 'meta': {'left_context': '', 'mention': '', 'obj_surface': {'text': []}, 'partial_evidence': {'end_paragraph_id': [], 'meta': [], 'section': [], 'start_paragraph_id': [], 'title': [], 'wikipedia_id': []}, 'right_context': '', 'sub_surface': {'text': []}, 'subj_aliases': {'text': []}, 'template_questions': {'text': []}}, 'output': {'answer': ['five £', '5 £', '£5', 'five £'], 'meta': [], 'provenance': [{'bleu_score': [1.0], 'end_character': [248], 'end_paragraph_id': [30], 'meta': [], 'section': ['Section::::Question of legal tender.\n'], 'start_character': [246], 'start_paragraph_id': [30], 'title': ['Banknotes of the pound sterling'], 'wikipedia_id': ['270680']}]}} In [35]: dataset['train_triviaqa']['input'][:10] Out[35]: ['', '', '', '', '', '', '', '', '', ''] # same with test set In [37]: dataset['test_triviaqa']['input'][:10] Out[37]: ['', '', '', '', '', '', '', '', '', ''] # works fine with natural questions In [34]: dataset['train_nq']['input'][:10] Out[34]: ['how i.met your mother who is the mother', 'who had the most wins in the nfl', 'who played mantis guardians of the galaxy 2', 'what channel is the premier league on in france', "god's not dead a light in the darkness release date", 'who is the current president of un general assembly', 'when do the eclipse supposed to take place', 'what is the name of the sea surrounding dubai', 'who holds the nba record for most points in a career', 'when did the new maze runner movie come out'] ``` Stay safe :)
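A rough sketch of the linking step from the README mentioned in the comment below, mirroring the snippet above; it assumes the KILT `id` field lines up with trivia_qa's `question_id`, so treat it as an outline rather than verified code.

```python
from datasets import load_dataset

kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset("trivia_qa", "unfiltered.nocontext")

# Build an index from trivia_qa question ids to row positions.
triviaqa_index = {q_id: i for i, q_id in enumerate(trivia_qa["train"]["question_id"])}

# Keep only KILT rows we can link, then copy the question text into the empty `input` field.
kilt_train = kilt_tasks["train_triviaqa"].filter(lambda ex: ex["id"] in triviaqa_index)
kilt_train = kilt_train.map(
    lambda ex: {"input": trivia_qa["train"][triviaqa_index[ex["id"]]]["question"]}
)
print(kilt_train["input"][:3])
```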
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/792/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/790
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/790/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/790/comments
https://api.github.com/repos/huggingface/datasets/issues/790/events
https://github.com/huggingface/datasets/issues/790
734,470,197
MDU6SXNzdWU3MzQ0NzAxOTc=
790
Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist
{ "login": "shawwn", "id": 59632, "node_id": "MDQ6VXNlcjU5NjMy", "avatar_url": "https://avatars.githubusercontent.com/u/59632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shawwn", "html_url": "https://github.com/shawwn", "followers_url": "https://api.github.com/users/shawwn/followers", "following_url": "https://api.github.com/users/shawwn/following{/other_user}", "gists_url": "https://api.github.com/users/shawwn/gists{/gist_id}", "starred_url": "https://api.github.com/users/shawwn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shawwn/subscriptions", "organizations_url": "https://api.github.com/users/shawwn/orgs", "repos_url": "https://api.github.com/users/shawwn/repos", "events_url": "https://api.github.com/users/shawwn/events{/privacy}", "received_events_url": "https://api.github.com/users/shawwn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macos. It should work now", "Closing this one.\r\nFeel free to re-open if you still have issues" ]
1,604,320,595,000
1,605,017,102,000
1,605,017,102,000
NONE
null
null
I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error. ```sh git clone https://github.com/huggingface/datasets cd datasets virtualenv venv -p python3 --system-site-packages source venv/bin/activate pip install -e ".[dev]" ``` ![image](https://user-images.githubusercontent.com/59632/97868518-72871800-1cd5-11eb-9cd2-37d4e9d20b39.png) ![image](https://user-images.githubusercontent.com/59632/97868592-977b8b00-1cd5-11eb-8f3c-0c409616149c.png) Python 3.7.7
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/790/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/788
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/788/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/788/comments
https://api.github.com/repos/huggingface/datasets/issues/788/events
https://github.com/huggingface/datasets/issues/788
734,136,124
MDU6SXNzdWU3MzQxMzYxMjQ=
788
failed to reuse cache
{ "login": "WangHexie", "id": 31768052, "node_id": "MDQ6VXNlcjMxNzY4MDUy", "avatar_url": "https://avatars.githubusercontent.com/u/31768052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WangHexie", "html_url": "https://github.com/WangHexie", "followers_url": "https://api.github.com/users/WangHexie/followers", "following_url": "https://api.github.com/users/WangHexie/following{/other_user}", "gists_url": "https://api.github.com/users/WangHexie/gists{/gist_id}", "starred_url": "https://api.github.com/users/WangHexie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WangHexie/subscriptions", "organizations_url": "https://api.github.com/users/WangHexie/orgs", "repos_url": "https://api.github.com/users/WangHexie/repos", "events_url": "https://api.github.com/users/WangHexie/events{/privacy}", "received_events_url": "https://api.github.com/users/WangHexie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604,284,956,000
1,604,319,975,000
1,604,319,975,000
NONE
null
null
I wrapped `load_dataset` in a method of a class and cached the data in a directory. But when I import the class and call the method, the data still has to be downloaded again. The message logged to the terminal (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to ******) shows the correct path to the cache directory, but the files are downloaded again anyway.
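A possible way to pin every call to the same cache, sketched with hypothetical names; passing `cache_dir` explicitly (or exporting `HF_DATASETS_CACHE`) makes the class-level wrapper and any outside caller resolve to the same directory, which is usually what prevents the re-download.

```python
from datasets import load_dataset

CACHE_DIR = "/path/to/shared/cache"  # hypothetical shared location


class CnnDmLoader:
    """Illustrative wrapper; not the reporter's actual class."""

    def load(self, split="train"):
        # Every call with the same cache_dir reuses the prepared Arrow files
        # instead of downloading and preparing the dataset again.
        return load_dataset("cnn_dailymail", "3.0.0", split=split, cache_dir=CACHE_DIR)
```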
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/788/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/786
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/786/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/786/comments
https://api.github.com/repos/huggingface/datasets/issues/786/events
https://github.com/huggingface/datasets/issues/786
733,761,717
MDU6SXNzdWU3MzM3NjE3MTc=
786
feat(dataset): multiprocessing _generate_examples
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I agree that would be cool :)\r\nRight now the only distributed dataset builder is based on Apache Beam so you can use distributed processing frameworks like Dataflow, Spark, Flink etc. to build your dataset but it's not really well suited for single-worker parallel processing afaik", "`_generate_examples` can now be run in parallel thanks to https://github.com/huggingface/datasets/pull/5107. You can find more info [here](https://huggingface.co/docs/datasets/dataset_script#sharding)." ]
1,604,163,136,000
1,673,866,753,000
1,673,866,753,000
CONTRIBUTOR
null
null
Forking this out of #741; this issue is only about multiprocessing. I'd love it if there were a dataset configuration parameter `workers`, where when it is `1` everything behaves as it does right now, and when it's `>1` maybe `_generate_examples` could also get the `pool` and return an iterable using the pool. In my use case, instead of: ```python for datum in data: yield self.load_datum(datum) ``` I would do: ```python return pool.map(self.load_datum, data) ``` The dataset in question, as an example, has **only** 7000 rows, and loading each row takes 10 seconds on average, so it takes almost 20 hours to load the entire dataset. If this were a larger dataset (and many such datasets exist), it would take multiple days to complete. Using multiprocessing with, for example, 40 cores could speed it up dramatically; for this dataset, hopefully to a full load in under an hour.
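A sketch of what the requested behaviour could look like outside the library, with a placeholder for the expensive per-row work; this is not an existing `datasets` API, just an illustration of the `workers` idea using a standard multiprocessing pool.

```python
from multiprocessing import Pool


def load_datum(datum):
    # Placeholder for the ~10 s per-row work described above.
    return {"id": datum}


def generate_examples(data, workers=1):
    if workers <= 1:
        for datum in data:
            yield load_datum(datum)
    else:
        # Fan the rows out over a pool; imap keeps memory bounded and preserves order.
        with Pool(workers) as pool:
            yield from pool.imap(load_datum, data)


if __name__ == "__main__":
    print(list(generate_examples(range(5), workers=2)))
```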
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/786/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/784
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/784/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/784/comments
https://api.github.com/repos/huggingface/datasets/issues/784/events
https://github.com/huggingface/datasets/issues/784
733,700,463
MDU6SXNzdWU3MzM3MDA0NjM=
784
Issue with downloading Wikipedia data for low resource language
{ "login": "SamuelCahyawijaya", "id": 2826602, "node_id": "MDQ6VXNlcjI4MjY2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SamuelCahyawijaya", "html_url": "https://github.com/SamuelCahyawijaya", "followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers", "following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}", "gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}", "starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions", "organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs", "repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos", "events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}", "received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, maybe you could ty to use another date for the wikipedia dump (see the available [dates](https://dumps.wikimedia.org/jvwiki) here for `jv`) ?", "@lhoestq\r\n\r\nI've tried `load_dataset('wikipedia', '20200501.zh', beam_runner='DirectRunner')` and got the same `FileNotFoundError` as @SamuelCahyawijaya.\r\n\r\nAlso, using another date (e.g. `load_dataset('wikipedia', '20201120.zh', beam_runner='DirectRunner')`) will give the following error message.\r\n\r\n```\r\nValueError: BuilderConfig 20201120.zh not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', 
'20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']\r\n```\r\n\r\nI am pretty sure that `https://dumps.wikimedia.org/enwiki/20201120/dumpstatus.json` exists.", "Thanks for reporting I created a PR to make the custom config work (language=\"zh\", date=\"20201120\").", "@lhoestq Thanks!", "For posterity, here's how I got the data I needed: I needed Bengali, so I had to check which dumps are available here: https://dumps.wikimedia.org/bnwiki/ , then I ran:\r\n```\r\nload_dataset(\"wikipedia\", language=\"bn\", date=\"20211101\",\r\n beam_runner=\"DirectRunner\")\r\n```" ]
1,604,144,400,000
1,644,429,016,000
1,606,318,933,000
NONE
null
null
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these two languages: Javanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json ``` Sundanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json ``` I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid. Any suggestions on how to handle this issue? Thanks!
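After the fix referenced in the comments below, a custom dump can be requested by passing `language` and `date` explicitly; a sketch in which the date is a placeholder and must be replaced with one actually listed on the corresponding dumps page.

```python
from datasets import load_dataset

# Pick a date that exists at https://dumps.wikimedia.org/jvwiki/ before running this.
jv_wiki = load_dataset(
    "wikipedia", language="jv", date="20201120", beam_runner="DirectRunner"
)
```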
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/784/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/784/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/778
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/778/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/778/comments
https://api.github.com/repos/huggingface/datasets/issues/778/events
https://github.com/huggingface/datasets/issues/778
732,449,652
MDU6SXNzdWU3MzI0NDk2NTI=
778
Unexpected behavior when loading cached csv file?
{ "login": "dcfidalgo", "id": 15979778, "node_id": "MDQ6VXNlcjE1OTc5Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dcfidalgo", "html_url": "https://github.com/dcfidalgo", "followers_url": "https://api.github.com/users/dcfidalgo/followers", "following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}", "gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}", "starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions", "organizations_url": "https://api.github.com/users/dcfidalgo/orgs", "repos_url": "https://api.github.com/users/dcfidalgo/repos", "events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}", "received_events_url": "https://api.github.com/users/dcfidalgo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! Thanks for reporting.\r\nThe same issue was reported in #730 (but with the encodings instead of the delimiter). It was fixed by #770 .\r\nThe fix will be available in the next release :)", "Thanks for the prompt reply and terribly sorry for the spam! \r\nLooking forward to the new release! " ]
1,603,987,570,000
1,604,006,487,000
1,604,006,487,000
CONTRIBUTOR
null
null
I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again, specifying the right delimiter, it had no effect since the cached dataset was used. I am not sure if this is unwanted behavior, since I can always specify `download_mode="force_redownload"`. But I think it would be nice if information about which `delimiter` or `column_names` were used influenced the identifier of the cached dataset. Small snippet to reproduce the behavior: ```python import datasets with open("dummy_data.csv", "w") as file: file.write("test,this;text\n") print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names) # ["test", "this;text"] print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names) # still ["test", "this;text"] ``` By the way, thanks a lot for this amazing library! :)
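Until the fix lands, the workaround mentioned above is to force a rebuild so the new `delimiter` actually takes effect; a sketch reusing the dummy file from the snippet.

```python
import datasets

ds = datasets.load_dataset(
    "csv",
    data_files="dummy_data.csv",
    split="train",
    delimiter=";",
    download_mode="force_redownload",  # bypass the stale cached build
)
print(ds.column_names)  # now reflects the ';' delimiter
```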
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/778/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/773/comments
https://api.github.com/repos/huggingface/datasets/issues/773/events
https://github.com/huggingface/datasets/issues/773
731,684,153
MDU6SXNzdWU3MzE2ODQxNTM=
773
Adding CC-100: Monolingual Datasets from Web Crawl Data
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[ { "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false } ]
[ "cc @aconneau ;) ", "These dataset files are no longer available. https://data.statmt.org/cc-100/ files provided in this link are no longer available. Can anybody fix that issue?\r\n@abhishekkrthakur @yjernite ", "Hi ! Can you open an issue to report this problem ? This will help keep track of the fix :)", "Ok" ]
1,603,909,241,000
1,643,203,374,000
1,607,941,207,000
MEMBER
null
null
## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of the common crawl. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/773/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/773/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/771
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/771/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/771/comments
https://api.github.com/repos/huggingface/datasets/issues/771/events
https://github.com/huggingface/datasets/issues/771
731,482,213
MDU6SXNzdWU3MzE0ODIyMTM=
771
Using `Dataset.map` with `n_proc>1` print multiple progress bars
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes it allows to monitor the speed of each process. Currently each process takes care of one shard of the dataset.\r\n\r\nAt one point we can consider using streaming batches to a pool of processes instead of sharding the dataset in `num_proc` parts. At that point it will be easy to use only one progress bar", "Hi @lhoestq, I am facing a similar issue, it is annoying when lots of progress bars are printed. Is there a way to turn off this behavior? ", "You can disable the progress bars with\r\n```python\r\nimport datasets\r\n\r\ndatasets.disable_progress_bar()\r\n```" ]
1,603,894,407,000
1,676,319,399,000
1,676,319,399,000
MEMBER
null
null
When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/771/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/769
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/769/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/769/comments
https://api.github.com/repos/huggingface/datasets/issues/769/events
https://github.com/huggingface/datasets/issues/769
731,257,104
MDU6SXNzdWU3MzEyNTcxMDQ=
769
How to choose proper download_mode in function load_dataset?
{ "login": "jzq2000", "id": 48550398, "node_id": "MDQ6VXNlcjQ4NTUwMzk4", "avatar_url": "https://avatars.githubusercontent.com/u/48550398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jzq2000", "html_url": "https://github.com/jzq2000", "followers_url": "https://api.github.com/users/jzq2000/followers", "following_url": "https://api.github.com/users/jzq2000/following{/other_user}", "gists_url": "https://api.github.com/users/jzq2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/jzq2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jzq2000/subscriptions", "organizations_url": "https://api.github.com/users/jzq2000/orgs", "repos_url": "https://api.github.com/users/jzq2000/repos", "events_url": "https://api.github.com/users/jzq2000/events{/privacy}", "received_events_url": "https://api.github.com/users/jzq2000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD` should work.\r\nThis makes me think we we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing", "Can we just use `features=...` in `load_dataset` for this @lhoestq?", "Indeed you should use `features` in this case. \r\n```python\r\nfeatures = Features({'text': Value('string'), 'label': Value('float32')})\r\ndataset = load_dataset('csv', data_files=['sst_test.csv'], features=features)\r\n```\r\nNote that because of an issue with the caching when you change the features (see #750 ) you still need to specify the `FORCE_REDOWNLOAD ` flag. I'm working on a fix for this one", "https://github.com/huggingface/datasets/issues/769#issuecomment-717837832\r\n> This makes me think we we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing\r\n\r\n@lhoestq do you still think we should rename it?\r\n", "It's no big deal, but since it can be confusing to users I think it's worth renaming it, and deprecate `GenerateMode` until `datasets` 2.0 at least. IMO it's confusing to have `download_mode=GenerateMode.something`" ]
1,603,876,579,000
1,645,532,572,000
1,645,532,572,000
NONE
null
null
Hi, I am a beginner with datasets and I am trying to use it to load my csv file. My csv file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4 "Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5 ``` First I try to use this command to load my csv file. ``` python dataset=load_dataset('csv', data_files=['sst_test.csv']) ``` It seems good, but when I try to overwrite the convert_options to convert the 'label' column from int64 to float32 like this: ``` python import pyarrow as pa from pyarrow import csv read_options = csv.ReadOptions(block_size=1024*1024) parse_options = csv.ParseOptions() convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()}) dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options, parse_options=parse_options, convert_options=convert_options) ``` the result stays the same: ```shell Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210) ``` I think this issue is caused by the parameter "download_mode" defaulting to REUSE_DATASET_IF_EXISTS, because after I delete the cache_dir it works as expected. Is this a bug? How should I choose the proper download_mode to avoid this issue?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/769/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/769/timeline
completed
false
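Expanding the suggestion in the comments of #769 above into a runnable sketch: pass the target schema via the `features` argument and, because of the caching issue tracked in #750 at the time, force a re-download so the previously cached arrow file is not reused. `sst_test.csv` is the reporter's file; any CSV with `text` and `label` columns fits.

```python
from datasets import Features, Value, load_dataset

features = Features({"text": Value("string"), "label": Value("float32")})

dataset = load_dataset(
    "csv",
    data_files=["sst_test.csv"],       # the reporter's file
    features=features,
    split="train",
    download_mode="force_redownload",  # needed until features are part of the cache key (#750)
)
print(dataset.features["label"])  # Value(dtype='float32', id=None)
```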
https://api.github.com/repos/huggingface/datasets/issues/768
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/768/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/768/comments
https://api.github.com/repos/huggingface/datasets/issues/768/events
https://github.com/huggingface/datasets/issues/768
730,908,060
MDU6SXNzdWU3MzA5MDgwNjA=
768
Add a `lazy_map` method to `Dataset` and `DatasetDict`
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[ "This is cool! I think some aspects to think about and decide in terms of API are:\r\n- do we allow several methods (chained i guess)\r\n- how do we inspect the currently set method(s)\r\n- how do we control/reset them" ]
1,603,837,983,000
1,603,875,493,000
null
MEMBER
null
null
The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item only when the item is requested. Two use cases: 1. load images on the fly 2. apply a random function and get different outputs at each epoch (like data augmentation or randomly masking part of a sentence for BERT-like objectives).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/768/timeline
null
false
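As a note on use case 2 in #768 above: newer releases of `datasets` provide `Dataset.with_transform` (and the in-place `set_transform`), which apply a function lazily at access time. A rough sketch of random masking under that API; the 15% masking rate and the use of IMDB are arbitrary choices for illustration.

```python
import random
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")

def random_mask(batch):
    # Runs at access time, so every epoch can see a different masking.
    batch["text"] = [
        " ".join(w if random.random() > 0.15 else "[MASK]" for w in t.split())
        for t in batch["text"]
    ]
    return batch

dataset = dataset.with_transform(random_mask)
print(dataset[:2]["text"])  # two freshly masked examples
```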
https://api.github.com/repos/huggingface/datasets/issues/767
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/767/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/767/comments
https://api.github.com/repos/huggingface/datasets/issues/767/events
https://github.com/huggingface/datasets/issues/767
730,771,610
MDU6SXNzdWU3MzA3NzE2MTA=
767
Add option for named splits when using ds.train_test_split
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[ "Yes definitely we should give more flexibility to control the name of the splits outputted by `train_test_split`.\r\n\r\nRelated is the very interesting feedback from @bramvanroy on how we should improve this method: https://discuss.huggingface.co/t/how-to-split-main-dataset-into-train-dev-test-as-datasetdict/1090/5\r\n\r\nAnd in particular that it should advantageously be able to split in 3 splits as well instead of just 2 like we copied from sklearn." ]
1,603,828,784,000
1,605,017,121,000
null
CONTRIBUTOR
null
null
### Feature Request 🚀 Can we add a way to name your splits when using the `.train_test_split` function? In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, it's kind of useless to get a `test` split back from `train_test_split`, as it'll just overwrite my real `test` split that I intended to keep. ### Workaround this is my hack for dealing with this, for now :slightly_smiling_face: ```python from datasets import load_dataset ds = load_dataset('imdb') ds['train'], ds['validation'] = ds['train'].train_test_split(.1).values() ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/767/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/767/timeline
null
false
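One way to phrase the workaround from #767 above so that the original `test` split is kept and the generated split gets a proper name; the 90/10 ratio and the seed are arbitrary choices for the sketch.

```python
from datasets import DatasetDict, load_dataset

ds = load_dataset("imdb")
split = ds["train"].train_test_split(test_size=0.1, seed=42)

ds = DatasetDict(
    train=split["train"],
    validation=split["test"],  # rename the generated "test" part to "validation"
    test=ds["test"],           # keep the real test split untouched
)
print({name: len(d) for name, d in ds.items()})
```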
https://api.github.com/repos/huggingface/datasets/issues/766
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/766/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/766/comments
https://api.github.com/repos/huggingface/datasets/issues/766/events
https://github.com/huggingface/datasets/issues/766
730,669,596
MDU6SXNzdWU3MzA2Njk1OTY=
766
[GEM] add DART data-to-text generation dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "Is this a duplicate of #924 ?", "Yup, closing! Haven't been keeping track of the solved issues during the sprint." ]
1,603,820,044,000
1,607,002,638,000
1,607,002,638,000
MEMBER
null
null
## Adding a Dataset - **Name:** DART - **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. - **Paper:** https://arxiv.org/abs/2007.02871v1 - **Data:** https://github.com/Yale-LILY/dart - **Motivation:** the dataset will likely be included in the GEM benchmark Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/766/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/765
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/765/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/765/comments
https://api.github.com/repos/huggingface/datasets/issues/765/events
https://github.com/huggingface/datasets/issues/765
730,668,332
MDU6SXNzdWU3MzA2NjgzMzI=
765
[GEM] Add DART data-to-text generation dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[]
1,603,819,943,000
1,603,820,061,000
1,603,820,061,000
MEMBER
null
null
## Adding a Dataset - **Name:** DART - **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. - **Paper:** https://arxiv.org/abs/2007.02871v1 - **Data:** https://github.com/Yale-LILY/dart - **Motivation:** It will likely be included in the GEM generation evaluation benchmark Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/765/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/762/comments
https://api.github.com/repos/huggingface/datasets/issues/762/events
https://github.com/huggingface/datasets/issues/762
730,586,972
MDU6SXNzdWU3MzA1ODY5NzI=
762
[GEM] Add Czech Restaurant data-to-text generation dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[]
1,603,814,447,000
1,607,002,664,000
1,607,002,664,000
MEMBER
null
null
- Paper: https://www.aclweb.org/anthology/W19-8670.pdf - Data: https://github.com/UFAL-DSG/cs_restaurant_dataset - The dataset will likely be part of the GEM benchmark
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/762/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/761
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/761/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/761/comments
https://api.github.com/repos/huggingface/datasets/issues/761/events
https://github.com/huggingface/datasets/issues/761
729,898,867
MDU6SXNzdWU3Mjk4OTg4Njc=
761
Downloaded datasets are not usable offline
{ "login": "ghazi-f", "id": 25091538, "node_id": "MDQ6VXNlcjI1MDkxNTM4", "avatar_url": "https://avatars.githubusercontent.com/u/25091538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghazi-f", "html_url": "https://github.com/ghazi-f", "followers_url": "https://api.github.com/users/ghazi-f/followers", "following_url": "https://api.github.com/users/ghazi-f/following{/other_user}", "gists_url": "https://api.github.com/users/ghazi-f/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghazi-f/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghazi-f/subscriptions", "organizations_url": "https://api.github.com/users/ghazi-f/orgs", "repos_url": "https://api.github.com/users/ghazi-f/repos", "events_url": "https://api.github.com/users/ghazi-f/events{/privacy}", "received_events_url": "https://api.github.com/users/ghazi-f/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes currently you need an internet connection because the lib tries to check for the etag of the dataset script online to see if you don't have it locally already.\r\n\r\nIf we add a way to store the etag/hash locally after the first download, it would allow users to first download the dataset with an internet connection, and still have it working without an internet connection.\r\n\r\nI'll let you know when we add this feature.", "Already fixed by:\r\n- #1726" ]
1,603,745,686,000
1,644,921,148,000
1,644,921,148,000
CONTRIBUTOR
null
null
I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the ```requests``` library while trying to reach the online dataset. Is this the intended behavior? (Sorry, I wrote the first version of this issue while still on nlp 0.3.0).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/761/timeline
completed
false
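For reference, the fix mentioned in the comments of #761 above (#1726) added an offline mode. A sketch of how it is used in recent `datasets` releases, assuming the dataset was downloaded once while a connection was available:

```python
import os

# Must be set before importing datasets; it skips the online check for the
# dataset script and falls back to the local cache instead.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

dataset = load_dataset("imdb", split="train")  # served entirely from the local cache
print(dataset)
```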
https://api.github.com/repos/huggingface/datasets/issues/760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/760/comments
https://api.github.com/repos/huggingface/datasets/issues/760/events
https://github.com/huggingface/datasets/issues/760
729,637,917
MDU6SXNzdWU3Mjk2Mzc5MTc=
760
Add meta-data to the HANS dataset
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[ { "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false } ]
[]
1,603,724,213,000
1,607,002,714,000
1,607,002,714,000
MEMBER
null
null
The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/760/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/759
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/759/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/759/comments
https://api.github.com/repos/huggingface/datasets/issues/759/events
https://github.com/huggingface/datasets/issues/759
729,046,916
MDU6SXNzdWU3MjkwNDY5MTY=
759
(Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
{ "login": "AI678", "id": 63541083, "node_id": "MDQ6VXNlcjYzNTQxMDgz", "avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AI678", "html_url": "https://github.com/AI678", "followers_url": "https://api.github.com/users/AI678/followers", "following_url": "https://api.github.com/users/AI678/following{/other_user}", "gists_url": "https://api.github.com/users/AI678/gists{/gist_id}", "starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AI678/subscriptions", "organizations_url": "https://api.github.com/users/AI678/orgs", "repos_url": "https://api.github.com/users/AI678/repos", "events_url": "https://api.github.com/users/AI678/events{/privacy}", "received_events_url": "https://api.github.com/users/AI678/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Are you running the script on a machine with an internet connection ?", "Yes , I can browse the url through Google Chrome.", "Does this HEAD request return 200 on your machine ?\r\n```python\r\nimport requests \r\nrequests.head(\"https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py\")\r\n```\r\n\r\nIf it returns 200, could you try again to load the dataset ?", "Thank you very much for your response.\r\nWhen I run \r\n``` \r\nimport requests \r\nrequests.head(\"https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py\")\r\n```\r\nIt returns 200.\r\n\r\nAnd I try again to load the dataset. I got the following errors again. \r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\", line 608, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 475, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"C:\\Users\\666666\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\cnn_dailymail\\0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\\cnn_dailymail.py\", line 253, in _split_generators\r\n dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\download_manager.py\", line 254, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\download_manager.py\", line 175, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 224, in map_nested\r\n mapped = [\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 225, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 163, in _single_map_nested\r\n return function(data_struct)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 300, in cached_path\r\n output_path = get_from_cache(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 475, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\r\n\r\nConnection error happened but the url was different.\r\n\r\nI add the following code.\r\n```\r\nrequests.head(\"https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\")\r\n```\r\nThis didn't return 200\r\nIt returned like this:\r\n\r\nTraceback (most recent call last):\r\n File 
\"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 159, in _new_conn\r\n conn = connection.create_connection(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\util\\connection.py\", line 84, in create_connection\r\n raise err\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\util\\connection.py\", line 74, in create_connection\r\n sock.connect(sa)\r\nTimeoutError: [WinError 10060] \r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 670, in urlopen\r\n httplib_response = self._make_request(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 381, in _make_request\r\n self._validate_conn(conn)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 978, in _validate_conn\r\n conn.connect()\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 309, in connect\r\n conn = self._new_conn()\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 171, in _new_conn\r\n raise NewConnectionError(\r\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000001F6060618E0>: Failed to establish a new connection: [WinError 10060] ", "Is google drive blocked on your network ?\r\nFor me \r\n```python\r\nrequests.head(\"https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\")\r\n```\r\nreturns 200", "I can browse the google drive through google chrome. It's weird. I can download the dataset through google drive manually.", "Could you try to update `requests` maybe ?\r\nIt works with 2.23.0 on my side", "My ```requests``` is 2.24.0 . It still can't return 200.", "Is it possible I download the dataset manually from google drive and use it for further test ? How can I do this ? I want to reproduce the model in this link https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16. But I can't download the dataset through load_dataset method . I have tried many times and the connection error always happens .\r\n", "The head request should definitely work, not sure what's going on on your side.\r\nIf you find a way to make it work, please post it here since other users might encounter the same issue.\r\n\r\nIf you don't manage to fix it you can use `load_dataset` on google colab and then save it using `dataset.save_to_disk(\"path/to/dataset\")`.\r\nThen you can download the directory on your machine and do\r\n```python\r\nfrom datasets import load_from_disk\r\ndataset = load_from_disk(\"path/to/local/dataset\")\r\n```", "Hi\r\nI want to know if this problem has been solved because I encountered a similar issue. Thanks.\r\n`train_data = datasets.load_dataset(\"xsum\", `split=\"train\")`\r\n`ConnectionError:` Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/xsum/xsum.py`", "Hi @smile0925 ! Do you have an internet connection ? 
Are you using some kind of proxy that may block the access to this file ?\r\n\r\nOtherwise you can try to update `datasets` since we introduced retries for http requests in the 1.2.0 version\r\n```\r\npip install --upgrade datasets\r\n```\r\nLet me know if that helps.", "Hi @lhoestq \r\nOh, may be you are right. I find that my server uses some kind of proxy that block the access to this file.\r\n![image](https://user-images.githubusercontent.com/46243662/106456211-2ca24180-64c8-11eb-831e-47e9b40e7da4.png)\r\n\r\n", "> Hi @lhoestq\r\n> Oh, may be you are right. I find that my server uses some kind of proxy that block the access to this file.\r\n> ![image](https://user-images.githubusercontent.com/46243662/106456211-2ca24180-64c8-11eb-831e-47e9b40e7da4.png)\r\n\r\nI have the same problem, have you solved it? Many thanks", "Hi @ZhengxiangShi \r\nYou can first try whether your network can access these files. I need to use VPN to access these files, so I download the files that cannot be accessed to the local in advance, and then use them in the code. Like this,\r\n`train_data = datasets.load_dataset(\"xsum.py\", split=\"train\")`" ]
1,603,640,097,000
1,628,100,609,000
1,628,100,609,000
NONE
null
null
Hey, I want to load the cnn-dailymail dataset for fine-tune. I write the code like this from datasets import load_dataset test_dataset = load_dataset(“cnn_dailymail”, “3.0.0”, split=“train”) And I got the following errors. Traceback (most recent call last): File “test.py”, line 7, in test_dataset = load_dataset(“cnn_dailymail”, “3.0.0”, split=“test”) File “C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py”, line 589, in load_dataset module_path, hash = prepare_module( File “C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py”, line 268, in prepare_module local_path = cached_path(file_path, download_config=download_config) File “C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py”, line 300, in cached_path output_path = get_from_cache( File “C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py”, line 475, in get_from_cache raise ConnectionError(“Couldn’t reach {}”.format(url)) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py How can I fix this ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/759/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/759/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/758
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/758/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/758/comments
https://api.github.com/repos/huggingface/datasets/issues/758/events
https://github.com/huggingface/datasets/issues/758
728,638,559
MDU6SXNzdWU3Mjg2Mzg1NTk=
758
Process 0 very slow when using num_procs with map to tokenizer
{ "login": "ksjae", "id": 17930170, "node_id": "MDQ6VXNlcjE3OTMwMTcw", "avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ksjae", "html_url": "https://github.com/ksjae", "followers_url": "https://api.github.com/users/ksjae/followers", "following_url": "https://api.github.com/users/ksjae/following{/other_user}", "gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}", "starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ksjae/subscriptions", "organizations_url": "https://api.github.com/users/ksjae/orgs", "repos_url": "https://api.github.com/users/ksjae/repos", "events_url": "https://api.github.com/users/ksjae/events{/privacy}", "received_events_url": "https://api.github.com/users/ksjae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! Thanks for reporting.\r\nIs the distribution of text length of your data evenly distributed across your dataset ? I mean, could it be because the examples in the first part of your dataset are slower to process ?\r\nAlso could how many CPUs can you use for multiprocessing ?\r\n```python\r\nimport multiprocessing\r\nprint(multiprocessing.cpu_count())\r\n```\r\nWhich tokenizer are you using ?", "Using pre trained HF tokenizer. The result is the same with tokenizer multiprocessing off and on.\r\nI have (absolutely) no idea about the distribution, but since this issue occurs on all of my datasets(regardless of files), I don't think distribution is the problems.\r\n\r\nI can use up to 16 cores.", "Ok weird, I don't manage to reproduce this issue on my side.\r\nDoes it happen even with `num_proc=2` for example ?\r\nAlso could you provide more details about your OS and the versions of tokenizers/datasets/multiprocess that you're using ?", "Yes, I can confirm it also happens with ```num_proc=2```.\r\n```\r\ntokenizers 0.9.2\r\ndatasets 1.1.2\r\nmultiprocess 0.70.10\r\n```\r\n```\r\nLinux nipa2020-0629 4.4.0-178-generic #208-Ubuntu SMP Sun Apr 5 23:45:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux\r\n```", "I can't reproduce on my side unfortunately with the same versions.\r\n\r\nDo you have issues when doing multiprocessing with python ?\r\n```python\r\nfrom tqdm.auto import tqdm\r\nfrom multiprocess import Pool, RLock\r\n\r\ndef process_data(shard):\r\n # implement\r\n\r\nnum_proc = 8\r\nshards = [] # implement, this must be a list of size num_proc\r\n\r\nwith Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n results = [pool.apply_async(process_data, shard=shard) for shard in shards]\r\n transformed_shards = [r.get() for r in results]\r\n```", "Nah, I'll just wait a few hours. Thank you for helping, though." ]
1,603,507,220,000
1,603,857,586,000
1,603,857,585,000
NONE
null
null
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png"> The code I am using is ``` dataset = load_dataset("text", data_files=[file_path], split='train') dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), num_proc=8) dataset.set_format(type='torch', columns=['input_ids']) dataset.save_to_disk(file_path+'.arrow') ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/758/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/757/comments
https://api.github.com/repos/huggingface/datasets/issues/757/events
https://github.com/huggingface/datasets/issues/757
728,241,494
MDU6SXNzdWU3MjgyNDE0OTQ=
757
CUDA out of memory
{ "login": "li1117heex", "id": 47059217, "node_id": "MDQ6VXNlcjQ3MDU5MjE3", "avatar_url": "https://avatars.githubusercontent.com/u/47059217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/li1117heex", "html_url": "https://github.com/li1117heex", "followers_url": "https://api.github.com/users/li1117heex/followers", "following_url": "https://api.github.com/users/li1117heex/following{/other_user}", "gists_url": "https://api.github.com/users/li1117heex/gists{/gist_id}", "starred_url": "https://api.github.com/users/li1117heex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/li1117heex/subscriptions", "organizations_url": "https://api.github.com/users/li1117heex/orgs", "repos_url": "https://api.github.com/users/li1117heex/repos", "events_url": "https://api.github.com/users/li1117heex/events{/privacy}", "received_events_url": "https://api.github.com/users/li1117heex/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you provide more details ? What's the code you ran ?", "```python\r\ntokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/small')\r\n\r\ndef tokenize(batch):\r\n return tokenizer(batch['text'], padding='max_length', truncation=True,max_length=512)\r\n\r\ndataset = load_dataset(\"bookcorpus\",split='train[:1000]').shuffle()\r\ndataset = dataset.map(tokenize, batched=True, batch_size=512)\r\n\r\n# dataset = LineByLineTextDataset(\r\n# tokenizer=tokenizer,\r\n# file_path=\"./wiki1000.txt\",\r\n# block_size=128\r\n# )\r\n\r\ndata_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n)\r\n\r\nconfig=FunnelConfig(\r\n return_dict=True\r\n)\r\n\r\nmodel= FunnelForMaskedLM(config=config)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./checkpoints\",\r\n overwrite_output_dir=True,\r\n do_train=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=16,\r\n per_device_eval_batch_size=16,\r\n save_steps=10000,\r\n logging_dir='./ptlogs'\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset,\r\n)\r\ntrainer.train()\r\n```", "`RuntimeError: CUDA out of memory. Tried to allocate 954.00 MiB (GPU 0; 15.90 GiB total capacity; 14.35 GiB already allocated; 753.75 MiB free; 14.39 GiB reserved in total by PyTorch)\r\nException raised from malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:272 (most recent call first):`\r\n\r\npart of error output", "from funnel model to bert model : error still happened\r\n\r\nfrom your dataset to LineByLineTextDataset : error disapeared", "notice i just loaded 1000 rows of data", "the error happens when executing loss.backward()", "Since you're using a data collator you don't need to tokenizer the dataset using `map`. Could you try not to use `map` and only the data collator instead ? The data collator is supposed to pad to the longest sequence in each batch afaik, instead of padding to 512.\r\n\r\nAlso cc @sgugger ", "Closing this one.\r\nFeel free to re-open if you have other questions about this issue" ]
1,603,461,420,000
1,608,732,389,000
1,608,732,389,000
NONE
null
null
With your dataset, CUDA runs out of memory as soon as the trainer begins; however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/757/timeline
completed
false
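A sketch of the suggestion from the comments of #757 above: tokenize without `padding='max_length'` and let the data collator pad each batch dynamically, which keeps batches much smaller than always padding to 512 tokens. The model, config and `TrainingArguments` are unchanged from the reporter's script and omitted here.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")

dataset = load_dataset("bookcorpus", split="train[:1000]").shuffle()
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],   # keep only the tokenized columns for the collator
)

# Pads each batch to its longest sequence instead of a fixed 512 tokens.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```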
https://api.github.com/repos/huggingface/datasets/issues/752
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/752/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/752/comments
https://api.github.com/repos/huggingface/datasets/issues/752/events
https://github.com/huggingface/datasets/issues/752
726,917,801
MDU6SXNzdWU3MjY5MTc4MDE=
752
Clicking on a metric in the search page points to datasets page giving "Missing dataset" warning
{ "login": "ogabrielluiz", "id": 24829397, "node_id": "MDQ6VXNlcjI0ODI5Mzk3", "avatar_url": "https://avatars.githubusercontent.com/u/24829397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ogabrielluiz", "html_url": "https://github.com/ogabrielluiz", "followers_url": "https://api.github.com/users/ogabrielluiz/followers", "following_url": "https://api.github.com/users/ogabrielluiz/following{/other_user}", "gists_url": "https://api.github.com/users/ogabrielluiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/ogabrielluiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ogabrielluiz/subscriptions", "organizations_url": "https://api.github.com/users/ogabrielluiz/orgs", "repos_url": "https://api.github.com/users/ogabrielluiz/repos", "events_url": "https://api.github.com/users/ogabrielluiz/events{/privacy}", "received_events_url": "https://api.github.com/users/ogabrielluiz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the report, can reproduce. Will fix", "Fixed now @ogabrielluiz " ]
1,603,320,983,000
1,603,383,582,000
1,603,383,582,000
NONE
null
null
Hi! Sorry if this isn't the right place to talk about the website, I just didn't know exactly where to write this. Searching for a metric in https://huggingface.co/metrics gives the right results, but clicking on a metric (e.g. ROUGE) points to https://huggingface.co/datasets/rouge. Clicking on a metric without searching points to the right page. Thanks for all the great work!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/752/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/751
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/751/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/751/comments
https://api.github.com/repos/huggingface/datasets/issues/751/events
https://github.com/huggingface/datasets/issues/751
726,820,191
MDU6SXNzdWU3MjY4MjAxOTE=
751
Error loading ms_marco v2.1 using load_dataset()
{ "login": "JainSahit", "id": 30478979, "node_id": "MDQ6VXNlcjMwNDc4OTc5", "avatar_url": "https://avatars.githubusercontent.com/u/30478979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JainSahit", "html_url": "https://github.com/JainSahit", "followers_url": "https://api.github.com/users/JainSahit/followers", "following_url": "https://api.github.com/users/JainSahit/following{/other_user}", "gists_url": "https://api.github.com/users/JainSahit/gists{/gist_id}", "starred_url": "https://api.github.com/users/JainSahit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JainSahit/subscriptions", "organizations_url": "https://api.github.com/users/JainSahit/orgs", "repos_url": "https://api.github.com/users/JainSahit/repos", "events_url": "https://api.github.com/users/JainSahit/events{/privacy}", "received_events_url": "https://api.github.com/users/JainSahit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "There was a similar issue in #294 \r\nClearing the cache and download again the dataset did the job. Could you try to clear your cache and download the dataset again ?", "I was able to load the dataset successfully, I'm pretty sure it's just a cache issue that you have.\r\nLet me know if clearing your cache fixes the problem", "Yes, it indeed was a cache issue!\r\nThanks for reaching out!!" ]
1,603,310,083,000
1,604,539,917,000
1,604,539,917,000
NONE
null
null
Code: `dataset = load_dataset('ms_marco', 'v2.1')` Error: ``` `--------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) <ipython-input-16-34378c057212> in <module>() 9 10 # Downloading and loading a dataset ---> 11 dataset = load_dataset('ms_marco', 'v2.1') 10 frames /usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx) 353 """ 354 try: --> 355 obj, end = self.scan_once(s, idx) 356 except StopIteration as err: 357 raise JSONDecodeError("Expecting value", s, err.value) from None JSONDecodeError: Unterminated string starting at: line 1 column 388988661 (char 388988660) ` ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/751/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/750/comments
https://api.github.com/repos/huggingface/datasets/issues/750/events
https://github.com/huggingface/datasets/issues/750
726,589,446
MDU6SXNzdWU3MjY1ODk0NDY=
750
load_dataset doesn't include `features` in its hash
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,603,293,401,000
1,603,964,161,000
1,603,964,161,000
MEMBER
null
null
It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored. Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of: ``` dataset = load_dataset("glue", "mnli") features = dataset["train"].features features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order dataset = load_dataset("glue", "mnli", features=features) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/750/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/749/comments
https://api.github.com/repos/huggingface/datasets/issues/749/events
https://github.com/huggingface/datasets/issues/749
726,366,062
MDU6SXNzdWU3MjYzNjYwNjI=
749
[XGLUE] Adding new dataset
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Amazing! ", "Small poll @thomwolf @yjernite @lhoestq @JetRunner @qiweizhen .\r\n\r\nAs stated in the XGLUE paper: https://arxiv.org/pdf/2004.01401.pdf , for each of the 11 down-stream tasks training data is only available in English, whereas development and test data is available in multiple different language *cf.* here: \r\n\r\n![Screenshot from 2020-11-04 15-02-17](https://user-images.githubusercontent.com/23423619/98120893-d7499a80-1eae-11eb-9d0b-57dfe5d4ee68.png)\r\n\r\nSo, I'd suggest to have exactly 11 \"language-independent\" configs: \"ner\", \"pos\", ... and give the sample in each dataset in the config a \"language\" label being one of \"ar\", \"bg\", .... => To me this makes more sense than making languaga specific config, *e.g.* \"ner-de\", ...especially because training data is only available in English. Do you guys agree? ", "In this case we should have named splits, so config `ner` has splits `train`, `validation`, `test-en`, `test-ar`, `test-bg`, etc...\r\n\r\nThis is more in the spirit of the task afaiu, and will avoid making users do the filtering step themselves when testing different models or different configurations of the same model.", "I see your point! \r\n\r\nI think this would be quite feasible to do and makes sense to me as well! In the paper results are reported per language, so it seems more natural to do it this way. \r\n\r\nGood for me @yjernite ! What do the others think? @lhoestq \r\n", "I agree with Yacine on this!", "Okey actually not that easy to add things like `test-de` to `datasets` => this would be the first dataset to have this.\r\nSee: https://github.com/huggingface/datasets/pull/802", "IMO we should have one config per language. That's what we're doing for xnli, xtreme etc.\r\nHaving split names that depend on the language seems wrong. We should try to avoid split names that are not train/val/test.\r\nSorry for late response on this one", "@lhoestq agreed on having one config per language, but we also need to be able to have different split names and people are going to want to use hyphens, so we should at the very least warn them why it's failing :) E.g. for ANLI with different stages of data (currently using underscores) or https://www.tau-nlp.org/commonsenseqa with their train-sanity or dev-sanity splits", "Yes sure ! Could you open a separate issue for that ?", "Really cool dataset 👍 btw. does Transformers support all 11 tasks 🤔 would be awesome to have a xglue script (like the \"normal\" glue one)", "Just to make sure this is what we want here. If we add one config per language, \r\n\r\nthis means that this dataset ends up with well over 100 different configs most of which will have the same `train` split. The train split is always in English. Also, I'm not sure whether it's better for the user to be honest. 
\r\n\r\nI think it could be quite confusing for the user to have\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner-de\", split=\"train\")\r\n```\r\n\r\nin English even though it's `ner-de`.\r\n\r\nTo be honest, I'd prefer:\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test-de\")\r\ntest_dataset_fr = load_dataset(\"xglue\", \"ner\", split=\"test-fr\")\r\n```\r\n\r\nhere", "Oh yes right I didn't notice the train set was always in english sorry.\r\nMoreover it seems that the way this dataset is used is to pick a pretrained multilingual model, fine-tune it on the english train set and then evaluate on each test set (one per language).\r\nSo to better fit the usual usage of this dataset, I agree that it's better to have one test split per language. \r\n\r\nSomething like your latest example patrick is fine imo :\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test.de\")\r\n```\r\n\r\nI just replace test-de with test.de since `-` is not allowed for split names (it has to follow the `\\w+` regex), and usually we specify the language after a point. ", "Closing since XGLUE has been added in #802 , thanks patrick :) ", "I need xglue Urdu summarization dataset so how can i get it?", "According to the table in https://huggingface.co/datasets/xglue, Urdu only exists for POS and XNLI in XGLUE - not for summarization" ]
1,603,277,496,000
1,664,537,730,000
1,609,927,375,000
MEMBER
null
null
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf). I'm planning on adding the dataset to the library myself in a couple of weeks. Also tagging @JetRunner @qiweizhen in case I need some guidance
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/749/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/744
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/744/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/744/comments
https://api.github.com/repos/huggingface/datasets/issues/744/events
https://github.com/huggingface/datasets/issues/744
724,918,448
MDU6SXNzdWU3MjQ5MTg0NDg=
744
Dataset Explorer Doesn't Work for squad_es and squad_it
{ "login": "gaotongxiao", "id": 22607038, "node_id": "MDQ6VXNlcjIyNjA3MDM4", "avatar_url": "https://avatars.githubusercontent.com/u/22607038?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gaotongxiao", "html_url": "https://github.com/gaotongxiao", "followers_url": "https://api.github.com/users/gaotongxiao/followers", "following_url": "https://api.github.com/users/gaotongxiao/following{/other_user}", "gists_url": "https://api.github.com/users/gaotongxiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/gaotongxiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaotongxiao/subscriptions", "organizations_url": "https://api.github.com/users/gaotongxiao/orgs", "repos_url": "https://api.github.com/users/gaotongxiao/repos", "events_url": "https://api.github.com/users/gaotongxiao/events{/privacy}", "received_events_url": "https://api.github.com/users/gaotongxiao/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
[ "Oups wrong click.\r\nThis one is for you @srush" ]
1,603,136,052,000
1,603,730,177,000
1,603,730,177,000
NONE
null
null
https://huggingface.co/nlp/viewer/?dataset=squad_es https://huggingface.co/nlp/viewer/?dataset=squad_it Both pages show "OSError: [Errno 28] No space left on device".
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/744/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/743/comments
https://api.github.com/repos/huggingface/datasets/issues/743/events
https://github.com/huggingface/datasets/issues/743
724,703,980
MDU6SXNzdWU3MjQ3MDM5ODA=
743
load_dataset for CSV files not working
{ "login": "iliemihai", "id": 2815308, "node_id": "MDQ6VXNlcjI4MTUzMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/2815308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliemihai", "html_url": "https://github.com/iliemihai", "followers_url": "https://api.github.com/users/iliemihai/followers", "following_url": "https://api.github.com/users/iliemihai/following{/other_user}", "gists_url": "https://api.github.com/users/iliemihai/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliemihai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliemihai/subscriptions", "organizations_url": "https://api.github.com/users/iliemihai/orgs", "repos_url": "https://api.github.com/users/iliemihai/repos", "events_url": "https://api.github.com/users/iliemihai/events{/privacy}", "received_events_url": "https://api.github.com/users/iliemihai/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Thank you !\r\nCould you provide a csv file that reproduces the error ?\r\nIt doesn't have to be one of your dataset. As long as it reproduces the error\r\nThat would help a lot !", "I think another good example is the following:\r\n`\r\nfrom datasets import load_dataset\r\n`\r\n`\r\ndataset = load_dataset(\"csv\", data_files=[\"./sts-dev.csv\"], delimiter=\"\\t\", column_names=[\"one\", \"two\", \"three\", \"four\", \"score\", \"sentence1\", \"sentence2\"], script_version=\"master\")`\r\n`\r\n\r\nDisplayed error `CSV parse error: Expected 7 columns, got 6` even tough I put 7 columns. First four columns from the csv don't have a name, so I've named them by default. The csv file is the .dev file from STSb benchmark dataset.\r\n\r\n", "Hi, seems I also can't read csv file. I was trying with a dummy csv with only three rows.\r\n\r\n```\r\ntext,label\r\nI hate google,negative\r\nI love Microsoft,positive\r\nI don't like you,negative\r\n```\r\nI was using the HuggingFace image in Paperspace Gradient (datasets==1.1.3). The following code doesn't work:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\")\r\n```\r\nIt outputs the following:\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset csv/default-3b6254ff4dd403e5 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/csv/default-3b6254ff4dd403e5/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2...\r\nDataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-3b6254ff4dd403e5/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2. Subsequent calls will reuse this data.\r\n```\r\nBut `len(dataset)` gives `1` and I can't access rows with indexing `dataset[0]` (it gives `KeyError: 0`).\r\n\r\nHowever, loading from pandas dataframe is working.\r\n```\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\ndf = pd.read_csv('test_data.csv')\r\ndataset = Dataset.from_pandas(df)\r\n```\r\n\r\n", "This is because load_dataset without `split=` returns a dictionary of split names (train/validation/test) to dataset.\r\nYou can do\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\")\r\nprint(dataset[\"train\"][0])\r\n```\r\n\r\nOr if you want to directly get the train split:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\", split=\"train\")\r\nprint(dataset[0])\r\n```\r\n", "Good point\r\n\r\nDesign question for us, though: should `load_dataset` when no split is specified and only one split is present in the dataset (common use case with CSV/text/JSON datasets) return a `Dataset` instead of a `DatsetDict`? I feel like it's often what the user is expecting. I break a bit the paradigm of a unique return type but since this library is designed for widespread DS people more than CS people usage I would tend to think that UX should take precedence over CS reasons. 
What do you think?", "In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.\r\nI'm ok with returning the dataset object if no split specifications are given for text/json/csv/pandas.\r\n\r\nFor the other datasets ton the other hand the user doesn't know in advance the splits so I would keep the dictionary by default. What do you think ?", "Thanks for your quick response! I'm fine with specifying the split as @lhoestq suggested. My only concern is when I'm loading from python dict or pandas, the library returns a dataset instead of a dictionary of datasets when no split is specified. I know that they use a different function `Dataset.from_dict` or `Dataset.from_pandas` but the text/csv files use `load_dataset()`. However, to the user, they do the same task and we probably expect them to have the same behavior.", "```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='./amazon_data/Video_Games_5.csv', delimiter=\",\", split=['train', 'test'])\r\n```\r\nI was running the above line, but got this error.\r\n\r\n```ValueError: Unknown split \"test\". Should be one of ['train'].```\r\n\r\nThe data is amazon product data. I load the Video_Games_5.json.gz data into pandas and save it as csv file. and then load the csv file using the above code. I thought, ```split=['train', 'test']``` would split the data into train and test. did I misunderstood?\r\n\r\nThank you!\r\n\r\n", "Hi ! the `split` argument in `load_dataset` is used to select the splits you want among the available splits.\r\nHowever when loading a csv with a single file as you did, only a `train` split is available by default.\r\n\r\nIndeed since `data_files='./amazon_data/Video_Games_5.csv'` is equivalent to `data_files={\"train\": './amazon_data/Video_Games_5.csv'}`, you can get a dataset with \r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='./amazon_data/Video_Games_5.csv', delimiter=\",\", split=\"train\")\r\n```\r\n\r\nAnd then to get both a train and test split you can do\r\n```python\r\ndataset = dataset.train_test_split()\r\nprint(dataset.keys())\r\n# ['train', 'test']\r\n```\r\n\r\n\r\nAlso note that a csv dataset may have several available splits if it is defined this way:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files={\r\n \"train\": './amazon_data/Video_Games_5_train.csv',\r\n \"test\": './amazon_data/Video_Games_5_test.csv'\r\n})\r\nprint(dataset.keys())\r\n# ['train', 'test']\r\n```\r\n", "> In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.\r\n> I'm ok with returning the dataset object if no split specifications are given for text/json/csv/pandas.\r\n> \r\n> For the other datasets ton the other hand the user doesn't know in advance the splits so I would keep the dictionary by default. What do you think ?\r\n\r\nYes maybe this would be good. 
I think having to select 'train' from the resulting object why the user gave no split information is a confusing and not intuitive behavior.", "> Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.\r\n> \r\n> `from datasets import load_dataset`\r\n> `dataset = load_dataset(\"csv\", data_files=[\"./sample_data.csv\"], delimiter=\"\\t\", column_names=[\"title\", \"text\"], script_version=\"master\")`\r\n> \r\n> Displayed error:\r\n> `... ArrowInvalid: CSV parse error: Expected 2 columns, got 1`\r\n\r\nI'm also facing the same issue when trying to load from a csv file locally:\r\n\r\n```python\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('csv', data_files='sample_data.csv')\r\n```\r\n\r\nError when executed from Google Colab:\r\n```python\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-34-79a8d4f65ed6> in <module>()\r\n 1 from nlp import load_dataset\r\n----> 2 dataset = load_dataset('csv', data_files='sample_data.csv')\r\n\r\n9 frames\r\n/usr/local/lib/python3.7/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 535 try:\r\n 536 # Prepare split will record examples associated to the split\r\n--> 537 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 538 except OSError:\r\n 539 raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)\r\n 863 \r\n 864 generator = self._generate_tables(**split_generator.gen_kwargs)\r\n--> 865 for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n 866 writer.write_table(table)\r\n 867 num_examples, num_bytes = writer.finalize()\r\n\r\n/usr/local/lib/python3.7/dist-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)\r\n 213 def __iter__(self, *args, **kwargs):\r\n 214 try:\r\n--> 215 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 216 # return super(tqdm...) 
will not catch exception\r\n 217 yield obj\r\n\r\n/usr/local/lib/python3.7/dist-packages/tqdm/std.py in __iter__(self)\r\n 1102 fp_write=getattr(self.fp, 'write', sys.stderr.write))\r\n 1103 \r\n-> 1104 for obj in iterable:\r\n 1105 yield obj\r\n 1106 # Update and possibly print the progressbar.\r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/datasets/csv/ede98314803c971fef04bcee45d660c62f3332e8a74491e0b876106f3d99bd9b/csv.py in _generate_tables(self, files)\r\n 78 read_options=self.config.pa_read_options,\r\n 79 parse_options=self.config.pa_parse_options,\r\n---> 80 convert_options=self.config.convert_options,\r\n 81 )\r\n 82 yield i, pa_table\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: CSV parse error: Expected 1 columns, got 8\r\n```\r\n\r\nVersion:\r\n```\r\nnlp==0.4.0\r\n```", "Hi @kauvinlucas\r\n\r\nYou can use the latest versions of `datasets` to do this.\r\nTo do so, just `pip install datasets` instead of `nlp` (the library was renamed) and then\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='sample_data.csv')", "Hi \r\nI'm having a different problem with loading local csv. \r\n```Python\r\nfrom datasets import load_dataset \r\ndataset = load_dataset('csv', data_files='sample.csv') \r\n``` \r\n\r\ngives `ValueError: Specified named and prefix; you can only specify one.` error \r\n\r\nversions: \r\n- datasets: 1.1.3 \r\n- python: 3.9.6 \r\n- pyarrow: 2.0.0 ", "Oh.. I figured it out. According to issue #[42387](https://github.com/pandas-dev/pandas/issues/42387) from pandas, this new version does not accept None for both parameters (which was being done by the repo I'm testing). Dowgrading Pandas==1.0.4 and Python==3.8 worked", "Hi, \r\nI got an `OSError: Cannot find data file. ` when I tried to use load_dataset with tsv files. I have checked the paths, and they are correct. 
\r\n\r\nversions\r\n- python: 3.7.9\r\n- datasets: 1.1.3\r\n- pyarrow: 2.0.0\r\n- transformers: 4.2.2\r\n\r\n~~~\r\ndata_files = {\"train\": \"train.tsv\", \"test\",: \"test.tsv\"}\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n~~~\r\n\r\nThe entire Error message is on below:\r\n\r\n```08/14/2021 16:55:44 - INFO - __main__ - load a local file for train: /project/media-framing/transformer4/data/0/val/label1.tsv\r\n08/14/2021 16:55:44 - INFO - __main__ - load a local file for test: /project/media-framing/transformer4/data/unlabel/test.tsv\r\nUsing custom data configuration default\r\nDownloading and preparing dataset csv/default-00a4200ae8507533 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /usr4/cs542sp/hey1/.cache/huggingface/datasets/csv/default-00a4200ae8507533/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2...\r\nTraceback (most recent call last):\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 592, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 944, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 307, in finalize\r\n self.stream.close()\r\n File \"pyarrow/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"run_glue.py\", line 484, in <module>\r\n main()\r\n File \"run_glue.py\", line 243, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/load.py\", line 610, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 515, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 594, in _download_and_prepare\r\n raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\nOSError: Cannot find data file. ```", "Hi ! It looks like the error stacktrace doesn't match with your code snippet.\r\n\r\nWhat error do you get when running this ?\r\n```\r\ndata_files = {\"train\": \"train.tsv\", \"test\",: \"test.tsv\"}\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n```\r\ncan you check that both tsv files are in the same folder as the current working directory of your shell ?", "Hi @lhoestq, Below is the entire error message after I move both tsv files to the same directory. It's the same with I got before.\r\n```\r\n/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. 
Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)\r\n return torch._C._cuda_getDeviceCount() > 0\r\n08/29/2021 22:56:43 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False\r\n08/29/2021 22:56:43 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=/projectnb/media-framing/pred_result/label1/, overwrite_output_dir=True, do_train=True, do_eval=False, do_predict=True, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=8.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs/Aug29_22-56-43_scc1, logging_first_step=False, logging_steps=500, save_steps=3000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=/projectnb/media-framing/pred_result/label1/, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, _n_gpu=0)\r\n08/29/2021 22:56:43 - INFO - __main__ - load a local file for train: /project/media-framing/transformer4/temp_train.tsv\r\n08/29/2021 22:56:43 - INFO - __main__ - load a local file for test: /project/media-framing/transformer4/temp_test.tsv\r\n08/29/2021 22:56:43 - WARNING - datasets.builder - Using custom data configuration default-df627c23ac0e98ec\r\nDownloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /usr4/cs542sp/hey1/.cache/huggingface/datasets/csv/default-df627c23ac0e98ec/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff...\r\nTraceback (most recent call last):\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 693, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 1166, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 428, in finalize\r\n self.stream.close()\r\n File \"pyarrow/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"run_glue.py\", line 487, in <module>\r\n main()\r\n File \"run_glue.py\", line 244, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/load.py\", line 852, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File 
\"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 616, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 699, in _download_and_prepare\r\n + str(e)\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nerror closing file\r\n```", "Hi !\r\nCan you try running this into a python shell directly ?\r\n\r\n```python\r\nimport os\r\nfrom datasets import load_dataset\r\n\r\ndata_files = {\"train\": \"train.tsv\", \"test\": \"test.tsv\"}\r\nassert all(os.path.isfile(data_file) for data_file in data_files.values()), \"Couln't find files\"\r\n\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\nprint(\"success !\")\r\n```\r\n\r\nThis way all the code from `run_glue.py` doesn't interfere with our tests :)", "Hi @lhoestq, \r\n\r\nBelow is what I got from terminal after I copied and run your code. I think the files themselves are good since there is no assertion error. \r\n\r\n```\r\nUsing custom data configuration default-df627c23ac0e98ec\r\nDownloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /usr4/cs542sp/hey1/.cache/huggingface/datasets/csv/default-df627c23ac0e98ec/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff...\r\nTraceback (most recent call last):\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 693, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 1166, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 428, in finalize\r\n self.stream.close()\r\n File \"pyarrow/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"test.py\", line 7, in <module>\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/load.py\", line 852, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 616, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 699, in _download_and_prepare\r\n + str(e)\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nerror closing file\r\n```", "Hi, could this be a permission error ? I think it fails to close the arrow file that contains the data from your CSVs in the cache.\r\n\r\nBy default datasets are cached in `~/.cache/huggingface/datasets`, could you check that you have the right permissions ?\r\nYou can also try to change the cache directory by passing `cache_dir=\"path/to/my/cache/dir\"` to `load_dataset`.", "Thank you!! 
@lhoestq\r\n\r\nFor some reason, I don't have the default path for datasets to cache, maybe because I work from a remote system. The issue solved after I pass the `cache_dir` argument to the function. Thank you very much!!", "> Hi, could this be a permission error ? I think it fails to close the arrow file that contains the data from your CSVs in the cache.\r\n> \r\n> By default datasets are cached in `~/.cache/huggingface/datasets`, could you check that you have the right permissions ? You can also try to change the cache directory by passing `cache_dir=\"path/to/my/cache/dir\"` to `load_dataset`.\r\n\r\nThis is the exact solution I have been finding for the whole afternoon. Thanks a lot!\r\nI tried to do a training on a cluster computing system. The user's home directory is shared between nodes.\r\nIt always gets **stuck** at dataset loading...\r\nThe reason might be, the node (with GPU) can't read/write data in the default cache folder (in my home directory).\r\nAfter using an intermediate cache folder, this issue is resolved for me." ]
1,603,119,231,000
1,669,654,776,000
null
CONTRIBUTOR
null
null
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets. ` from datasets import load_dataset ` ` dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master") ` Displayed error: ` ... ArrowInvalid: CSV parse error: Expected 2 columns, got 1 ` I should mention that when I tried to read data from `https://github.com/lhoestq/transformers/tree/custom-dataset-in-rag-retriever/examples/rag/test_data/my_knowledge_dataset.csv` it worked without a problem. I've read that there might be some problems with the \r character, so I've removed those characters from the custom dataset, but the problem still remains. I've added a colab reproducing the bug, but unfortunately I cannot provide the dataset. https://colab.research.google.com/drive/1Qzu7sC-frZVeniiWOwzoCe_UHZsrlxu8?usp=sharing Are there any workarounds for it? Thank you
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/743/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/741/comments
https://api.github.com/repos/huggingface/datasets/issues/741/events
https://github.com/huggingface/datasets/issues/741
723,924,275
MDU6SXNzdWU3MjM5MjQyNzU=
741
Creating dataset consumes too much memory
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for reporting.\r\nIn theory since the dataset script is just made to yield examples to write them into an arrow file, it's not supposed to create memory issues.\r\n\r\nCould you please try to run this exact same loop in a separate script to see if it's not an issue with `PIL` ?\r\nYou can just copy paste what's inside `_generate_examples` and remove all the code for `datasets` (remove yield).\r\n\r\nIf the RAM usage stays low after 600 examples it means that it comes from some sort of memory leak in the library, or with pyarrow.", "Here's an equivalent loading code:\r\n```python\r\nimages_path = \"PHOENIX-2014-T-release-v3/PHOENIX-2014-T/features/fullFrame-210x260px/train\"\r\n\r\nfor dir_path in tqdm(os.listdir(images_path)):\r\n frames_path = os.path.join(images_path, dir_path)\r\n np_frames = []\r\n for frame_name in os.listdir(frames_path):\r\n frame_path = os.path.join(frames_path, frame_name)\r\n im = Image.open(frame_path)\r\n np_frames.append(np.asarray(im))\r\n im.close()\r\n```\r\n\r\nThe process takes 0.3% of memory, even after 1000 examples on the small machine with 120GB RAM.\r\n\r\nI guess something in the datasets library doesn't release the reference to the objects I'm yielding, but no idea how to test for this", "I've had similar issues with Arrow once. I'll investigate...\r\n\r\nFor now maybe we can simply use the images paths in the dataset you want to add. I don't expect to fix this memory issue until 1-2 weeks unfortunately. Then we can just update the dataset with the images. What do you think ?", "If it's just 1-2 weeks, I think it's best if we wait. I don't think it is very urgent to add it, and it will be much more useful with the images loaded rather than not (the images are low resolution, and thus papers using this dataset actually fit the entire video into memory anyway)\r\n\r\nI'll keep working on other datasets in the meanwhile :) ", "Ok found the issue. This is because the batch size used by the writer is set to 10 000 elements by default so it would load your full dataset in memory (the writer has a buffer that flushes only after each batch). Moreover to write in Apache Arrow we have to use python objects so what's stored inside the ArrowWriter's buffer is actually python integers (32 bits).\r\n\r\nLowering the batch size to 10 should do the job.\r\n\r\nI will add a flag to the DatasetBuilder class of dataset scripts, so that we can customize the batch size.", "Thanks, that's awesome you managed to find the problem.\r\n\r\nAbout the 32 bits - really? there isn't a way to serialize the numpy array somehow? 32 bits would take 4 times the memory / disk space needed to store these videos.\r\n\r\nPlease let me know when the batch size is customizable and I'll try again!", "The 32 bit integrers are only used in the writer's buffer because Arrow doesn't take numpy arrays correctly as input. On disk it's stored as uint8 in arrow format ;)", "> I don't expect to fix this memory issue until 1-2 weeks unfortunately.\r\n\r\nHi @lhoestq \r\nnot to rush of course, but I was wondering if you have a new timeline so I know how to plan my work around this :) ", "Hi ! 
Next week for sure :) ", "Alright it should be good now.\r\nYou just have to specify `_writer_batch_size = 10` for example as a class attribute of the dataset builder class.", "I added it, but still it consumes as much memory\r\n\r\nhttps://github.com/huggingface/datasets/pull/722/files#diff-2e0d865dd4a60dedd1861d6f8c5ed281ded71508467908e1e0b1dbe7d2d420b1R66\r\n\r\nDid I not do it correctly?", "Yes you did it right.\r\nDid you rebase to include the changes of #828 ?\r\n\r\nEDIT: looks like you merged from master in the PR. Not sure why you still have an issue then, I will investigate", "Hi @lhoestq, any update on this?\r\nPerhaps even a direction I could try myself?", "Sorry for the delay, I was busy with the dataset sprint and the incredible amount of contributions to the library ^^'\r\n\r\nWhat you can try to do to find what's wrong is check at which frequency the arrow writer writes all the examples from its in-memory buffer on disk. This happens [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L257-L258) in the code.\r\n\r\nThe idea is that `write_on_file` writes the examples every `writer_batch_size` examples and clear the buffer `self. current_rows`. As soon as `writer_batch_size` is small enough you shouldn't have memory issues in theory.\r\n\r\nLet me know if you have questions or if I can help.\r\n\r\nSince the dataset sprint is over and I will also be done with all the PRs soon I will be able to go back at it and take a look.", "Thanks. I gave it a try and no success. I'm not sure what's happening there", "I had the same issue. It works for me by setting `DEFAULT_WRITER_BATCH_SIZE = 10` of my dataset builder class. (And not `_writer_batch_size` as previously mentioned). I guess this is because `_writer_batch_size` is overwritten in `__init__` (see [here](https://github.com/huggingface/datasets/blob/0e2563e5d5c2fc193ea27d7c24607bb35607f2d5/src/datasets/builder.py#L934))", "Yes the class attribute you can change is `DEFAULT_WRITER_BATCH_SIZE`.\r\nOtherwise in `load_dataset` you can specify `writer_batch_size=`", "Ok thanks for the tips. Maybe the documentation should be updated accordingly https://huggingface.co/docs/datasets/add_dataset.html.", "Thanks for reporting this mistake in the docs.\r\nI just fixed it at https://github.com/huggingface/datasets/commit/85cf7ff920c90ca2e12bedca12b36d2a043c3da2", "May I close this issue, @AmitMY?" ]
1,603,001,226,000
1,644,944,590,000
1,644,944,590,000
CONTRIBUTOR
null
null
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue. Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400): ```python def _generate_examples(self, base_path, split): """ Yields examples. """ filepath = os.path.join(base_path, "annotations", "manual", "PHOENIX-2014-T." + split + ".corpus.csv") images_path = os.path.join(base_path, "features", "fullFrame-210x260px", split) with open(filepath, "r", encoding="utf-8") as f: data = csv.DictReader(f, delimiter="|", quoting=csv.QUOTE_NONE) for row in data: frames_path = os.path.join(images_path, row["video"])[:-7] np_frames = [] for frame_name in os.listdir(frames_path): frame_path = os.path.join(frames_path, frame_name) im = Image.open(frame_path) np_frames.append(np.asarray(im)) im.close() yield row["name"], {"video": np_frames} ``` The dataset creation process goes out of memory on a machine with 500GB RAM. I was under the impression that the "generator" here is exactly for that, to avoid memory constraints. However, even if you want the entire dataset in memory, it would be in the worst case `260x210x3 x 400 max length x 7000 samples` in bytes (uint8) = 458.64 gigabytes So I'm not sure why it's taking more than 500GB. And the dataset creation fails after 170 examples on a machine with 120gb RAM, and after 672 examples on a machine with 500GB RAM. --- ## Info that might help: Iterating over examples is extremely slow. ![image](https://user-images.githubusercontent.com/5757359/96359590-3c666780-111d-11eb-9347-1f833ad982a9.png) If I perform this iteration in my own, custom loop (Without saving to file), it runs at 8-9 examples/sec And you can see at this state it is using 94% of the memory: ![image](https://user-images.githubusercontent.com/5757359/96359606-7afc2200-111d-11eb-8c11-0afbdba1a6a3.png) And it is only using one CPU core, which is probably why it's so slow: ![image](https://user-images.githubusercontent.com/5757359/96359630-a3841c00-111d-11eb-9ba0-7fd3cdf51d26.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/741/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/741/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/737/comments
https://api.github.com/repos/huggingface/datasets/issues/737/events
https://github.com/huggingface/datasets/issues/737
722,463,923
MDU6SXNzdWU3MjI0NjM5MjM=
737
Trec Dataset Connection Error
{ "login": "aychang95", "id": 10554495, "node_id": "MDQ6VXNlcjEwNTU0NDk1", "avatar_url": "https://avatars.githubusercontent.com/u/10554495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aychang95", "html_url": "https://github.com/aychang95", "followers_url": "https://api.github.com/users/aychang95/followers", "following_url": "https://api.github.com/users/aychang95/following{/other_user}", "gists_url": "https://api.github.com/users/aychang95/gists{/gist_id}", "starred_url": "https://api.github.com/users/aychang95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aychang95/subscriptions", "organizations_url": "https://api.github.com/users/aychang95/orgs", "repos_url": "https://api.github.com/users/aychang95/repos", "events_url": "https://api.github.com/users/aychang95/events{/privacy}", "received_events_url": "https://api.github.com/users/aychang95/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for reporting.\r\nThat's because the download url has changed. The old url now redirects to the new one but we don't support redirection for downloads.\r\n\r\nI'm opening a PR to update the url" ]
1,602,777,473,000
1,603,097,676,000
1,603,097,676,000
NONE
null
null
**Datasets Version:** 1.1.2 **Python Version:** 3.6/3.7 **Code:** ```python from datasets import load_dataset load_dataset("trec") ``` **Expected behavior:** Download Trec dataset and load Dataset object **Current Behavior:** Get a connection error saying it couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label (but the link doesn't seem broken) <details> <summary>Error Logs</summary> Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /root/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-8-66bf1242096e> in <module>() ----> 1 load_dataset("trec") 10 frames /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label </details>
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/737/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/735/comments
https://api.github.com/repos/huggingface/datasets/issues/735/events
https://github.com/huggingface/datasets/issues/735
722,225,270
MDU6SXNzdWU3MjIyMjUyNzA=
735
Throw error when an unexpected key is used in data_files
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for reporting !\r\nWe'll add support for other keys" ]
1,602,759,327,000
1,604,064,232,000
1,604,064,232,000
CONTRIBUTOR
null
null
I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other ones, those attached files are silently ignored - leading to unexpected behaviour for the users. So the following, unintuitively, returns only one key (namely `train`). ```python datasets = load_dataset("text", data_files={"train": train_f, "valid": valid_f}) print(datasets.keys()) # dict_keys(['train']) ``` whereas using `validation` instead, does return the expected result: ```python datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f}) print(datasets.keys()) # dict_keys(['train', 'validation']) ``` I would like to see more freedom in which keys one can use, but if that is not possible at least an error should be thrown when using an unexpected key.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/735/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/730/comments
https://api.github.com/repos/huggingface/datasets/issues/730/events
https://github.com/huggingface/datasets/issues/730
721,073,812
MDU6SXNzdWU3MjEwNzM4MTI=
730
Possible caching bug
{ "login": "ArneBinder", "id": 3375489, "node_id": "MDQ6VXNlcjMzNzU0ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArneBinder", "html_url": "https://github.com/ArneBinder", "followers_url": "https://api.github.com/users/ArneBinder/followers", "following_url": "https://api.github.com/users/ArneBinder/following{/other_user}", "gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions", "organizations_url": "https://api.github.com/users/ArneBinder/orgs", "repos_url": "https://api.github.com/users/ArneBinder/repos", "events_url": "https://api.github.com/users/ArneBinder/events{/privacy}", "received_events_url": "https://api.github.com/users/ArneBinder/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
[ "Thanks for reporting. That's a bug indeed.\r\nApparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`)", "Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command \r\n`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`\r\n\r\nchange the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html\r\n`dataset = datasets.load_dataset('json', data_files=args.dataset)`\r\n\r\nErrors:\r\n`Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-c1e124ad488911b8/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264...\r\n`", "```ds = load_dataset(\"csv\", data_files={'train': 'train.csv', 'test': 'test.csv'})```\r\n\r\nGives the output\r\n```Using custom data configuration default-5c8ae7c208631aca```\r\n\r\nand the code hangs there.", "> `ds = load_dataset(\"csv\", data_files={'train': 'train.csv', 'test': 'test.csv'})`\r\n> \r\n> Gives the output `Using custom data configuration default-5c8ae7c208631aca`\r\n> \r\n> and the code hangs there.\r\n\r\nHave you solved it? I met this problem too!", "Can you Ctrl+C to kill the process and share the stacktrace here ? It should show at which location in the code it was hanging", "I had the same issue and solved it by downgrading the datasets version from 2.7.0 -> 2.6.1\r\npip install -q datasets==2.6.1", "> I had the same issue and solved it by downgrading the datasets version from 2.7.0 -> 2.6.1 pip install -q datasets==2.6.1\r\n\r\nThanks, it works for me" ]
1,602,640,954,000
1,669,081,554,000
1,603,964,161,000
NONE
null
null
The following code with `test1.txt` containing just "🤗🤗🤗": ``` dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1") print(dataset[0]) dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8") print(dataset[0]) ``` produces this output: ``` Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155... Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data. {'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'} Using custom data configuration default Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155) {'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'} ``` Just changing the order (and deleting the temp files): ``` dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8") print(dataset[0]) dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1") print(dataset[0]) ``` produces this: ``` Using custom data configuration default Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155... Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data. {'text': '🤗🤗🤗'} Using custom data configuration default Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155) {'text': '🤗🤗🤗'} ``` Is it intended that the cache path does not depend on the config entries? tested with datasets==1.1.2 and python==3.8.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/730/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/729
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/729/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/729/comments
https://api.github.com/repos/huggingface/datasets/issues/729/events
https://github.com/huggingface/datasets/issues/729
719,558,876
MDU6SXNzdWU3MTk1NTg4NzY=
729
Better error message when one forgets to call `add_batch` before `compute`
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602,525,562,000
1,603,984,704,000
1,603,984,704,000
MEMBER
null
null
When using metrics, if for some reason a user forgets to call `add_batch` to a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer. ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): def _info(self): return datasets.MetricInfo( description="description", citation="citation", inputs_description="kwargs", features=datasets.Features({ 'predictions': datasets.Value('int64'), 'references': datasets.Value('int64'), }), codebase_urls=[], reference_urls=[], format='numpy' ) def _compute(self, predictions, references): return {"predictions": predictions, "labels": references} metric = GatherMetric(cache_dir="test-metric") inputs = torch.randint(0, 2, (1024,)) targets = torch.randint(0, 2, (1024,)) batch_size = 8 for i in range(0, 1024, batch_size): pass # User forgets to call `add_batch` result = metric.compute() ``` ## Stack trace: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-13-267729d187fa> in <module> 3 pass 4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) ----> 5 result = metric.compute() ~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs) 380 if predictions is not None: 381 self.add_batch(predictions=predictions, references=references) --> 382 self._finalize() 383 384 self.cache_file_name = None ~/git/datasets/src/datasets/metric.py in _finalize(self) 343 elif self.process_id == 0: 344 # Let's acquire a lock on each node files to be sure they are finished writing --> 345 file_paths, filelocks = self._get_all_cache_files() 346 347 # Read the predictions and references ~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self) 280 filelocks = [] 281 for process_id, file_path in enumerate(file_paths): --> 282 filelock = FileLock(file_path + ".lock") 283 try: 284 filelock.acquire(timeout=self.timeout) TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' ```
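For reference, a small sketch (reusing `metric`, `inputs`, `targets`, and `batch_size` from the reproducer above) of the two call patterns that do work; the requested improvement only concerns the error message when neither is used.

```python
# Option 1: accumulate batches, then compute with no arguments.
for i in range(0, 1024, batch_size):
    metric.add_batch(predictions=inputs[i:i + batch_size], references=targets[i:i + batch_size])
result = metric.compute()

# Option 2: skip accumulation and pass everything to compute() directly,
# which calls add_batch internally (see the stack trace above).
result = metric.compute(predictions=inputs, references=targets)
```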
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/729/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/728/comments
https://api.github.com/repos/huggingface/datasets/issues/728/events
https://github.com/huggingface/datasets/issues/728
719,555,780
MDU6SXNzdWU3MTk1NTU3ODA=
728
Passing `cache_dir` to a metric does not work
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602,525,314,000
1,603,964,082,000
1,603,964,082,000
MEMBER
null
null
When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError: ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): def _info(self): return datasets.MetricInfo( description="description", citation="citation", inputs_description="kwargs", features=datasets.Features({ 'predictions': datasets.Value('int64'), 'references': datasets.Value('int64'), }), codebase_urls=[], reference_urls=[], format='numpy' ) def _compute(self, predictions, references): return {"predictions": predictions, "labels": references} metric = GatherMetric(cache_dir="test-metric") inputs = torch.randint(0, 2, (1024,)) targets = torch.randint(0, 2, (1024,)) batch_size = 8 for i in range(0, 1024, batch_size): metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) result = metric.compute() ``` ## Stack trace: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) ~/git/datasets/src/datasets/metric.py in _finalize(self) 349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features)) --> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths])) 351 except FileNotFoundError: ~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions) 227 # Prepend path to filename --> 228 pa_table = self._read_files(files) 229 files = copy.deepcopy(files) ~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files) 166 for f_dict in files: --> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict) 168 pa_tables.append(pa_table) ~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take) 291 ) --> 292 mmap = pa.memory_map(filename) 293 f = pa.ipc.open_stream(mmap) ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-17-e42d43cc981f> in <module> 2 for i in range(0, 1024, batch_size): 3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) ----> 4 result = metric.compute() ~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs) 380 if predictions is not None: 381 self.add_batch(predictions=predictions, references=references) --> 382 self._finalize() 383 384 self.cache_file_name = None ~/git/datasets/src/datasets/metric.py in _finalize(self) 351 except FileNotFoundError: 352 raise ValueError( --> 353 "Error in finalize: another metric instance is already using the local cache file. " 354 "Please specify an experiment_id to avoid colision between distributed metric instances." 355 ) ValueError: Error in finalize: another metric instance is already using the local cache file. 
Please specify an experiment_id to avoid colision between distributed metric instances. ``` The code works when we remove the `cache_dir=...` from the metric.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/728/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/727/comments
https://api.github.com/repos/huggingface/datasets/issues/727/events
https://github.com/huggingface/datasets/issues/727
719,386,366
MDU6SXNzdWU3MTkzODYzNjY=
727
Parallel downloads progress bar flickers
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,602,509,765,000
1,602,509,765,000
null
MEMBER
null
null
When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line. To fix that we could simply specify `position=i`, for i = 0 to n-1 where n is the number of files to download, when instantiating the tqdm progress bars. Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that shows its current download.
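As an illustration only (not the actual `datasets` download manager code), here is a minimal sketch of the first option: per-worker `tqdm` bars pinned to their own line via `position`, which keeps them from overwriting each other. The file names and the sleep-based "download" are made up for the example.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from tqdm import tqdm

def fake_download(position, name, total_chunks=50):
    # One bar per worker; position pins it to its own terminal line.
    with tqdm(total=total_chunks, desc=name, position=position, leave=True) as bar:
        for _ in range(total_chunks):
            time.sleep(0.01)  # simulate downloading one chunk
            bar.update(1)

files = ["file_a.zip", "file_b.zip", "file_c.zip"]  # hypothetical file names
with ThreadPoolExecutor(max_workers=len(files)) as pool:
    for i, name in enumerate(files):
        pool.submit(fake_download, i, name)
```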
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/727/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/726/comments
https://api.github.com/repos/huggingface/datasets/issues/726/events
https://github.com/huggingface/datasets/issues/726
719,313,754
MDU6SXNzdWU3MTkzMTM3NTQ=
726
"Checksums didn't match for dataset source files" error while loading openwebtext dataset
{ "login": "SparkJiao", "id": 16469472, "node_id": "MDQ6VXNlcjE2NDY5NDcy", "avatar_url": "https://avatars.githubusercontent.com/u/16469472?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SparkJiao", "html_url": "https://github.com/SparkJiao", "followers_url": "https://api.github.com/users/SparkJiao/followers", "following_url": "https://api.github.com/users/SparkJiao/following{/other_user}", "gists_url": "https://api.github.com/users/SparkJiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/SparkJiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SparkJiao/subscriptions", "organizations_url": "https://api.github.com/users/SparkJiao/orgs", "repos_url": "https://api.github.com/users/SparkJiao/repos", "events_url": "https://api.github.com/users/SparkJiao/events{/privacy}", "received_events_url": "https://api.github.com/users/SparkJiao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi try, to provide more information please.\r\n\r\nExample code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).", "> Hi try, to provide more information please.\r\n> \r\n> Example code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).\r\n\r\nI have update the description, sorry for the incomplete issue by mistake.", "Hi, I have manually downloaded the compressed dataset `openwebtext.tar.xz' and use the following command to preprocess the examples:\r\n```\r\n>>> dataset = load_dataset('/home/admin/workspace/datasets/datasets-master/datasets-master/datasets/openwebtext', data_dir='/home/admin/workspace/datasets')\r\nUsing custom data configuration default\r\nDownloading and preparing dataset openwebtext/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/admin/.cache/huggingface/datasets/openwebtext/default/0.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...\r\nDataset openwebtext downloaded and prepared to /home/admin/.cache/huggingface/datasets/openwebtext/default/0.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02. Subsequent calls will reuse this data.\r\n>>> len(dataset['train'])\r\n74571\r\n>>>\r\n```\r\nThe size of the pre-processed example file is only 354MB, however the processed bookcorpus dataset is 4.6g. Are there any problems?", "NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n\r\ni got this issue when i try to work on my own datasets kindly tell me, from where i can get checksums of train and dev file in my github repo", "Hi, I got the similar issue for xnli dataset while working on colab with python3.7. \r\n\r\n`nlp.load_dataset(path = 'xnli')`\r\n\r\nThe above command resulted in following issue : \r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']\r\n```\r\n\r\nAny idea how to fix this ?", "Did anyone figure out how to fix this error?", "Fixed by:\r\n- #2857", "Says fixed but I'm still getting it. \r\n\r\ncommand:\r\n\r\n dataset = load_dataset(\"ted_talks_iwslt\", language_pair=(\"en\", \"es\"), year=\"2014\",download_mode=\"force_redownload\")\r\n\r\ngot:\r\n\r\nUsing custom data configuration en_es_2014-35a2d3350a0f9823\r\nDownloading and preparing dataset ted_talks_iwslt/en_es_2014 (download: 2.15 KiB, generated: Unknown size, post-processed: Unknown size, total: 2.15 KiB) to /home/ken/.cache/huggingface/datasets/ted_talks_iwslt/en_es_2014-35a2d3350a0f9823/1.1.0/43935b3fe470c753a023642e1f54b068c590847f9928bd3f2ec99f15702ad6a6...\r\nDownloading:\r\n2.21k/? [00:00<00:00, 141kB/s]\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/u/0/uc?id=1Cz1Un9p8Xn9IpEMMrg2kXSDt0dnjxc4z&export=download']" ]
1,602,503,110,000
1,645,120,434,000
1,644,921,537,000
NONE
null
null
Hi, I have encountered this problem while loading the openwebtext dataset: ``` >>> dataset = load_dataset('openwebtext') Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/openwebtext/plain_text/1.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 536, in _download_and_prepare self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://zenodo.org/record/3834942/files/openwebtext.tar.xz'] ``` I think this problem is caused by a change in the released dataset. Or should I download the dataset manually? Sorry for releasing the unfinished issue by mistake.
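If the hosted archive has simply changed since the checksums were recorded, one common way around it (a workaround sketch, not a fix for the underlying metadata) is to force a fresh download and skip the checksum verification, as also suggested elsewhere in this thread. Only do this if you trust the source, since it disables the integrity check.

```python
from datasets import load_dataset

# Workaround sketch: re-download the archive and skip checksum verification.
dataset = load_dataset(
    "openwebtext",
    download_mode="force_redownload",
    ignore_verifications=True,
)
```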
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/726/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/datasets/issues/726/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/724/comments
https://api.github.com/repos/huggingface/datasets/issues/724/events
https://github.com/huggingface/datasets/issues/724
718,947,700
MDU6SXNzdWU3MTg5NDc3MDA=
724
need to redirect /nlp to /datasets and remove outdated info
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Should be fixed now: \r\n\r\n![image](https://user-images.githubusercontent.com/35882/95917301-040b0600-0d78-11eb-9655-c4ac0e788089.png)\r\n\r\nNot sure I understand what you mean by the second part?\r\n", "Thank you!\r\n\r\n> Not sure I understand what you mean by the second part?\r\n\r\nCompare the 2:\r\n* https://huggingface.co/datasets/wikihow\r\n* https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all\r\nCan you see the difference? 2nd has formatting, 1st doesn't.\r\n", "For context, those are two different pages (not an old vs new one), one is from the dataset viewer (you can browse data inside the datasets) while the other is just a basic reference page displayed some metadata about the dataset.\r\n\r\nFor the second one, we'll move to markdown parsing soon, so it'll be formatted better.", "I understand. I was just flagging the lack of markup issue." ]
1,602,457,932,000
1,602,694,812,000
1,602,694,812,000
MEMBER
null
null
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all should probably redirect to: https://huggingface.co/datasets/wikihow Also, for some reason the new information is slightly borked: the old page was nicely formatted and had the links marked up, while the new one is just a jumble of text in one chunk with no markup for the links (i.e. they are not clickable).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/724/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/723
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/723/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/723/comments
https://api.github.com/repos/huggingface/datasets/issues/723/events
https://github.com/huggingface/datasets/issues/723
718,926,723
MDU6SXNzdWU3MTg5MjY3MjM=
723
Adding pseudo-labels to datasets
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Nice ! :)\r\nIt's indeed the first time we have such contributions so we'll have to figure out the appropriate way to integrate them.\r\nCould you add details on what they could be used for ?\r\n", "They can be used as training data for a smaller model.", "Sounds just like a regular dataset to me then, no?", "A new configuration for those datasets should do the job then.\r\nNote that until now datasets like xsum only had one configuration. It means that users didn't have to specify the configuration name when loading the dataset. If we add new configs, users that update the lib will have to update their code to specify the default/standard configuration name (not the one with pseudo labels).", "Could also be a `user-namespace` dataset maybe?", "Oh yes why not. I'm more in favor of this actually since pseudo labels are things that users (not dataset authors in general) can compute by themselves and share with the community", "![image](https://user-images.githubusercontent.com/6045025/96045248-b528a380-0e3f-11eb-9124-bd55afa031bb.png)\r\n\r\nI assume I should (for example) rename the xsum dir, change the URL, and put the modified dir somewhere in S3?", "You can use the `datasets-cli` to upload the folder with your version of xsum with the pseudo labels.\r\n\r\n```\r\ndatasets-cli upload_dataset path/to/xsum\r\n```" ]
1,602,450,345,000
1,627,967,511,000
1,627,967,511,000
CONTRIBUTOR
null
null
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo. Since pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution? I read https://huggingface.co/docs/datasets/add_dataset.html, but it doesn't really cover this type of contribution. I could, for example, make a new directory such as `xsum_bart_pseudolabels` for each set of pseudo-labels, or add some sort of parametrization to `xsum.py`: https://github.com/huggingface/datasets/blob/5f4c6e830f603830117877b8990a0e65a2386aa6/datasets/xsum/xsum.py What do you think @lhoestq ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/723/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/723/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/721/comments
https://api.github.com/repos/huggingface/datasets/issues/721/events
https://github.com/huggingface/datasets/issues/721
718,647,147
MDU6SXNzdWU3MTg2NDcxNDc=
721
feat(dl_manager): add support for ftp downloads
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We only support http by default for downloading.\r\nIf you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the dataset.\r\n\r\nTo make the download_manager work with a custom downloader, you can call `download_manager.download_custom` instead of `download_manager.download_and_extract`. The expected arguments are the following:\r\n```\r\nurl_or_urls: url or `list`/`dict` of urls to download and extract. Each\r\n url is a `str`.\r\ncustom_download: Callable with signature (src_url: str, dst_path: str) -> Any\r\n as for example `tf.io.gfile.copy`, that lets you download from google storage\r\n```\r\n", "Also maybe it coud be interesting to have a direct support of ftp inside the `datasets` library. Do you know any good libraries that we might consider adding as a (optional ?) dependency ?", "Downloading an `ftp` file is as simple as:\r\n```python\r\nimport urllib \r\nurllib.urlretrieve('ftp://server/path/to/file', 'file')\r\n```\r\n\r\nI believe this should be supported by the library, as its not using any dependency and is trivial amount of code.", "I know its unorthodox, but I added `ftp` download support to `file_utils` in the same PR https://github.com/huggingface/datasets/pull/722\r\nSo its possible to understand the interaction of the download component with the ftp download ability", "Awesome ! I'll take a look :)", "@AmitMY Can you now download the Phoenix2014 Dataset?", "@hoanganhpham1006 yes.\r\nSee pull request https://github.com/huggingface/datasets/pull/722 , it has a loader for this dataset, mostly ready.\r\nThere's one issue that delays it being merged - https://github.com/huggingface/datasets/issues/741 - regarding memory consumption.", "The problem which I have now is that this dataset seems does not allow to download? Can you share it with me pls", "The dataset loader is not yet ready, because of that issue.\r\nIf you want to just download the dataset the old-fashioned way, just go to: https://www-i6.informatik.rwth-aachen.de/ftp/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz (the ftp link is now broken, and its available over https)", "Got it, thank you so much!", "FTP downloads are supported." ]
1,602,345,020,000
1,644,921,884,000
1,644,921,883,000
CONTRIBUTOR
null
null
I am working on a new dataset (#302) and encountered a problem while downloading it. ```python # This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/ _URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz" dl_manager.download_and_extract(_URL) ``` I get an error: > ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path I checked, and indeed you don't consider `ftp` URLs as remote files: https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188 Adding `ftp` to that list does not immediately solve the issue, so some extra work is probably needed.
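A minimal sketch of the workaround described in the comments above: an FTP fetch via `urllib` plugged into the download manager's `download_custom` hook (the `(src_url, dst_path)` signature quoted above). The URL is the one from the dataset script; how the result is wired into `_split_generators` is only indicated in the commented lines.

```python
import urllib.request

_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"

def _ftp_download(src_url: str, dst_path: str) -> str:
    # urllib handles ftp:// URLs out of the box
    urllib.request.urlretrieve(src_url, dst_path)
    return dst_path

# Inside _split_generators, instead of download_and_extract:
# archive_path = dl_manager.download_custom(_URL, _ftp_download)
# data_dir = dl_manager.extract(archive_path)
```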
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/721/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/720/comments
https://api.github.com/repos/huggingface/datasets/issues/720/events
https://github.com/huggingface/datasets/issues/720
716,581,266
MDU6SXNzdWU3MTY1ODEyNjY=
720
OSError: Cannot find data file when not using the dummy dataset in RAG
{ "login": "josemlopez", "id": 4112135, "node_id": "MDQ6VXNlcjQxMTIxMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/4112135?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josemlopez", "html_url": "https://github.com/josemlopez", "followers_url": "https://api.github.com/users/josemlopez/followers", "following_url": "https://api.github.com/users/josemlopez/following{/other_user}", "gists_url": "https://api.github.com/users/josemlopez/gists{/gist_id}", "starred_url": "https://api.github.com/users/josemlopez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josemlopez/subscriptions", "organizations_url": "https://api.github.com/users/josemlopez/orgs", "repos_url": "https://api.github.com/users/josemlopez/repos", "events_url": "https://api.github.com/users/josemlopez/events{/privacy}", "received_events_url": "https://api.github.com/users/josemlopez/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet. \r\n\r\n```\r\n99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnpicklingError Traceback (most recent call last)\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 446 try:\r\n--> 447 return pickle.load(fid, **pickle_kwargs)\r\n 448 except Exception:\r\n\r\nUnpicklingError: pickle data was truncated\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 559 \r\n--> 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n\r\n~/src/datasets/src/datasets/builder.py in _prepare_split(self, split_generator)\r\n 847 writer.write(example)\r\n--> 848 finally:\r\n 849 num_examples, num_bytes = writer.finalize()\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)\r\n 227 try:\r\n--> 228 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 229 # return super(tqdm...) will not catch exception\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)\r\n 1132 try:\r\n-> 1133 for obj in iterable:\r\n 1134 yield obj\r\n\r\n/hdd/rag/cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)\r\n 131 break\r\n--> 132 vecs = np.load(open(vectors_files.pop(0), \"rb\"), allow_pickle=True)\r\n 133 vec_idx = 0\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 449 raise IOError(\r\n--> 450 \"Failed to interpret file %s as a pickle\" % repr(file))\r\n 451 \r\n\r\nOSError: Failed to interpret file <_io.BufferedReader name='/hdd/rag/downloads/99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498'> as a pickle\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-8-24351ff8ce44> in <module>\r\n 4 retriever = RagRetriever.from_pretrained(\"facebook/rag-sequence-nq\", \r\n 5 index_name=\"exact\",\r\n----> 6 use_dummy_dataset=False)\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)\r\n 321 generator_tokenizer = rag_tokenizer.generator\r\n 322 return cls(\r\n--> 323 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n 324 )\r\n 325 \r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)\r\n 310 self.config = config\r\n 311 if self._init_retrieval:\r\n--> 312 self.init_retrieval()\r\n 313 \r\n 314 @classmethod\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_retrieval(self)\r\n 338 \r\n 339 logger.info(\"initializing retrieval\")\r\n--> 340 self.index.init_index()\r\n 341 \r\n 342 def postprocess_docs(self, docs, input_strings, prefix, n_docs, 
return_tensors=None):\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_index(self)\r\n 248 split=self.dataset_split,\r\n 249 index_name=self.index_name,\r\n--> 250 dummy=self.use_dummy_dataset,\r\n 251 )\r\n 252 self.dataset.set_format(\"numpy\", columns=[\"embeddings\"], output_all_columns=True)\r\n\r\n~/src/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 615 builder_instance.download_and_prepare(\r\n 616 download_config=download_config,\r\n--> 617 download_mode=download_mode,\r\n 618 ignore_verifications=ignore_verifications,\r\n 619 )\r\n\r\n~/src/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 481 # Sync info\r\n 482 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n--> 483 self.info.download_checksums = dl_manager.get_recorded_sizes_checksums()\r\n 484 self.info.size_in_bytes = self.info.dataset_size + self.info.download_size\r\n 485 # Save info\r\n\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n--> 562 \r\n 563 # Update the info object with the splits.\r\n 564 self.info.splits = split_dict\r\n\r\nOSError: Cannot find data file.\r\n```\r\n\r\nThank you.", "An update on my end. This seems like a transient issue. Reran the script from scratch overnight with no errors. ", "Closing this one. Feel free to re-open if you have other questions about this issue" ]
1,602,080,833,000
1,608,732,271,000
1,608,732,271,000
NONE
null
null
## Environment info transformers version: 3.3.1 Platform: Linux-4.19 Python version: 3.7.7 PyTorch version (GPU?): 1.6.0 Tensorflow version (GPU?): No Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ## To reproduce Steps to reproduce the behaviour: ``` import os os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache' from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) ``` Plese note that I'm using the whole dataset: **use_dummy_dataset=False** After around 4 hours (downloading and some other things) this is returned: ``` Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2... --------------------------------------------------------------------------- UnpicklingError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 459 try: --> 460 return pickle.load(fid, **pickle_kwargs) 461 except Exception: UnpicklingError: pickle data was truncated During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 552 # Prepare split will record examples associated to the split --> 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 840 for key, record in utils.tqdm( --> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose 842 ): /opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs) 217 try: --> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): 219 # return super(tqdm...) 
will not catch exception /opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files) 131 break --> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True) 133 vec_idx = 0 /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 462 raise IOError( --> 463 "Failed to interpret file %s as a pickle" % repr(file)) 464 finally: OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-10-f28df370ac47> in <module> 1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets ----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs) 307 generator_tokenizer = rag_tokenizer.generator 308 return cls( --> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer 310 ) 311 /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer) 298 self.config = config 299 if self._init_retrieval: --> 300 self.init_retrieval() 301 302 @classmethod /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self) 324 325 logger.info("initializing retrieval") --> 326 self.index.init_index() 327 328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None): /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self) 238 split=self.dataset_split, 239 index_name=self.index_name, --> 240 dummy=self.use_dummy_dataset, 241 ) 242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 474 if not downloaded_from_gcs: 475 self._download_and_prepare( --> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 477 ) 478 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: --> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) 556 557 if verify_infos: OSError: Cannot find data file. ``` Thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/720/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/720/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/712/comments
https://api.github.com/repos/huggingface/datasets/issues/712/events
https://github.com/huggingface/datasets/issues/712
714,242,316
MDU6SXNzdWU3MTQyNDIzMTY=
712
Error in the notebooks/Overview.ipynb notebook
{ "login": "subhrm", "id": 850012, "node_id": "MDQ6VXNlcjg1MDAxMg==", "avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4", "gravatar_id": "", "url": "https://api.github.com/users/subhrm", "html_url": "https://github.com/subhrm", "followers_url": "https://api.github.com/users/subhrm/followers", "following_url": "https://api.github.com/users/subhrm/following{/other_user}", "gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}", "starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/subhrm/subscriptions", "organizations_url": "https://api.github.com/users/subhrm/orgs", "repos_url": "https://api.github.com/users/subhrm/repos", "events_url": "https://api.github.com/users/subhrm/events{/privacy}", "received_events_url": "https://api.github.com/users/subhrm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Do this:\r\n``` python\r\nsquad_dataset = list_datasets(with_details=True)[datasets.index('squad')]\r\npprint(squad_dataset.__dict__) # It's a simple python dataclass\r\n```", "Thanks! This worked. I have created a PR to fix this in the notebook. " ]
1,601,791,111,000
1,601,915,140,000
1,601,915,140,000
CONTRIBUTOR
null
null
Hi, I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in Google Colab. I used the [link](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in Colab. ```python # You can access various attributes of the datasets before downloading them squad_dataset = list_datasets()[datasets.index('squad')] pprint(squad_dataset.__dict__) # It's a simple python dataclass ``` Error message: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-8dc805c4949c> in <module>() 2 squad_dataset = list_datasets()[datasets.index('squad')] 3 ----> 4 pprint(squad_dataset.__dict__) # It's a simple python dataclass AttributeError: 'str' object has no attribute '__dict__' ``` The object `squad_dataset` is a `str`, not a `dataclass`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/712/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/709/comments
https://api.github.com/repos/huggingface/datasets/issues/709/events
https://github.com/huggingface/datasets/issues/709
714,067,902
MDU6SXNzdWU3MTQwNjc5MDI=
709
How to use similarity settings other then "BM25" in Elasticsearch index ?
{ "login": "nsankar", "id": 431890, "node_id": "MDQ6VXNlcjQzMTg5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nsankar", "html_url": "https://github.com/nsankar", "followers_url": "https://api.github.com/users/nsankar/followers", "following_url": "https://api.github.com/users/nsankar/following{/other_user}", "gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}", "starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nsankar/subscriptions", "organizations_url": "https://api.github.com/users/nsankar/orgs", "repos_url": "https://api.github.com/users/nsankar/repos", "events_url": "https://api.github.com/users/nsankar/events{/privacy}", "received_events_url": "https://api.github.com/users/nsankar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Datasets does not use elasticsearch API to define custom similarity. If you want to use a custom similarity, the best would be to run a curl request directly to your elasticsearch instance (see sample hereafter, directly from ES documentation), then you should be able to use `my_similarity` in your configuration passed to datasets\r\n\r\n```\r\ncurl -X PUT \"localhost:9200/index?pretty\" -H 'Content-Type: application/json' -d'\r\n{\r\n \"settings\": {\r\n \"index\": {\r\n \"similarity\": {\r\n \"my_similarity\": {\r\n \"type\": \"DFR\",\r\n \"basic_model\": \"g\",\r\n \"after_effect\": \"l\",\r\n \"normalization\": \"h2\",\r\n \"normalization.h2.c\": \"3.0\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n'\r\n\r\n```" ]
1,601,723,929,000
1,664,903,977,000
1,664,903,977,000
NONE
null
null
**QUESTION: How should we use similarity algorithms supported by Elasticsearch other than "BM25"?** **ES reference:** https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html **HF doc reference:** https://huggingface.co/docs/datasets/faiss_and_ea.html **Context:** ======== I used the latest Elasticsearch server, version 7.9.2. When I set DFR, one of the other similarity algorithms supported by Elasticsearch, in the mapping, I get an error. For example, I first tried DFR in the mappings as below: `"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "DFR"}}},` and got the following error: RequestError: RequestError(400, 'mapper_parsing_exception', 'Unknown Similarity type [DFR] for field [text]') As another option, I tried declaring "similarity": "my_similarity" within the settings and then assigning "my_similarity" inside the mappings, as below: `es_config = { "settings": { "number_of_shards": 1, **"similarity": "my_similarity"**: { "type": "DFR", "basic_model": "g", "after_effect": "l", "normalization": "h2", "normalization.h2.c": "3.0" } , "analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}}, }, "mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}}, }` For this, I got the following error: RequestError: RequestError(400, 'illegal_argument_exception', 'unknown setting [index.similarity] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')
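Building on the answer above — the custom similarity has to be declared under `settings.index.similarity`, not directly under `settings` — here is a sketch of what the corrected config could look like in Python. Whether you pass it to the Elasticsearch index creation through the `datasets` helper or apply it yourself with the curl call quoted above, the nesting is the important part; everything else is copied from the attempt in the issue.

```python
# Sketch of the corrected config: "my_similarity" lives under
# settings.index.similarity and is then referenced from the mapping.
es_config = {
    "settings": {
        "number_of_shards": 1,
        "index": {
            "similarity": {
                "my_similarity": {
                    "type": "DFR",
                    "basic_model": "g",
                    "after_effect": "l",
                    "normalization": "h2",
                    "normalization.h2.c": "3.0",
                }
            }
        },
        "analysis": {"analyzer": {"stop_standard": {"type": "standard", "stopwords": "_english_"}}},
    },
    "mappings": {
        "properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}
    },
}
```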
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/709/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/708/comments
https://api.github.com/repos/huggingface/datasets/issues/708/events
https://github.com/huggingface/datasets/issues/708
714,020,953
MDU6SXNzdWU3MTQwMjA5NTM=
708
Datasets performance slow? - 6.4x slower than in memory dataset
{ "login": "eugeneware", "id": 38154, "node_id": "MDQ6VXNlcjM4MTU0", "avatar_url": "https://avatars.githubusercontent.com/u/38154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eugeneware", "html_url": "https://github.com/eugeneware", "followers_url": "https://api.github.com/users/eugeneware/followers", "following_url": "https://api.github.com/users/eugeneware/following{/other_user}", "gists_url": "https://api.github.com/users/eugeneware/gists{/gist_id}", "starred_url": "https://api.github.com/users/eugeneware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eugeneware/subscriptions", "organizations_url": "https://api.github.com/users/eugeneware/orgs", "repos_url": "https://api.github.com/users/eugeneware/repos", "events_url": "https://api.github.com/users/eugeneware/events{/privacy}", "received_events_url": "https://api.github.com/users/eugeneware/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Facing a similar issue here. My model using SQuAD dataset takes about 1h to process with in memory data and more than 2h with datasets directly.", "And if you use in-memory-data with datasets with `load_dataset(..., keep_in_memory=True)`?", "Thanks for the tip @thomwolf ! I did not see that flag in the docs. I'll try with that.", "We should add it indeed and also maybe a specific section with all the tips for maximal speed. What do you think @lhoestq @SBrandeis @yjernite ?", "By default the datasets loaded with `load_dataset` live on disk.\r\nIt's possible to load them in memory by using some transforms like `.map(..., keep_in_memory=True)`.\r\n\r\nSmall correction to @thomwolf 's comment above: currently we don't have the `keep_in_memory` parameter for `load_dataset` AFAIK but it would be nice to add it indeed :)", "Yes indeed we should add it!", "Great! Thanks a lot.\r\n\r\nI did a test using `map(..., keep_in_memory=True)` and also a test using in-memory only data.\r\n\r\n```python\r\nfeatures = dataset.map(tokenize, batched=True, remove_columns=dataset['train'].column_names)\r\nfeatures.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nfeatures_in_memory = dataset.map(tokenize, batched=True, keep_in_memory=True, remove_columns=dataset['train'].column_names)\r\nfeatures_in_memory.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nin_memory = [features['train'][i] for i in range(len(features['train']))]\r\n```\r\n\r\nFor using the features without any tweak, I got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nFor using the features mapped with `keep_in_memory=True`, I also got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features_in_memory['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nAnd for the case using every element in memory, converted from the original dataset, I got **12.5s**:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(in_memory, batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nTaking a closer look in my SQuAD code, using a profiler, I see a lot of calls to `posix read` api. It seems that it is really reliying on disk, which results in a very high train time.", "I am having the same issue here. 
When loading from memory I can get the GPU up to 70% util but when loading after mapping I can only get 40%.\r\n\r\nIn disk:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')\r\nbook_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=2500)\r\nbook_corpus.set_format(type='torch', columns=['text', \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./mobile_bert_big\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=32,\r\n per_device_eval_batch_size=16,\r\n save_steps=50,\r\n save_total_limit=2,\r\n logging_first_step=True,\r\n warmup_steps=100,\r\n logging_steps=50,\r\n eval_steps=100,\r\n no_cuda=False,\r\n gradient_accumulation_steps=16,\r\n fp16=True)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=book_corpus,\r\n tokenizer=tokenizer)\r\n```\r\n\r\nIn disk I can only get 0,17 it/s:\r\n`[ 13/28907 01:03 < 46:03:27, 0.17 it/s, Epoch 0.00/1] `\r\n\r\nIf I load it with torch.utils.data.Dataset()\r\n```\r\nclass BCorpusDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings):\r\n self.encodings = encodings\r\n\r\n def __getitem__(self, idx):\r\n item = [torch.tensor(val[idx]) for key, val in self.encodings.items()][0]\r\n return item\r\n\r\n def __len__(self):\r\n length = [len(val) for key, val in self.encodings.items()][0]\r\n return length\r\n\r\n**book_corpus = book_corpus.select([i for i in range(16*2000)])** # filtering to not have 20% of BC in memory...\r\nbook_corpus = book_corpus(book_corpus)\r\n```\r\nI can get:\r\n` [ 5/62 00:09 < 03:03, 0.31 it/s, Epoch 0.06/1]`\r\n\r\nBut obviously I can not get BookCorpus in memory xD\r\n\r\nEDIT: it is something weird. If i load in disk 1% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]')\r\n```\r\n\r\nI can get 0.28 it/s, (the same that in memory) but if I load 20% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')\r\n```\r\nI get again 0.17 it/s. \r\n\r\nI am missing something? I think it is something related to size, and not disk or in-memory.", "There is a way to increase the batches read from memory? or multiprocessed it? I think that one of two or it is reading with just 1 core o it is reading very small chunks from disk and left my GPU at 0 between batches", "My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks." ]
1,601,707,447,000
1,613,139,208,000
1,613,139,208,000
NONE
null
null
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower. For example, in the `yelp_polarity` dataset (560000 datapoints, or 17500 batches of 32), it was taking me 3:31 to just get process the data and get it on the GPU (no model involved). Whereas, the equivalent in-memory dataset would finish in just 0:33. Is this expected? Given that one of the goals of this project is also accelerate dataset processing, this seems a bit slower than I would expect. I understand the advantages of being able to work on datasets that exceed memory, and that's very exciting to me, but thought I'd open this issue to discuss. For reference I'm running a AMD Ryzen Threadripper 1900X 8-Core Processor CPU, with 128 GB of RAM and an NVME SSD Samsung 960 EVO. I'm running with an RTX Titan 24GB GPU. I can see with `iotop` that the dataset gets quickly loaded into the system read buffers, and thus doesn't incur any additional IO reads. Thus in theory, all the data *should* be in RAM, but in my benchmark code below it's still 6.4 times slower. What am I doing wrong? And is there a way to force the datasets to completely load into memory instead of being memory mapped in cases where you want maximum performance? At 3:31 for 17500 batches, that's 12ms per batch. Does this 12ms just become insignificant as a proportion of forward and backward passes in practice, and thus it's not worth worrying about this in practice? In any case, here's my code `benchmark.py`. If you run it with an argument of `memory` it will copy the data into memory before executing the same test. ``` py import sys from datasets import load_dataset from transformers import DataCollatorWithPadding, BertTokenizerFast from torch.utils.data import DataLoader from tqdm import tqdm if __name__ == '__main__': tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased') collate_fn = DataCollatorWithPadding(tokenizer, padding=True) ds = load_dataset('yelp_polarity') def do_tokenize(x): return tokenizer(x['text'], truncation=True) ds = ds.map(do_tokenize, batched=True) ds.set_format('torch', ['input_ids', 'token_type_ids', 'attention_mask']) if len(sys.argv) == 2 and sys.argv[1] == 'memory': # copy to memory - probably a faster way to do this - but demonstrates the point # approximately 530 batches per second - 17500 batches in 0:33 print('using memory') _ds = [data for data in tqdm(ds['train'])] else: # approximately 83 batches per second - 17500 batches in 3:31 print('using datasets') _ds = ds['train'] dl = DataLoader(_ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4) for data in tqdm(dl): for k, v in data.items(): data[k] = v.to('cuda') ``` For reference, my conda environment is [here](https://gist.github.com/05b6101518ff70ed42a858b302a0405d) Once again, I'm very excited about this library, and how easy it is to load datasets, and to do so without worrying about system memory constraints. Thanks for all your great work.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/708/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/708/timeline
completed
false
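The comments on issue 708 above point at two knobs: `Dataset.map(..., keep_in_memory=True)` and more `DataLoader` workers. A minimal, hedged sketch of both follows; the dataset name, tokenizer and batch size are placeholders taken loosely from the issue, and (as one commenter measured) `keep_in_memory` alone may not close the whole gap.

```python
# Minimal sketch, assuming datasets ~1.0.x: keep the tokenized features in memory
# instead of memory-mapping them back from a cache file, and parallelize loading.
from datasets import load_dataset
from transformers import BertTokenizerFast, DataCollatorWithPadding
from torch.utils.data import DataLoader

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
ds = load_dataset("yelp_polarity", split="train")

# keep_in_memory=True makes map() keep its output as an in-memory Arrow table
# rather than writing an on-disk cache that is memory-mapped at read time.
features = ds.map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True,
    keep_in_memory=True,
)
features.set_format("torch", columns=["input_ids", "token_type_ids", "attention_mask"])

# Extra workers overlap batch collation and reads with the host-to-device copies.
dl = DataLoader(
    features,
    batch_size=32,
    shuffle=True,
    num_workers=4,
    collate_fn=DataCollatorWithPadding(tokenizer, padding=True),
)
```

When training through `Trainer`, the equivalent knob is `dataloader_num_workers` in `TrainingArguments`, as noted in the last comment of the thread.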
https://api.github.com/repos/huggingface/datasets/issues/707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/707/comments
https://api.github.com/repos/huggingface/datasets/issues/707/events
https://github.com/huggingface/datasets/issues/707
713,954,666
MDU6SXNzdWU3MTM5NTQ2NjY=
707
Requirements should specify pyarrow<1
{ "login": "mathcass", "id": 918541, "node_id": "MDQ6VXNlcjkxODU0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/918541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mathcass", "html_url": "https://github.com/mathcass", "followers_url": "https://api.github.com/users/mathcass/followers", "following_url": "https://api.github.com/users/mathcass/following{/other_user}", "gists_url": "https://api.github.com/users/mathcass/gists{/gist_id}", "starred_url": "https://api.github.com/users/mathcass/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mathcass/subscriptions", "organizations_url": "https://api.github.com/users/mathcass/orgs", "repos_url": "https://api.github.com/users/mathcass/repos", "events_url": "https://api.github.com/users/mathcass/events{/privacy}", "received_events_url": "https://api.github.com/users/mathcass/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello @mathcass I would want to work on this issue. May I do the same? ", "@punitaojha, certainly. Feel free to work on this. Let me know if you need any help or clarity.", "Hello @mathcass \r\n1. I did fork the repository and clone the same on my local system. \r\n\r\n2. Then learnt about how we can publish our package on pypi.org. Also, found some instructions on same in setup.py documentation.\r\n\r\n3. Then I Perplexity document link that you shared above. I created a colab link from there keep both tensorflow and pytorch means a mixed option and tried to run it in colab but I encountered no errors at a point where you mentioned. Can you help me to figure out the issue. \r\n\r\n4.Here is the link of the colab file with my saved responses. \r\nhttps://colab.research.google.com/drive/1hfYz8Ira39FnREbxgwa_goZWpOojp2NH?usp=sharing", "Also, please share some links which made you conclude that pyarrow < 1 would help. ", "Access granted for the colab link. ", "Thanks for looking at this @punitaojha and thanks for sharing the notebook. \r\n\r\nI just tried to reproduce this on my own (based on the environment where I had this issue) and I can't reproduce it somehow. If I run into this again, I'll include some steps to reproduce it. I'll close this as invalid. \r\n\r\nThanks again. ", "I am sorry for hijacking this closed issue, but I believe I was able to reproduce this very issue. Strangely enough, it also turned out that running `pip install \"pyarrow<1\" --upgrade` did indeed fix the issue (PyArrow was installed in version `0.14.1` in my case).\r\n\r\nPlease see the Colab below:\r\n\r\nhttps://colab.research.google.com/drive/15QQS3xWjlKW2aK0J74eEcRFuhXUddUST\r\n\r\nThanks!" ]
1,601,681,979,000
1,607,070,159,000
1,601,844,628,000
NONE
null
null
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error: ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having installed PyArrow 1.0.1, but there's no pinning in the setup file. https://github.com/huggingface/datasets/blob/e86a2a8f869b91654e782c9133d810bb82783200/setup.py#L68 Downgrading by installing `pip install "pyarrow<1"` resolved the issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/707/timeline
completed
false
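A hedged sketch of the pinning that issue 707 asks for: constrain `pyarrow` below 1.0 in the package requirements. The package metadata and the exact lower bound below are placeholders, not the constraint the maintainers actually shipped; the quick user-side workaround from the issue is simply `pip install "pyarrow<1"`.

```python
# setup.py sketch (illustrative only): pin pyarrow below 1.x so pip resolves a
# version that the 1.0.x-era code is known to work with.
from setuptools import setup, find_packages

setup(
    name="my-project",             # placeholder metadata, not the datasets package itself
    version="0.0.1",
    packages=find_packages(),
    install_requires=[
        "pyarrow>=0.17.1,<1.0.0",  # assumed bounds; the real constraint may differ
    ],
)
```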
https://api.github.com/repos/huggingface/datasets/issues/705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/705/comments
https://api.github.com/repos/huggingface/datasets/issues/705/events
https://github.com/huggingface/datasets/issues/705
713,709,100
MDU6SXNzdWU3MTM3MDkxMDA=
705
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
{ "login": "pvcastro", "id": 12713359, "node_id": "MDQ6VXNlcjEyNzEzMzU5", "avatar_url": "https://avatars.githubusercontent.com/u/12713359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvcastro", "html_url": "https://github.com/pvcastro", "followers_url": "https://api.github.com/users/pvcastro/followers", "following_url": "https://api.github.com/users/pvcastro/following{/other_user}", "gists_url": "https://api.github.com/users/pvcastro/gists{/gist_id}", "starred_url": "https://api.github.com/users/pvcastro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvcastro/subscriptions", "organizations_url": "https://api.github.com/users/pvcastro/orgs", "repos_url": "https://api.github.com/users/pvcastro/repos", "events_url": "https://api.github.com/users/pvcastro/events{/privacy}", "received_events_url": "https://api.github.com/users/pvcastro/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "Hi !\r\nThanks for reporting :) \r\nIndeed this is an issue on the `datasets` side.\r\nI'm creating a PR", "Thanks @lhoestq !" ]
1,601,652,475,000
1,601,885,699,000
1,601,885,699,000
NONE
null
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from master) - `datasets` version: 1.0.2 (installed as a dependency from transformers) - Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.9 I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, and in csv format, containing just a text and a label columns, using comma as sep. Here's a sample: ``` text,label "Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION ``` However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section. ## To reproduce Steps to reproduce the behavior: 1. Created a new conda environment using conda env -n transformers python=3.7 2. Cloned transformers master, `cd` into it and installed using pip install --editable . -r examples/requirements.txt 3. Installed tensorflow with `pip install tensorflow` 3. Ran `run_tf_text_classification.py` with the following parameters: ``` --train_file <DATASET_PATH>/train.csv \ --dev_file <DATASET_PATH>/dev.csv \ --test_file <DATASET_PATH>/test.csv \ --label_column_id 1 \ --model_name_or_path neuralmind/bert-base-portuguese-cased \ --output_dir <OUTPUT_PATH> \ --num_train_epochs 4 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --do_train \ --do_eval \ --do_predict \ --logging_steps 1000 \ --evaluate_during_training \ --save_steps 1000 \ --overwrite_output_dir \ --overwrite_cache ``` I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Here is the stack trace: ``` 2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 /media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options) FutureWarning, 2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1 2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz 2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N 2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) 2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA 
service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1 10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False 10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False) 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock Using custom data configuration default Traceback (most recent call last): File "run_tf_text_classification.py", line 283, in <module> main() File "run_tf_text_classification.py", line 222, in main max_seq_length=data_args.max_seq_length, File "run_tf_text_classification.py", line 43, in get_tfds ds = datasets.load_dataset("csv", data_files=files) File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__ **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config for key in sorted(data_files.keys()): TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' ``` ## Expected behavior Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) Originally opened this issue at transformers' repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). 
@jplu instructed me to open here, since according to [this](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885) evidence, the problem is from datasets. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/705/timeline
completed
false
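The traceback in issue 705 fails on `sorted(data_files.keys())` because `NamedSplit` defines no ordering. Below is a small, hedged reproduction and workaround; it is not the patch that was actually merged, only the shape of the fix (compare keys through their string form, or pass plain string keys in `data_files`).

```python
import datasets

# data_files keyed by NamedSplit objects, as run_tf_text_classification.py ends up passing.
data_files = {
    datasets.Split.TRAIN: "train.csv",
    datasets.Split.VALIDATION: "dev.csv",
    datasets.Split.TEST: "test.csv",
}

# sorted(data_files.keys()) raises TypeError: '<' not supported between instances
# of 'NamedSplit' and 'NamedSplit'; sorting on str(key) works for str and NamedSplit alike.
for key in sorted(data_files.keys(), key=str):
    print(key, data_files[key])
```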
https://api.github.com/repos/huggingface/datasets/issues/699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/699/comments
https://api.github.com/repos/huggingface/datasets/issues/699/events
https://github.com/huggingface/datasets/issues/699
713,395,642
MDU6SXNzdWU3MTMzOTU2NDI=
699
XNLI dataset is not loading
{ "login": "imadarsh1001", "id": 14936525, "node_id": "MDQ6VXNlcjE0OTM2NTI1", "avatar_url": "https://avatars.githubusercontent.com/u/14936525?v=4", "gravatar_id": "", "url": "https://api.github.com/users/imadarsh1001", "html_url": "https://github.com/imadarsh1001", "followers_url": "https://api.github.com/users/imadarsh1001/followers", "following_url": "https://api.github.com/users/imadarsh1001/following{/other_user}", "gists_url": "https://api.github.com/users/imadarsh1001/gists{/gist_id}", "starred_url": "https://api.github.com/users/imadarsh1001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/imadarsh1001/subscriptions", "organizations_url": "https://api.github.com/users/imadarsh1001/orgs", "repos_url": "https://api.github.com/users/imadarsh1001/repos", "events_url": "https://api.github.com/users/imadarsh1001/events{/privacy}", "received_events_url": "https://api.github.com/users/imadarsh1001/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "also i tried below code to solve checksum error \r\n`datasets-cli test ./datasets/xnli --save_infos --all_configs`\r\n\r\nand it shows \r\n\r\n```\r\n2020-10-02 07:06:16.588760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 268, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/./datasets/xnli/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 279, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/xnli/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/conda/bin/datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/commands/test.py\", line 76, in run\r\n module_path, hash = prepare_module(path)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 283, in prepare_module\r\n combined_path, github_file_path, file_path\r\nFileNotFoundError: Couldn't find file locally at ./datasets/xnli/xnli.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/./datasets/xnli/xnli.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/xnli/xnli.py\r\n```\r\n\r\n", "Hi !\r\nYes the download url changed.\r\nIt's updated on the master branch. I'm doing a release today to fix that :)", "the issue is fixed with latest release \r\n\r\n" ]
1,601,621,596,000
1,601,747,152,000
1,601,747,017,000
NONE
null
null
`dataset = datasets.load_dataset(path='xnli')` raises the error below ``` /opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 36 if len(bad_urls) > 0: 37 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 39 logger.info("All the checksums matched successfully" + for_verification_name) 40 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip'] ``` I think the URL has now changed to "https://cims.nyu.edu/~sbowman/xnli/XNLI-MT-1.0.zip"
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/699/timeline
completed
false
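While the checksum recorded for XNLI was stale (issues 699 and 690), one stop-gap was to skip verification; a hedged sketch is below. This only bypasses the checksum/split checks raised in the traceback above — if the old www.nyu.edu URL no longer serves the archive at all, the dataset script itself has to be updated, which is what the subsequent release did.

```python
from datasets import load_dataset

# Stop-gap sketch only, assuming datasets ~1.0.x: disable the verification step
# that raises NonMatchingChecksumError. It does not repair the stale download
# URL recorded in the shipped xnli script.
xnli = load_dataset("xnli", ignore_verifications=True)
```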
https://api.github.com/repos/huggingface/datasets/issues/691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/691/comments
https://api.github.com/repos/huggingface/datasets/issues/691/events
https://github.com/huggingface/datasets/issues/691
712,389,499
MDU6SXNzdWU3MTIzODk0OTk=
691
Add UI filter to filter datasets based on task
{ "login": "praateekmahajan", "id": 7589415, "node_id": "MDQ6VXNlcjc1ODk0MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7589415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/praateekmahajan", "html_url": "https://github.com/praateekmahajan", "followers_url": "https://api.github.com/users/praateekmahajan/followers", "following_url": "https://api.github.com/users/praateekmahajan/following{/other_user}", "gists_url": "https://api.github.com/users/praateekmahajan/gists{/gist_id}", "starred_url": "https://api.github.com/users/praateekmahajan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/praateekmahajan/subscriptions", "organizations_url": "https://api.github.com/users/praateekmahajan/orgs", "repos_url": "https://api.github.com/users/praateekmahajan/repos", "events_url": "https://api.github.com/users/praateekmahajan/events{/privacy}", "received_events_url": "https://api.github.com/users/praateekmahajan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
[ "Already supported." ]
1,601,513,778,000
1,644,922,010,000
1,644,922,010,000
NONE
null
null
This is great work, so huge shoutout to contributors and huggingface. The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following tasks (non exhaustive list) - Classification - Multi label - Multi class - Q&A - Summarization - Translation I believe this feature might have some value, for folks trying to find datasets for a particular task, and then testing their model capabilities. Thank you :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/691/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/691/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/690/comments
https://api.github.com/repos/huggingface/datasets/issues/690/events
https://github.com/huggingface/datasets/issues/690
712,150,321
MDU6SXNzdWU3MTIxNTAzMjE=
690
XNLI dataset: NonMatchingChecksumError
{ "login": "xiey1", "id": 13307358, "node_id": "MDQ6VXNlcjEzMzA3MzU4", "avatar_url": "https://avatars.githubusercontent.com/u/13307358?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiey1", "html_url": "https://github.com/xiey1", "followers_url": "https://api.github.com/users/xiey1/followers", "following_url": "https://api.github.com/users/xiey1/following{/other_user}", "gists_url": "https://api.github.com/users/xiey1/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiey1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiey1/subscriptions", "organizations_url": "https://api.github.com/users/xiey1/orgs", "repos_url": "https://api.github.com/users/xiey1/repos", "events_url": "https://api.github.com/users/xiey1/events{/privacy}", "received_events_url": "https://api.github.com/users/xiey1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for reporting.\r\nThe data file must have been updated by the host.\r\nI'll update the checksum with the new one.", "Well actually it looks like the link isn't working anymore :(", "The new link is https://cims.nyu.edu/~sbowman/xnli/XNLI-1.0.zip\r\nI'll update the dataset script", "I'll do a release in the next few days to make the fix available for everyone.\r\nIn the meantime you can load `xnli` with\r\n```\r\nxnli = load_dataset('xnli', script_version=\"master\")\r\n```\r\nThis will use the latest version of the xnli script (available on master branch), instead of the old one.", "That's awesome! Thanks a lot!" ]
1,601,488,203,000
1,601,572,508,000
1,601,560,874,000
NONE
null
null
Hi, I tried to download "xnli" dataset in colab using `xnli = load_dataset(path='xnli')` but got 'NonMatchingChecksumError' error `NonMatchingChecksumError Traceback (most recent call last) <ipython-input-27-a87bedc82eeb> in <module>() ----> 1 xnli = load_dataset(path='xnli') 3 frames /usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']` The same code worked well several days ago in colab but stopped working now. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/690/timeline
completed
false
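The workaround quoted by the maintainer in the comments of issue 690 loads the `xnli` script from the master branch, which already points at the new cims.nyu.edu download URL:

```python
from datasets import load_dataset

# Use the dataset script from the master branch instead of the one bundled with
# the 1.0.2 release. script_version is the argument name of that era; newer
# releases renamed it, so adjust if you are on a more recent version.
xnli = load_dataset("xnli", script_version="master")
```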
https://api.github.com/repos/huggingface/datasets/issues/687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/687/comments
https://api.github.com/repos/huggingface/datasets/issues/687/events
https://github.com/huggingface/datasets/issues/687
711,664,810
MDU6SXNzdWU3MTE2NjQ4MTA=
687
`ArrowInvalid` occurs while running `Dataset.map()` function
{ "login": "peinan", "id": 5601012, "node_id": "MDQ6VXNlcjU2MDEwMTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5601012?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peinan", "html_url": "https://github.com/peinan", "followers_url": "https://api.github.com/users/peinan/followers", "following_url": "https://api.github.com/users/peinan/following{/other_user}", "gists_url": "https://api.github.com/users/peinan/gists{/gist_id}", "starred_url": "https://api.github.com/users/peinan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peinan/subscriptions", "organizations_url": "https://api.github.com/users/peinan/orgs", "repos_url": "https://api.github.com/users/peinan/repos", "events_url": "https://api.github.com/users/peinan/events{/privacy}", "received_events_url": "https://api.github.com/users/peinan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi !\r\n\r\nThis is because `encode` expects one single text as input (str), or one tokenized text (List[str]).\r\nI believe that you actually wanted to use `encode_batch` which expects a batch of texts.\r\nHowever this method is only available for our \"fast\" tokenizers (ex: BertTokenizerFast).\r\nBertJapanese is not one of them unfortunately and I don't think it will be added for now (see https://github.com/huggingface/transformers/pull/7141)...\r\ncc @thomwolf for confirmation.\r\n\r\nTherefore what I'd suggest for now is disable batching and process one text at a time using `encode`.\r\nNote that you can make it faster by using multiprocessing:\r\n\r\n```python\r\nnum_proc = None # Specify here the number of processes if you want to use multiprocessing. ex: num_proc = 4\r\nencoded = train_ds.map(\r\n lambda example: {'tokens': t.encode(example['title'], max_length=1000)}, num_proc=num_proc\r\n)\r\n```\r\n", "Thank you very much for the kind and precise suggestion!\r\nI'm looking forward to seeing BertJapaneseTokenizer built into the \"fast\" tokenizers.\r\n\r\nI tried `map` with multiprocessing as follows, and it worked!\r\n\r\n```python\r\n# There was a Pickle problem if I use `lambda` for multiprocessing\r\ndef encode(examples):\r\n return {'tokens': t.encode(examples['title'], max_length=1000)}\r\n\r\nnum_proc = 8\r\nencoded = train_ds.map(encode, num_proc=num_proc)\r\n```\r\n\r\nThank you again!" ]
1,601,446,610,000
1,601,459,583,000
1,601,459,583,000
NONE
null
null
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error. Code: ```python # train_ds = Dataset(features: { # 'title': Value(dtype='string', id=None), # 'score': Value(dtype='float64', id=None) # }, num_rows: 99999) # suggested in #665 class PicklableTokenizer(BertJapaneseTokenizer): def __getstate__(self): state = dict(self.__dict__) state['do_lower_case'] = self.word_tokenizer.do_lower_case state['never_split'] = self.word_tokenizer.never_split del state['word_tokenizer'] return state def __setstate(self): do_lower_case = state.pop('do_lower_case') never_split = state.pop('never_split') self.__dict__ = state self.word_tokenizer = MecabTokenizer( do_lower_case=do_lower_case, never_split=never_split ) t = PicklableTokenizer.from_pretrained('bert-base-japanese-whole-word-masking') encoded = train_ds.map( lambda examples: {'tokens': t.encode(examples['title'], max_length=1000)}, batched=True, batch_size=1000 ) ``` Error Message: ``` 99% 99/100 [00:22<00:00, 39.07ba/s] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <timed exec> in <module> /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1242 fn_kwargs=fn_kwargs, 1243 new_fingerprint=new_fingerprint, -> 1244 update_data=update_data, 1245 ) 1246 else: /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 151 "output_all_columns": self._output_all_columns, 152 } --> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 154 if new_format["columns"] is not None: 155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names)) /usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 161 # Call actual function 162 --> 163 out = func(self, *args, **kwargs) 164 165 # Update fingerprint of in-place transforms + update in-place history of transforms /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data) 1496 if update_data: 1497 batch = cast_to_python_objects(batch) -> 1498 writer.write_batch(batch) 1499 if update_data: 1500 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file /usr/local/lib/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 271 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type) 272 typed_sequence_examples[col] = typed_sequence --> 273 pa_table = pa.Table.from_pydict(typed_sequence_examples) 274 self.write_table(pa_table) 275 /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict() /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays() /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate() /usr/local/lib/python3.6/site-packages/pyarrow/error.pxi in 
pyarrow.lib.check_status() ArrowInvalid: Column 4 named tokens expected length 999 but got length 1000 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/687/timeline
completed
false
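The accepted workaround in the comments of issue 687 drops `batched=True` (since `BertJapaneseTokenizer` has no fast batch encoder) and parallelizes with `num_proc` instead. A sketch that reuses `t` and `train_ds` from the issue body above:

```python
# Per-example encoding: a module-level function (not a lambda) so it can be
# pickled when num_proc > 1. `t` is the PicklableTokenizer and `train_ds` the
# Dataset defined in the issue body; they are assumed to be in scope here.
def encode(example):
    return {"tokens": t.encode(example["title"], max_length=1000)}

encoded = train_ds.map(encode, num_proc=8)
```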
https://api.github.com/repos/huggingface/datasets/issues/686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/686/comments
https://api.github.com/repos/huggingface/datasets/issues/686/events
https://github.com/huggingface/datasets/issues/686
711,385,739
MDU6SXNzdWU3MTEzODU3Mzk=
686
Dataset browser url is still https://huggingface.co/nlp/viewer/
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "repos_url": "https://api.github.com/users/jarednielsen/repos", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new)", "This was fixed but forgot to close the issue. cc @lhoestq @yjernite \r\n\r\nThanks @jarednielsen!" ]
1,601,407,312,000
1,610,130,566,000
1,610,130,566,000
CONTRIBUTOR
null
null
Might be worth updating to https://huggingface.co/datasets/viewer/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/686/timeline
completed
false