| Column | Type | Values |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M – 1.05B |
| node_id | string | lengths 18–32 |
| number | int64 | 1 – 3.27k |
| title | string | lengths 1–276 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | null | |
| assignees | sequence | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 | 1,587B – 1,637B |
| updated_at | int64 | 1,587B – 1,637B |
| closed_at | int64 | 1,587B – 1,637B, or null (⌀) |
| author_association | string | 3 values |
| active_lock_reason | null | |
| pull_request | dict | |
| body | string | lengths 0 – 228k, or null (⌀) |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| is_pull_request | bool | 2 classes |
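Each record below lists its values in the column order given above. As a rough illustration of how a dataset with this schema might be consumed, here is a minimal sketch using the `datasets` library; the repository id `user/github-issues` and the `train` split name are placeholders, and treating `created_at`/`updated_at`/`closed_at` as Unix timestamps in milliseconds is an assumption inferred from the value ranges above.

```python
from datetime import datetime, timezone

from datasets import load_dataset

# Placeholder repo id and split name -- substitute the actual location of this dataset.
issues = load_dataset("user/github-issues", split="train")

# Drop pull requests, keeping only plain issues (the `is_pull_request` column is a bool).
issues_only = issues.filter(lambda row: not row["is_pull_request"])

# Assumption: the *_at columns hold Unix epoch times in milliseconds (values near 1.6e12).
first = issues_only[0]
created = datetime.fromtimestamp(first["created_at"] / 1000, tz=timezone.utc)
print(first["number"], first["title"], created.isoformat())
```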
https://api.github.com/repos/huggingface/datasets/issues/3267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3267/comments
https://api.github.com/repos/huggingface/datasets/issues/3267/events
https://github.com/huggingface/datasets/pull/3267
1,052,750,084
PR_kwDODunzps4ufQzB
3,267
Replacing .format() and % by f-strings
{ "login": "Mehdi2402", "id": 56029953, "node_id": "MDQ6VXNlcjU2MDI5OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehdi2402", "html_url": "https://github.com/Mehdi2402", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,636,830,722,000
updated_at: 1,636,830,722,000
closed_at: null
author_association: NONE
active_lock_reason: null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3267", "html_url": "https://github.com/huggingface/datasets/pull/3267", "diff_url": "https://github.com/huggingface/datasets/pull/3267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3267.patch" }
**Fix #3257** Replaced _.format()_ and _%_ by f-strings in the following modules : - [x] **tests** - [x] **metrics** - [x] **benchmarks** - [x] **utils** - [x] **templates** Will follow in the next PR the modules left : - [ ] **src** Module **datasets** will not be edited as asked by @mariosasko PS : black and isort applied to files
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3267/timeline
performed_via_github_app: null
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/3266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3266/comments
https://api.github.com/repos/huggingface/datasets/issues/3266/events
https://github.com/huggingface/datasets/pull/3266
1,052,700,155
PR_kwDODunzps4ufH94
3,266
fix-3264-change-download-urls
{ "login": "LashaO", "id": 28014149, "node_id": "MDQ6VXNlcjI4MDE0MTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/28014149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LashaO", "html_url": "https://github.com/LashaO", "followers_url": "https://api.github.com/users/LashaO/followers", "following_url": "https://api.github.com/users/LashaO/following{/other_user}", "gists_url": "https://api.github.com/users/LashaO/gists{/gist_id}", "starred_url": "https://api.github.com/users/LashaO/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LashaO/subscriptions", "organizations_url": "https://api.github.com/users/LashaO/orgs", "repos_url": "https://api.github.com/users/LashaO/repos", "events_url": "https://api.github.com/users/LashaO/events{/privacy}", "received_events_url": "https://api.github.com/users/LashaO/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "There seems to be problems with datasets metadata, of which I dont have access to. I think one of the datasets is from reddit. Can anyone help?" ]
created_at: 1,636,815,694,000
updated_at: 1,636,889,575,000
closed_at: null
author_association: NONE
active_lock_reason: null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3266", "html_url": "https://github.com/huggingface/datasets/pull/3266", "diff_url": "https://github.com/huggingface/datasets/pull/3266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3266.patch" }
[#3264](https://github.com/huggingface/datasets/issues/3264)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3266/timeline
performed_via_github_app: null
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/3265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3265/comments
https://api.github.com/repos/huggingface/datasets/issues/3265/events
https://github.com/huggingface/datasets/issues/3265
1,052,666,558
I_kwDODunzps4-vmq-
3,265
Checksum error for kilt_task_wow
{ "login": "slyviacassell", "id": 22296717, "node_id": "MDQ6VXNlcjIyMjk2NzE3", "avatar_url": "https://avatars.githubusercontent.com/u/22296717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slyviacassell", "html_url": "https://github.com/slyviacassell", "followers_url": "https://api.github.com/users/slyviacassell/followers", "following_url": "https://api.github.com/users/slyviacassell/following{/other_user}", "gists_url": "https://api.github.com/users/slyviacassell/gists{/gist_id}", "starred_url": "https://api.github.com/users/slyviacassell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slyviacassell/subscriptions", "organizations_url": "https://api.github.com/users/slyviacassell/orgs", "repos_url": "https://api.github.com/users/slyviacassell/repos", "events_url": "https://api.github.com/users/slyviacassell/events{/privacy}", "received_events_url": "https://api.github.com/users/slyviacassell/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Using `dataset = load_dataset(\"kilt_tasks\", \"wow\", ignore_verifications=True)` may fix it, but I am not think it is a elegant solution." ]
created_at: 1,636,805,057,000
updated_at: 1,636,810,950,000
closed_at: null
author_association: NONE
active_lock_reason: null
pull_request: null
## Describe the bug Checksum failed when downloads kilt_tasks_wow. See error output for details. ## Steps to reproduce the bug ```python import datasets datasets.load_datasets('kilt_tasks','wow') ``` ## Expected results Download successful ## Actual results ``` Downloading and preparing dataset kilt_tasks/wow (download: 72.07 MiB, generated: 61.82 MiB, post-processed: Unknown size, total: 133.89 MiB) to /root/.cache/huggingface/datasets/kilt_tasks/wow/1.0.0/57dc8b2431e76637e0c6ef79689ca4af61ed3a330e2e0cd62c8971465a35db3a... 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 5121.25it/s] 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1527.42it/s] Traceback (most recent call last): File "kilt_wow.py", line 30, in <module> main() File "kilt_wow.py", line 27, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "kilt_wow.py", line 21, in load_dataset return datasets.load_dataset('kilt_tasks','wow') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 679, in _download_and_prepare verify_checksums( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['http://dl.fbaipublicfiles.com/KILT/wow-train-kilt.jsonl', 'http://dl.fbaipublicfiles.com/KILT/wow-dev-kilt.jsonl'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3265/timeline
performed_via_github_app: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/3264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3264/comments
https://api.github.com/repos/huggingface/datasets/issues/3264/events
https://github.com/huggingface/datasets/issues/3264
1,052,663,513
I_kwDODunzps4-vl7Z
3,264
Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution
{ "login": "slyviacassell", "id": 22296717, "node_id": "MDQ6VXNlcjIyMjk2NzE3", "avatar_url": "https://avatars.githubusercontent.com/u/22296717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slyviacassell", "html_url": "https://github.com/slyviacassell", "followers_url": "https://api.github.com/users/slyviacassell/followers", "following_url": "https://api.github.com/users/slyviacassell/following{/other_user}", "gists_url": "https://api.github.com/users/slyviacassell/gists{/gist_id}", "starred_url": "https://api.github.com/users/slyviacassell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slyviacassell/subscriptions", "organizations_url": "https://api.github.com/users/slyviacassell/orgs", "repos_url": "https://api.github.com/users/slyviacassell/repos", "events_url": "https://api.github.com/users/slyviacassell/events{/privacy}", "received_events_url": "https://api.github.com/users/slyviacassell/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "#take\r\nI am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy with new ones provided by authors.\r\n\r\nAs for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. I can include them in the dataset folder as the files are <1MB in size total.", "> #take I am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy.\r\n> \r\n> As for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. Anyone has opinions on whether it is preferable for me to host them somewhere (e.g. personal GDrive account) or upload them to the dataset folder directly and use github raw URLs? The files are <1MB in size.\r\n\r\nI am planning to fix it next few days. But my to-do list is full and I do not have the cache of definite_pronoun_resolution. I am glad that you can take this. Thanks a lot!", "No problem, buddy! Will submit a PR over this weekend." ]
created_at: 1,636,804,032,000
updated_at: 1,636,810,761,000
closed_at: null
author_association: NONE
active_lock_reason: null
pull_request: null
## Describe the bug - WikiAuto Manual The original manual datasets with the following downloading URL in this [repository](https://github.com/chaojiang06/wiki-auto) was [deleted](https://github.com/chaojiang06/wiki-auto/commit/0af9b066f2b4e02726fb8a9be49283c0ad25367f) by the author. ``` https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv ``` - jeopardy The downloading URL for jeopardy may move from ``` http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` to ``` https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?resourcekey=0-1abK4cJq-mqxFoSg86ieIg ``` - definite_pronoun_resolution The following downloading URL for definite_pronoun_resolution cannot be reached for some reasons. ``` http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt ``` ## Steps to reproduce the bug ```python import datasets datasets.load_datasets('wiki_auto','manual') datasets.load_datasets('jeopardy') datasets.load_datasets('definite_pronoun_resolution') ``` ## Expected results Download successfully ## Actual results - WikiAuto Manual ``` Downloading and preparing dataset wiki_auto/manual (download: 151.65 MiB, generated: 155.97 MiB, post-processed: Unknown size, total: 307.61 MiB) to /root/.cache/huggingface/datasets/wiki_auto/manual/1.0.0/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8... 0%| | 0/3 [00:00<?, ?it/s]Traceback (most recent call last): File "wiki_auto.py", line 43, in <module> main() File "wiki_auto.py", line 40, in main train, dev, test = dataset.generate_k_shot_data(k=16, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 24, in generate_k_shot_data dataset = self.load_dataset() File "wiki_auto.py", line 34, in load_dataset return datasets.load_dataset('wiki_auto', 'manual') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wiki_auto/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8/wiki_auto.py", line 193, in _split_generators data_dir = dl_manager.download_and_extract(my_urls) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested mapped = [ File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested return function(data_struct) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File 
"/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 592, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv ``` - jeopardy ``` Using custom data configuration default Downloading and preparing dataset jeopardy/default (download: 12.13 MiB, generated: 34.46 MiB, post-processed: Unknown size, total: 46.59 MiB) to /root/.cache/huggingface/datasets/jeopardy/default/0.1.0/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810... Traceback (most recent call last): File "jeopardy.py", line 45, in <module> main() File "jeopardy.py", line 42, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "jeopardy.py", line 36, in load_dataset return datasets.load_dataset("jeopardy") File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/jeopardy/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810/jeopardy.py", line 72, in _split_generators filepath = dl_manager.download_and_extract(_DATA_URL) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested return function(data_struct) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` - definite_pronoun_resolution ``` Downloading and preparing dataset definite_pronoun_resolution/plain_text (download: 222.12 KiB, generated: 239.12 KiB, post-processed: Unknown size, total: 461.24 KiB) to /root/.cache/huggingface/datasets/definite_pronoun_resolution/plain_text/1.0.0/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff... 
0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last): File "definite_pronoun_resolution.py", line 37, in <module> main() File "definite_pronoun_resolution.py", line 34, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "definite_pronoun_resolution.py", line 28, in load_dataset return datasets.load_dataset('definite_pronoun_resolution') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/definite_pronoun_resolution/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff/definite_pronoun_resolution.py", line 76, in _split_generators files = dl_manager.download_and_extract( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested mapped = [ File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested return function(data_struct) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3264/timeline
performed_via_github_app: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/3263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3263/comments
https://api.github.com/repos/huggingface/datasets/issues/3263/events
https://github.com/huggingface/datasets/issues/3263
1,052,552,516
I_kwDODunzps4-vK1E
3,263
FET DATA
{ "login": "FStell01", "id": 90987031, "node_id": "MDQ6VXNlcjkwOTg3MDMx", "avatar_url": "https://avatars.githubusercontent.com/u/90987031?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FStell01", "html_url": "https://github.com/FStell01", "followers_url": "https://api.github.com/users/FStell01/followers", "following_url": "https://api.github.com/users/FStell01/following{/other_user}", "gists_url": "https://api.github.com/users/FStell01/gists{/gist_id}", "starred_url": "https://api.github.com/users/FStell01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FStell01/subscriptions", "organizations_url": "https://api.github.com/users/FStell01/orgs", "repos_url": "https://api.github.com/users/FStell01/repos", "events_url": "https://api.github.com/users/FStell01/events{/privacy}", "received_events_url": "https://api.github.com/users/FStell01/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,636,782,366,000
updated_at: 1,636,810,307,000
closed_at: 1,636,810,307,000
author_association: NONE
active_lock_reason: null
pull_request: null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3263/timeline
performed_via_github_app: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/3262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3262/comments
https://api.github.com/repos/huggingface/datasets/issues/3262/events
https://github.com/huggingface/datasets/pull/3262
1,052,455,082
PR_kwDODunzps4uej4t
3,262
asserts replaced with exception for image classification task, csv, json
{ "login": "manisnesan", "id": 153142, "node_id": "MDQ6VXNlcjE1MzE0Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/153142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manisnesan", "html_url": "https://github.com/manisnesan", "followers_url": "https://api.github.com/users/manisnesan/followers", "following_url": "https://api.github.com/users/manisnesan/following{/other_user}", "gists_url": "https://api.github.com/users/manisnesan/gists{/gist_id}", "starred_url": "https://api.github.com/users/manisnesan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manisnesan/subscriptions", "organizations_url": "https://api.github.com/users/manisnesan/orgs", "repos_url": "https://api.github.com/users/manisnesan/repos", "events_url": "https://api.github.com/users/manisnesan/events{/privacy}", "received_events_url": "https://api.github.com/users/manisnesan/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,636,756,499,000
updated_at: 1,636,756,851,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3262", "html_url": "https://github.com/huggingface/datasets/pull/3262", "diff_url": "https://github.com/huggingface/datasets/pull/3262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3262.patch" }
Fixes for csv, json in io module and image_classification task with tests referenced in https://github.com/huggingface/datasets/issues/3171
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3262/timeline
performed_via_github_app: null
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/3261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3261/comments
https://api.github.com/repos/huggingface/datasets/issues/3261/events
https://github.com/huggingface/datasets/issues/3261
1,052,346,381
I_kwDODunzps4-uYgN
3,261
Scifi_TV_Shows: Having trouble getting viewer to find appropriate files
{ "login": "lara-martin", "id": 37913218, "node_id": "MDQ6VXNlcjM3OTEzMjE4", "avatar_url": "https://avatars.githubusercontent.com/u/37913218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lara-martin", "html_url": "https://github.com/lara-martin", "followers_url": "https://api.github.com/users/lara-martin/followers", "following_url": "https://api.github.com/users/lara-martin/following{/other_user}", "gists_url": "https://api.github.com/users/lara-martin/gists{/gist_id}", "starred_url": "https://api.github.com/users/lara-martin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lara-martin/subscriptions", "organizations_url": "https://api.github.com/users/lara-martin/orgs", "repos_url": "https://api.github.com/users/lara-martin/repos", "events_url": "https://api.github.com/users/lara-martin/events{/privacy}", "received_events_url": "https://api.github.com/users/lara-martin/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,636,745,119,000
updated_at: 1,636,745,119,000
closed_at: null
author_association: NONE
active_lock_reason: null
pull_request: null
## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*' **Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows) I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/tree/main/dummy), but the viewer still has a 404 error ("Not found. Maybe the cache is missing, or maybe the ressource does not exist."). I'm not sure what to try next. Thanks in advance! Am I the one who added this dataset? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3261/timeline
performed_via_github_app: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/3260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3260/comments
https://api.github.com/repos/huggingface/datasets/issues/3260/events
https://github.com/huggingface/datasets/pull/3260
1,052,247,373
PR_kwDODunzps4ueCIU
3,260
Fix ConnectionError in Scielo dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,636,740,157,000
updated_at: 1,636,740,275,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3260", "html_url": "https://github.com/huggingface/datasets/pull/3260", "diff_url": "https://github.com/huggingface/datasets/pull/3260.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3260.patch" }
This PR: * allows 403 status code in HEAD requests to S3 buckets to fix the connection error in the Scielo dataset (instead of `url`, uses `response.url` to check the URL of the final endpoint) * makes the Scielo dataset streamable Fixes #3255.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3260/timeline
performed_via_github_app: null
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/3259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3259/comments
https://api.github.com/repos/huggingface/datasets/issues/3259/events
https://github.com/huggingface/datasets/pull/3259
1,052,189,775
PR_kwDODunzps4ud5W3
3,259
Updating details of IRC disentanglement data
{ "login": "jkkummerfeld", "id": 1298052, "node_id": "MDQ6VXNlcjEyOTgwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/1298052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jkkummerfeld", "html_url": "https://github.com/jkkummerfeld", "followers_url": "https://api.github.com/users/jkkummerfeld/followers", "following_url": "https://api.github.com/users/jkkummerfeld/following{/other_user}", "gists_url": "https://api.github.com/users/jkkummerfeld/gists{/gist_id}", "starred_url": "https://api.github.com/users/jkkummerfeld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jkkummerfeld/subscriptions", "organizations_url": "https://api.github.com/users/jkkummerfeld/orgs", "repos_url": "https://api.github.com/users/jkkummerfeld/repos", "events_url": "https://api.github.com/users/jkkummerfeld/events{/privacy}", "received_events_url": "https://api.github.com/users/jkkummerfeld/received_events", "type": "User", "site_admin": false }
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,636,737,418,000
updated_at: 1,636,737,418,000
closed_at: null
author_association: NONE
active_lock_reason: null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3259", "html_url": "https://github.com/huggingface/datasets/pull/3259", "diff_url": "https://github.com/huggingface/datasets/pull/3259.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3259.patch" }
I was pleasantly surprised to find that someone had already added my dataset to the huggingface library, but some details were missing or incorrect. This PR fixes the documentation.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3259/timeline
performed_via_github_app: null
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/3258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3258/comments
https://api.github.com/repos/huggingface/datasets/issues/3258/events
https://github.com/huggingface/datasets/issues/3258
1,052,188,195
I_kwDODunzps4-tx4j
3,258
Reload dataset that was already downloaded with `load_from_disk` from cloud storage
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,636,737,299,000
updated_at: 1,636,737,299,000
closed_at: null
author_association: MEMBER
active_lock_reason: null
pull_request: null
`load_from_disk` downloads the dataset to a temporary directory without checking if the dataset has already been downloaded once. It would be nice to have some sort of caching for datasets downloaded this way. This could leverage the fingerprint of the dataset that was saved in the `state.json` file.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3258/timeline
performed_via_github_app: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/3257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3257/comments
https://api.github.com/repos/huggingface/datasets/issues/3257/events
https://github.com/huggingface/datasets/issues/3257
1,052,118,365
I_kwDODunzps4-tg1d
3,257
Use f-strings for string formatting
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
state: open
locked: false
assignee: null
[ { "login": "Mehdi2402", "id": 56029953, "node_id": "MDQ6VXNlcjU2MDI5OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehdi2402", "html_url": "https://github.com/Mehdi2402", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "type": "User", "site_admin": false } ]
milestone: null
comments: [ "Hi, I would be glad to help with this. Is there anyone else working on it?", "Hi, I would be glad to work on this too.", "#self-assign" ]
created_at: 1,636,732,935,000
updated_at: 1,636,822,451,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
pull_request: null
f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax. > **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per PR. Also, feel free to ignore the files located under `datasets/*`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3257/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3257/timeline
performed_via_github_app: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/3256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3256/comments
https://api.github.com/repos/huggingface/datasets/issues/3256/events
https://github.com/huggingface/datasets/pull/3256
1,052,000,613
PR_kwDODunzps4udTqg
3,256
asserts replaced by exception for text classification task with test.
{ "login": "manisnesan", "id": 153142, "node_id": "MDQ6VXNlcjE1MzE0Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/153142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manisnesan", "html_url": "https://github.com/manisnesan", "followers_url": "https://api.github.com/users/manisnesan/followers", "following_url": "https://api.github.com/users/manisnesan/following{/other_user}", "gists_url": "https://api.github.com/users/manisnesan/gists{/gist_id}", "starred_url": "https://api.github.com/users/manisnesan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manisnesan/subscriptions", "organizations_url": "https://api.github.com/users/manisnesan/orgs", "repos_url": "https://api.github.com/users/manisnesan/repos", "events_url": "https://api.github.com/users/manisnesan/events{/privacy}", "received_events_url": "https://api.github.com/users/manisnesan/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Haha it looks like you got the chance of being reviewed twice at the same time and got the same suggestion twice x)\r\nAnyway it's all good now so we can merge !", "Thanks for the feedback. " ]
created_at: 1,636,725,936,000
updated_at: 1,636,729,773,000
closed_at: 1,636,729,172,000
author_association: CONTRIBUTOR
active_lock_reason: null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3256", "html_url": "https://github.com/huggingface/datasets/pull/3256", "diff_url": "https://github.com/huggingface/datasets/pull/3256.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3256.patch" }
I have replaced only a single assert in text_classification.py along with a unit test to verify an exception is raised based on https://github.com/huggingface/datasets/issues/3171 . I would like to first understand the code contribution workflow. So keeping the change to a single file rather than making too many changes. Once this gets approved, I will look into the rest. Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3256/timeline
performed_via_github_app: null
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/3255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3255/comments
https://api.github.com/repos/huggingface/datasets/issues/3255/events
https://github.com/huggingface/datasets/issues/3255
1,051,783,129
I_kwDODunzps4-sO_Z
3,255
SciELO dataset ConnectionError
{ "login": "WojciechKusa", "id": 2575047, "node_id": "MDQ6VXNlcjI1NzUwNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/2575047?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WojciechKusa", "html_url": "https://github.com/WojciechKusa", "followers_url": "https://api.github.com/users/WojciechKusa/followers", "following_url": "https://api.github.com/users/WojciechKusa/following{/other_user}", "gists_url": "https://api.github.com/users/WojciechKusa/gists{/gist_id}", "starred_url": "https://api.github.com/users/WojciechKusa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WojciechKusa/subscriptions", "organizations_url": "https://api.github.com/users/WojciechKusa/orgs", "repos_url": "https://api.github.com/users/WojciechKusa/repos", "events_url": "https://api.github.com/users/WojciechKusa/events{/privacy}", "received_events_url": "https://api.github.com/users/WojciechKusa/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
milestone: null
comments: []
created_at: 1,636,711,034,000
updated_at: 1,636,733,150,000
closed_at: null
author_association: NONE
active_lock_reason: null
pull_request: null
## Describe the bug I get `ConnectionError` when I am trying to load the SciELO dataset. When I try the URL with `requests` I get: ``` >>> requests.head("https://ndownloader.figstatic.com/files/14019287") <Response [302]> ``` And as far as I understand redirections in `datasets` are not supported for downloads. https://github.com/huggingface/datasets/blob/807341d0db0728073ab605c812c67f927d148f38/datasets/scielo/scielo.py#L45 ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("scielo", "en-es") ``` ## Expected results Download SciELO dataset and load Dataset object ## Actual results ``` Downloading and preparing dataset scielo/en-es (download: 21.90 MiB, generated: 68.45 MiB, post-processed: Unknown size, total: 90.35 MiB) to /Users/test/.cache/huggingface/datasets/scielo/en-es/1.0.0/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e... Traceback (most recent call last): File "scielo.py", line 3, in <module> dataset = load_dataset("scielo", "en-es") File "../lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "../lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "../lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/test/.cache/huggingface/modules/datasets_modules/datasets/scielo/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e/scielo.py", line 77, in _split_generators data_dir = dl_manager.download_and_extract(_URLS[self.config.name]) File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "../lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested return function(data_struct) File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://ndownloader.figstatic.com/files/14019287 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.12 - PyArrow version: 6.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3255/timeline
performed_via_github_app: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/3254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3254/comments
https://api.github.com/repos/huggingface/datasets/issues/3254/events
https://github.com/huggingface/datasets/pull/3254
1,051,351,172
PR_kwDODunzps4ubPwR
3,254
Update xcopa dataset (fix checksum issues + add translated data)
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "The CI failures are unrelated to the changes (missing fields in the readme and the CER metric error fixed in #3252)." ]
created_at: 1,636,663,893,000
updated_at: 1,636,713,058,000
closed_at: 1,636,713,057,000
author_association: CONTRIBUTOR
active_lock_reason: null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3254", "html_url": "https://github.com/huggingface/datasets/pull/3254", "diff_url": "https://github.com/huggingface/datasets/pull/3254.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3254.patch" }
This PR updates the checksums (as reported [here](https://discuss.huggingface.co/t/how-to-load-dataset-locally/11601/2)) of the `xcopa` dataset. Additionally, it adds new configs that hold the translated data of the original set of configs. This data was not available at the time of adding this dataset to the lib.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3254/timeline
performed_via_github_app: null
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/3253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3253/comments
https://api.github.com/repos/huggingface/datasets/issues/3253/events
https://github.com/huggingface/datasets/issues/3253
1,051,308,972
I_kwDODunzps4-qbOs
3,253
`GeneratorBasedBuilder` does not support `None` values
{ "login": "pavel-lexyr", "id": 69010336, "node_id": "MDQ6VXNlcjY5MDEwMzM2", "avatar_url": "https://avatars.githubusercontent.com/u/69010336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pavel-lexyr", "html_url": "https://github.com/pavel-lexyr", "followers_url": "https://api.github.com/users/pavel-lexyr/followers", "following_url": "https://api.github.com/users/pavel-lexyr/following{/other_user}", "gists_url": "https://api.github.com/users/pavel-lexyr/gists{/gist_id}", "starred_url": "https://api.github.com/users/pavel-lexyr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pavel-lexyr/subscriptions", "organizations_url": "https://api.github.com/users/pavel-lexyr/orgs", "repos_url": "https://api.github.com/users/pavel-lexyr/repos", "events_url": "https://api.github.com/users/pavel-lexyr/events{/privacy}", "received_events_url": "https://api.github.com/users/pavel-lexyr/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "Hi,\r\n\r\nthanks for reporting and providing a minimal reproducible example. \r\n\r\nThis line of the PR I've linked in our discussion on the Forum will add support for `None` values:\r\nhttps://github.com/huggingface/datasets/blob/a53de01842aac65c66a49b2439e18fa93ff73ceb/src/datasets/features/features.py#L835\r\n\r\nI expect that PR to be merged soon." ]
created_at: 1,636,660,281,000
updated_at: 1,636,716,750,000
closed_at: null
author_association: NONE
active_lock_reason: null
pull_request: null
## Describe the bug `GeneratorBasedBuilder` does not support `None` values. ## Steps to reproduce the bug See [this repository](https://github.com/pavel-lexyr/huggingface-datasets-bug-reproduction) for minimal reproduction. ## Expected results Dataset is initialized with a `None` value in the `value` column. ## Actual results ``` Traceback (most recent call last): File "main.py", line 3, in <module> datasets.load_dataset("./bad-data") File ".../datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File ".../datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File ".../datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File ".../datasets/builder.py", line 1103, in _prepare_split example = self.info.features.encode_example(record) File ".../datasets/features/features.py", line 1033, in encode_example return encode_nested_example(self, example) File ".../datasets/features/features.py", line 808, in encode_nested_example return { File ".../datasets/features/features.py", line 809, in <dictcomp> k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File ".../datasets/features/features.py", line 855, in encode_nested_example return schema.encode_example(obj) File ".../datasets/features/features.py", line 299, in encode_example return float(value) TypeError: float() argument must be a string or a number, not 'NoneType' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 6.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3253/timeline
performed_via_github_app: null
is_pull_request: false
https://api.github.com/repos/huggingface/datasets/issues/3252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3252/comments
https://api.github.com/repos/huggingface/datasets/issues/3252/events
https://github.com/huggingface/datasets/pull/3252
1,051,124,749
PR_kwDODunzps4uagoy
3,252
Fix failing CER metric test in CI after update
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,646,236,000
1,636,726,004,000
1,636,726,003,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3252", "html_url": "https://github.com/huggingface/datasets/pull/3252", "diff_url": "https://github.com/huggingface/datasets/pull/3252.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3252.patch" }
Fixes the [failing CER metric test](https://app.circleci.com/pipelines/github/huggingface/datasets/8644/workflows/79816553-fa2f-4756-b022-d5937f00bf7b/jobs/53298) in CI by adding support for `jiwer==2.3.0`, which was released yesterday. Also, I verified that all the tests in `metrics/cer/test_cer.py` pass after the change, so the results should be the same irrespective of the `jiwer` version.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3252/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3250
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3250/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3250/comments
https://api.github.com/repos/huggingface/datasets/issues/3250/events
https://github.com/huggingface/datasets/pull/3250
1,050,541,348
PR_kwDODunzps4uYmkr
3,250
Add ETHICS dataset
{ "login": "ssss1029", "id": 7088559, "node_id": "MDQ6VXNlcjcwODg1NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/7088559?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ssss1029", "html_url": "https://github.com/ssss1029", "followers_url": "https://api.github.com/users/ssss1029/followers", "following_url": "https://api.github.com/users/ssss1029/following{/other_user}", "gists_url": "https://api.github.com/users/ssss1029/gists{/gist_id}", "starred_url": "https://api.github.com/users/ssss1029/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ssss1029/subscriptions", "organizations_url": "https://api.github.com/users/ssss1029/orgs", "repos_url": "https://api.github.com/users/ssss1029/repos", "events_url": "https://api.github.com/users/ssss1029/events{/privacy}", "received_events_url": "https://api.github.com/users/ssss1029/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,636,602,334,000
1,636,665,960,000
null
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3250", "html_url": "https://github.com/huggingface/datasets/pull/3250", "diff_url": "https://github.com/huggingface/datasets/pull/3250.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3250.patch" }
This PR adds the ETHICS dataset, including all 5 sub-datasets. From https://arxiv.org/abs/2008.02275
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3250/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/3250/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3249/comments
https://api.github.com/repos/huggingface/datasets/issues/3249/events
https://github.com/huggingface/datasets/pull/3249
1,050,193,138
PR_kwDODunzps4uXeea
3,249
Fix streaming for id_newspapers_2018
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,570,530,000
1,636,725,692,000
1,636,725,691,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3249", "html_url": "https://github.com/huggingface/datasets/pull/3249", "diff_url": "https://github.com/huggingface/datasets/pull/3249.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3249.patch" }
To be compatible with streaming, this dataset must use `dl_manager.iter_archive` since the data are in a .tgz file
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3249/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3248
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3248/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3248/comments
https://api.github.com/repos/huggingface/datasets/issues/3248/events
https://github.com/huggingface/datasets/pull/3248
1,050,171,082
PR_kwDODunzps4uXZzU
3,248
Stream from Google Drive and other hosts
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I just tried some datasets and noticed that `spider` is not working for some reason (the compression type is not recognized), resulting in FileNotFoundError. I can take a look tomorrow", "I'm fixing the remaining files based on TAR archives" ]
1,636,569,152,000
1,636,737,492,000
1,636,737,491,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3248", "html_url": "https://github.com/huggingface/datasets/pull/3248", "diff_url": "https://github.com/huggingface/datasets/pull/3248.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3248.patch" }
Streaming from Google Drive is a bit more challenging than the other hosts we've been supporting: - the download URL must be updated to add the confirm token obtained by HEAD request - it requires using cookies to keep the connection alive - the URL doesn't give any information about whether the file is compressed or not Therefore I did two things: - I added a step for URL and headers/cookies preparation in the StreamingDownloadManager - I added automatic compression type inference by reading the [magic number](https://en.wikipedia.org/wiki/List_of_file_signatures) This allows us to do fancy things like ```python from datasets.utils.streaming_download_manager import StreamingDownloadManager, xopen, xjoin, xglob # zip file containing a train.tsv file url = "https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh" extracted = StreamingDownloadManager().download_and_extract(url) for inner_file in xglob(xjoin(extracted, "*.tsv")): with xopen(inner_file) as f: # streaming starts here for line in f: print(line) ``` This should make around 80 datasets streamable. It concerns those hosted on Google Drive but also any dataset for which the URL doesn't give any information about compression. Here is the full list: ``` amazon_polarity, ami, arabic_billion_words, ascent_kb, asset, big_patent, billsum, capes, cmrc2018, cnn_dailymail, code_x_glue_cc_code_completion_token, code_x_glue_cc_code_refinement, code_x_glue_cc_code_to_code_trans, code_x_glue_tt_text_to_text, conll2002, craigslist_bargains, dbpedia_14, docred, ehealth_kd, emo, euronews, germeval_14, gigaword, grail_qa, great_code, has_part, head_qa, health_fact, hope_edi, id_newspapers_2018, igbo_english_machine_translation, irc_disentangle, jfleg, jnlpba, journalists_questions, kor_ner, linnaeus, med_hop, mrqa, mt_eng_vietnamese, multi_news, norwegian_ner, offcombr, offenseval_dravidian, para_pat, peoples_daily_ner, pn_summary, poleval2019_mt, pubmed_qa, qangaroo, reddit_tifu, refresd, ro_sts_parallel, russian_super_glue, samsum, sberquad, scielo, search_qa, species_800, spider, squad_adversarial, tamilmixsentiment, tashkeela, ted_talks_iwslt, trec, turk, turkish_ner, twi_text_c3, universal_morphologies, web_of_science, weibo_ner, wiki_bio, wiki_hop, wiki_lingua, wiki_summary, wili_2018, wisesight1000, wnut_17, yahoo_answers_topics, yelp_review_full, yoruba_text_c3 ``` Some of them may not work if the host doesn't support HTTP range requests, for example. Fix https://github.com/huggingface/datasets/issues/2742 Fix https://github.com/huggingface/datasets/issues/3188
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3248/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3248/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3247/comments
https://api.github.com/repos/huggingface/datasets/issues/3247/events
https://github.com/huggingface/datasets/issues/3247
1,049,699,088
I_kwDODunzps4-kSMQ
3,247
Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError
{ "login": "maxzirps", "id": 29249513, "node_id": "MDQ6VXNlcjI5MjQ5NTEz", "avatar_url": "https://avatars.githubusercontent.com/u/29249513?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maxzirps", "html_url": "https://github.com/maxzirps", "followers_url": "https://api.github.com/users/maxzirps/followers", "following_url": "https://api.github.com/users/maxzirps/following{/other_user}", "gists_url": "https://api.github.com/users/maxzirps/gists{/gist_id}", "starred_url": "https://api.github.com/users/maxzirps/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxzirps/subscriptions", "organizations_url": "https://api.github.com/users/maxzirps/orgs", "repos_url": "https://api.github.com/users/maxzirps/repos", "events_url": "https://api.github.com/users/maxzirps/events{/privacy}", "received_events_url": "https://api.github.com/users/maxzirps/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi,\r\n\r\nthis issue is similar to https://github.com/huggingface/datasets/issues/3093, so you can either use the solution provided there or try to load the data in one chunk (you can control the chunk size by specifying the `chunksize` parameter (`int`) in `load_dataset`).\r\n\r\n@lhoestq Is this worth opening an issue on Jira? Basically, PyArrow doesn't allow casts that change the order of the struct fields because they treat `pa.struct` as an ordered sequence. Reordering fields manually in Python is probably too slow, so I think this needs to be fixed by them to be usable on our side.", "I agree I would expect PyArrow to be able to handle this, do you want to open the issue @mariosasko ?\r\nAlthough maybe it's possible to fix struct casting on our side without hurting performance too much, if it's simply a matter of reordering the arrays in the StructArray" ]
1,636,543,079,000
1,636,712,753,000
null
NONE
null
null
## Describe the bug When trying to create a dataset from a json file with around 25MB, the following error is raised `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct` Splitting the big file into smaller ones and then loading it with the `load_dataset` method did also not work. Creating a pandas dataframe from it and then loading it with `Dataset.from_pandas` works ## Steps to reproduce the bug ```python load_dataset("json", data_files="test.json") ``` test.json ~25MB ```json {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} ... ``` working.json ~160bytes ```json {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} ``` ## Expected results It should load the dataset from the json file without error. ## Actual results It raises Exception `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct` ``` Traceback (most recent call last): File "/Users/m/workspace/xxx/project/main.py", line 60, in <module> dataset = load_dataset("json", data_files="result.json") File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/load.py", line 1627, in load_dataset builder_instance.download_and_prepare( File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 1159, in _prepare_split writer.write_table(table) File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/arrow_writer.py", line 428, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1685, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 630, in pyarrow.lib._sanitize_arrays File "pyarrow/array.pxi", line 338, in pyarrow.lib.asarray File "pyarrow/table.pxi", line 304, in pyarrow.lib.ChunkedArray.cast File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/pyarrow/compute.py", line 309, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 528, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 327, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct ``` ## Environment info - `datasets` version: 1.14.0 - Platform: macOS-12.0.1-arm64-arm-64bit - Python version: 3.9.7 - PyArrow version: 6.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3247/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/3246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3246/comments
https://api.github.com/repos/huggingface/datasets/issues/3246/events
https://github.com/huggingface/datasets/pull/3246
1,049,662,746
PR_kwDODunzps4uVvaW
3,246
[tiny] fix typo in stream docs
{ "login": "nollied", "id": 26421036, "node_id": "MDQ6VXNlcjI2NDIxMDM2", "avatar_url": "https://avatars.githubusercontent.com/u/26421036?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nollied", "html_url": "https://github.com/nollied", "followers_url": "https://api.github.com/users/nollied/followers", "following_url": "https://api.github.com/users/nollied/following{/other_user}", "gists_url": "https://api.github.com/users/nollied/gists{/gist_id}", "starred_url": "https://api.github.com/users/nollied/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nollied/subscriptions", "organizations_url": "https://api.github.com/users/nollied/orgs", "repos_url": "https://api.github.com/users/nollied/repos", "events_url": "https://api.github.com/users/nollied/events{/privacy}", "received_events_url": "https://api.github.com/users/nollied/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,540,802,000
1,636,542,639,000
1,636,542,639,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3246", "html_url": "https://github.com/huggingface/datasets/pull/3246", "diff_url": "https://github.com/huggingface/datasets/pull/3246.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3246.patch" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3246/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3245
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3245/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3245/comments
https://api.github.com/repos/huggingface/datasets/issues/3245/events
https://github.com/huggingface/datasets/pull/3245
1,048,726,062
PR_kwDODunzps4uSqqq
3,245
Fix load_from_disk temporary directory
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,470,915,000
1,636,471,852,000
1,636,471,851,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3245", "html_url": "https://github.com/huggingface/datasets/pull/3245", "diff_url": "https://github.com/huggingface/datasets/pull/3245.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3245.patch" }
`load_from_disk` uses `tempfile.TemporaryDirectory()` instead of our `get_temporary_cache_files_directory()` function. This can cause the temporary directory to be deleted before the dataset object is garbage collected. In practice, it prevents anyone from using methods like `shuffle` on a dataset loaded this way, because it can't write the shuffled indices in a directory that doesn't exist anymore. In this PR I switch to using `get_temporary_cache_files_directory()` and I update the tests. cc @mariosasko since you worked on `get_temporary_cache_files_directory()`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3245/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3245/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3244
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3244/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3244/comments
https://api.github.com/repos/huggingface/datasets/issues/3244/events
https://github.com/huggingface/datasets/pull/3244
1,048,675,741
PR_kwDODunzps4uSgG5
3,244
Fix filter method for batched=True
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,468,259,000
1,636,473,178,000
1,636,473,177,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3244", "html_url": "https://github.com/huggingface/datasets/pull/3244", "diff_url": "https://github.com/huggingface/datasets/pull/3244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3244.patch" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3244/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3243/comments
https://api.github.com/repos/huggingface/datasets/issues/3243/events
https://github.com/huggingface/datasets/pull/3243
1,048,630,754
PR_kwDODunzps4uSWtB
3,243
Remove redundant isort module placement
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,465,830,000
1,636,725,765,000
1,636,725,765,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3243", "html_url": "https://github.com/huggingface/datasets/pull/3243", "diff_url": "https://github.com/huggingface/datasets/pull/3243.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3243.patch" }
`isort` can place modules by itself from [version 5.0.0](https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html#module-placement-changes-known_third_party-known_first_party-default_section-etc) onwards, making the `known_first_party` and `known_third_party` fields in `setup.cfg` redundant (this is why our CI works, even though we haven't touched these options in a while).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3243/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3242
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3242/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3242/comments
https://api.github.com/repos/huggingface/datasets/issues/3242/events
https://github.com/huggingface/datasets/issues/3242
1,048,527,232
I_kwDODunzps4-f0GA
3,242
Adding ANERcorp-CAMeLLab dataset
{ "login": "vitalyshalumov", "id": 33824221, "node_id": "MDQ6VXNlcjMzODI0MjIx", "avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vitalyshalumov", "html_url": "https://github.com/vitalyshalumov", "followers_url": "https://api.github.com/users/vitalyshalumov/followers", "following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}", "gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}", "starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions", "organizations_url": "https://api.github.com/users/vitalyshalumov/orgs", "repos_url": "https://api.github.com/users/vitalyshalumov/repos", "events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}", "received_events_url": "https://api.github.com/users/vitalyshalumov/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Adding ANERcorp dataset\r\n\r\n## Adding a Dataset\r\n- **Name:** *ANERcorp-CAMeLLab*\r\n- **Description:** *Since its creation in 2008, the ANERcorp dataset (Benajiba & Rosso, 2008) has been a standard reference used by Arabic named entity recognition researchers around the world. However, over time, this dataset was copied over from user to user, modified slightly here and there, and split in many different configurations that made it hard to compare fairly across papers and systems.\r\n\r\nIn 2020, a group of researchers from CAMeL Lab (Habash, Alhafni and Oudah), and Mind Lab (Antoun and Baly) met with the creator of the corpus, Yassine Benajiba, to consult with him and collectively agree on an exact split, and accepted minor corrections from the original dataset. Bashar Alhafni from CAMeL Lab working with Nizar Habash implemented the decisions provided in this release.*\r\n\r\n- **Paper:** *(a) Benajiba, Yassine, Paolo Rosso, and José Miguel Benedí Ruiz. \"Anersys: An Arabic named entity recognition system based on maximum entropy.\" In International Conference on Intelligent Text Processing and Computational Linguistics, pp. 143-153. Springer, Berlin, Heidelberg, 2007.\r\n\r\n(b)Ossama Obeid, Nasser Zalmout, Salam Khalifa, Dima Taji, Mai Oudah, Bashar Alhafni, Go Inoue, Fadhl Eryani, Alexander Erdmann, and Nizar Habash. \"CAMeL Tools: An Open Source Python Toolkit, for Arabic Natural Language Processing.\" In Proceedings of the Conference on Language Resources and Evaluation (LREC 2020), Marseille, 2020.*\r\n- **Data:** *https://camel.abudhabi.nyu.edu/anercorp/*\r\n- **Motivation:** This is the standard dataset for evaluating NER performance in Arabic*\r\n\r\nInstructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)." ]
1,636,459,444,000
1,636,461,675,000
null
NONE
null
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3242/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/3241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3241/comments
https://api.github.com/repos/huggingface/datasets/issues/3241/events
https://github.com/huggingface/datasets/pull/3241
1,048,461,852
PR_kwDODunzps4uRzHa
3,241
Swap descriptions of v1 and raw-v1 configs of WikiText dataset and fix metadata
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,455,255,000
1,636,465,769,000
1,636,465,768,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3241", "html_url": "https://github.com/huggingface/datasets/pull/3241", "diff_url": "https://github.com/huggingface/datasets/pull/3241.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3241.patch" }
Fix #3237.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3241/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3240/comments
https://api.github.com/repos/huggingface/datasets/issues/3240/events
https://github.com/huggingface/datasets/issues/3240
1,048,376,021
I_kwDODunzps4-fPLV
3,240
Couldn't reach
{ "login": "pandya6988", "id": 81331791, "node_id": "MDQ6VXNlcjgxMzMxNzkx", "avatar_url": "https://avatars.githubusercontent.com/u/81331791?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pandya6988", "html_url": "https://github.com/pandya6988", "followers_url": "https://api.github.com/users/pandya6988/followers", "following_url": "https://api.github.com/users/pandya6988/following{/other_user}", "gists_url": "https://api.github.com/users/pandya6988/gists{/gist_id}", "starred_url": "https://api.github.com/users/pandya6988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pandya6988/subscriptions", "organizations_url": "https://api.github.com/users/pandya6988/orgs", "repos_url": "https://api.github.com/users/pandya6988/repos", "events_url": "https://api.github.com/users/pandya6988/events{/privacy}", "received_events_url": "https://api.github.com/users/pandya6988/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "It looks like the dataset isn't available anymore on appen.com\r\n\r\nThe CSV files appear to still be available at https://www.kaggle.com/landlord/multilingual-disaster-response-messages though. It says that the data are under the CC0 license so I guess we can host the dataset elsewhere instead ?" ]
1,636,450,002,000
1,636,474,048,000
null
NONE
null
null
## Describe the bug The following command gives a ConnectionError. ## Steps to reproduce the bug ```python disaster = load_dataset('disaster_response_messages') ``` ## Error ``` ConnectionError: Couldn't reach https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv ``` ## Expected results It should load the dataset without an error. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Google Colab - Python version: 3.7 - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3240/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/3239
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3239/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3239/comments
https://api.github.com/repos/huggingface/datasets/issues/3239/events
https://github.com/huggingface/datasets/issues/3239
1,048,360,232
I_kwDODunzps4-fLUo
3,239
Inconsistent performance of the "arabic_billion_words" dataset
{ "login": "vitalyshalumov", "id": 33824221, "node_id": "MDQ6VXNlcjMzODI0MjIx", "avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vitalyshalumov", "html_url": "https://github.com/vitalyshalumov", "followers_url": "https://api.github.com/users/vitalyshalumov/followers", "following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}", "gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}", "starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions", "organizations_url": "https://api.github.com/users/vitalyshalumov/orgs", "repos_url": "https://api.github.com/users/vitalyshalumov/repos", "events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}", "received_events_url": "https://api.github.com/users/vitalyshalumov/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,636,449,060,000
1,636,449,060,000
null
NONE
null
null
## Describe the bug When downloaded from macine 1 the dataset is downloaded and parsed correctly. When downloaded from machine two (which has a different cache directory), the following script: import datasets from datasets import load_dataset raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train",download_mode='force_redownload') gives the following error: **Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17... Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 348M/348M [00:24<00:00, 14.0MB/s] Traceback (most recent call last): File ".../why_mismatch.py", line 3, in <module> File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 709, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words')}]** Note that the package versions of datasets (1.15.1) and rarfile (4.0) are identical. ## Steps to reproduce the bug import datasets from datasets import load_dataset raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train",download_mode='force_redownload') # Sample code to reproduce the bug ## Expected results Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17... Downloading: 100%|███████████████████████████| 348M/348M [00:22<00:00, 15.8MB/s] Dataset arabic_billion_words downloaded and prepared to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17. Subsequent calls will reuse this data. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> Machine 1: - `datasets` version: 1.15.1 - Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 4.0.1 Machine 2 (the bugged one) - `datasets` version: 1.15.1 - Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyArrow version: 6.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3239/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/3238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3238/comments
https://api.github.com/repos/huggingface/datasets/issues/3238/events
https://github.com/huggingface/datasets/issues/3238
1,048,226,086
I_kwDODunzps4-eqkm
3,238
Reuters21578 Couldn't reach
{ "login": "TingNLP", "id": 54096137, "node_id": "MDQ6VXNlcjU0MDk2MTM3", "avatar_url": "https://avatars.githubusercontent.com/u/54096137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TingNLP", "html_url": "https://github.com/TingNLP", "followers_url": "https://api.github.com/users/TingNLP/followers", "following_url": "https://api.github.com/users/TingNLP/following{/other_user}", "gists_url": "https://api.github.com/users/TingNLP/gists{/gist_id}", "starred_url": "https://api.github.com/users/TingNLP/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TingNLP/subscriptions", "organizations_url": "https://api.github.com/users/TingNLP/orgs", "repos_url": "https://api.github.com/users/TingNLP/repos", "events_url": "https://api.github.com/users/TingNLP/events{/privacy}", "received_events_url": "https://api.github.com/users/TingNLP/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Hi ! The URL works fine on my side today, could you try again ?", "thank you @lhoestq \r\nit works" ]
1,636,438,136,000
1,636,588,977,000
1,636,588,977,000
NONE
null
null
## Adding a Dataset - **Name:** *Reuters21578* - **Description:** *ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz* - **Data:** *https://huggingface.co/datasets/reuters21578* `from datasets import load_dataset` `dataset = load_dataset("reuters21578", 'ModLewis')` ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz And I tried to request the link as follows: `import requests` `requests.head('https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz')` SSLError: HTTPSConnectionPool(host='kdd.ics.uci.edu', port=443): Max retries exceeded with url: /databases/reuters21578/reuters21578.tar.gz (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),)) This problem is similar to #575. What should I do?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3238/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/3237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3237/comments
https://api.github.com/repos/huggingface/datasets/issues/3237/events
https://github.com/huggingface/datasets/issues/3237
1,048,165,525
I_kwDODunzps4-ebyV
3,237
wikitext description wrong
{ "login": "hongyuanmei", "id": 19693633, "node_id": "MDQ6VXNlcjE5NjkzNjMz", "avatar_url": "https://avatars.githubusercontent.com/u/19693633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hongyuanmei", "html_url": "https://github.com/hongyuanmei", "followers_url": "https://api.github.com/users/hongyuanmei/followers", "following_url": "https://api.github.com/users/hongyuanmei/following{/other_user}", "gists_url": "https://api.github.com/users/hongyuanmei/gists{/gist_id}", "starred_url": "https://api.github.com/users/hongyuanmei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hongyuanmei/subscriptions", "organizations_url": "https://api.github.com/users/hongyuanmei/orgs", "repos_url": "https://api.github.com/users/hongyuanmei/repos", "events_url": "https://api.github.com/users/hongyuanmei/events{/privacy}", "received_events_url": "https://api.github.com/users/hongyuanmei/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @hongyuanmei, thanks for reporting.\r\n\r\nI'm fixing it." ]
1,636,430,812,000
1,636,465,768,000
1,636,465,768,000
NONE
null
null
## Describe the bug Descriptions of the wikitext datasets are wrong. ## Steps to reproduce the bug Please see: https://github.com/huggingface/datasets/blob/f6dcafce996f39b6a4bbe3a9833287346f4a4b68/datasets/wikitext/wikitext.py#L50 ## Expected results The descriptions for raw-v1 and v1 should be switched.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3237/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/3236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3236/comments
https://api.github.com/repos/huggingface/datasets/issues/3236/events
https://github.com/huggingface/datasets/issues/3236
1,048,026,358
I_kwDODunzps4-d5z2
3,236
Loading of datasets changed in #3110 returns no examples
{ "login": "eladsegal", "id": 13485709, "node_id": "MDQ6VXNlcjEzNDg1NzA5", "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eladsegal", "html_url": "https://github.com/eladsegal", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "repos_url": "https://api.github.com/users/eladsegal/repos", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @eladsegal, thanks for reporting.\r\n\r\nI am sorry, but I can't reproduce the bug:\r\n```\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"qasper\")\r\nDownloading: 5.11kB [00:00, ?B/s]\r\nDownloading and preparing dataset qasper/qasper (download: 9.88 MiB, generated: 35.11 MiB, post-processed: Unknown size, total: 44.99 MiB) to .cache\\qasper\\qasper\\0.1.0\\b99154d2a15aa54bfc669f82b2eda715a2e342e81023d39613b0e2920fdb3ad8...\r\nDataset qasper downloaded and prepared to .cache\\qasper\\qasper\\0.1.0\\b99154d2a15aa54bfc669f82b2eda715a2e342e81023d39613b0e2920fdb3ad8. Subsequent calls will reuse this data.\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<?, ?it/s]\r\n\r\nIn [3]: ds\r\nOut[3]:\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'abstract', 'full_text', 'qas'],\r\n num_rows: 888\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'abstract', 'full_text', 'qas'],\r\n num_rows: 281\r\n })\r\n})\r\n``` \r\n\r\nThis makes me suspect that the origin of the problem might be the cache: I didn't have this dataset in my cache, although I guess you already had it, before the code change introduced by #3110.\r\n\r\n@lhoestq might it be possible that the code change introduced by #3110 makes \"inaccessible\" all previously cached TAR-based datasets?\r\n- Before the caching system downloaded and extracted the tar dataset\r\n- Now it only downloads the tar dataset (no extraction is done)", "I can't reproduce either in my environment (macos, python 3.7).\r\n\r\nIn your case it generates zero examples. This can only happen if the extraction of the TAR archive doesn't output the right filenames. Indeed if the `qasper` script can't find the right file to load, it's currently ignored and it returns zero examples. This case was not even considered when #3110 was developed since we considered the file names to be deterministic - and not depend on your environment.\r\n\r\nTherefore here is my hypothesis:\r\n- either the cache is corrupted somehow with an empty TAR archive\r\n- OR I suspect that the issue comes from python 3.8\r\n", "I just tried again on python 3.8 and I was able to reproduce the issue. Let me work on a fix", "Ok I found the issue. It's not related to python 3.8 in itself though. This issue happens because your local installation of `datasets` is outdated compared to the changes to datasets in #3110\r\n\r\nTo fix this you just have to pull the latest changes from `master` :)\r\n\r\nLet me know if that helps !\r\n\r\n--------------\r\n\r\nHere are more details about my investigation:\r\n\r\nIt's possible to reproduce this issue if you use `datasets<=1.15.1` or before b6469baa22c174b3906c631802a7016fedea6780 and if you load the dataset after revision b6469baa22c174b3906c631802a7016fedea6780. This is because `dl_manager.iter_archive` had issues at that time (and it was not used anywhere anyway).\r\n\r\nIn particular it was returning the absolute path to extracted files instead of the relative path of the file inside the archive. This was an issue because `dl_manager.iter_archive` isn't supposed to extract the TAR archive. 
Instead, it iterates over all the files inside the archive, without creating a directory with the extracted content.\r\n\r\nTherefore if you want to use the datasets on `master`, make sure that you have an up-to-date local installation of `datasets` as well, or you may face incompatibilities like this.", "Thanks!\r\nBut what about code that is already using older version of datasets? \r\nThe reason I encountered this issue was that suddenly one of my repos with version 1.12.1 started getting 0 examples.\r\nI handled it by adding `revision` to `load_dataset`, but I guess it would still be an issue for other users who doesn't know this.", "Hi, in 1.12.1 it uses the dataset scripts from that time, not the one on master.\r\n\r\nIt only uses the datasets from master if you installed `datasets` from source, or if the dataset isn't available in your local version (in this case it shows a warning and it loads from master).\r\n", "OK, I understand the issue a bit better now.\r\nI see I wasn't on 1.12.1, but on 1.12.1.dev0 and since it is a dev version it uses master.\r\nSo users that use an old dev version must specify revision or else they'll encounter this problem.\r\n\r\nBTW, when I opened the issue I installed the latest master version with\r\n```\r\npip install git+git://github.com/huggingface/datasets@master#egg=datasets\r\n```\r\nand also used `download_mode=\"force_redownload\"`, and it still returned 0 examples.\r\nNow I deleted all of the cache and ran the code again, and it worked.\r\nI'm not sure what exactly happened here, but looks like it was due to a mix of an unofficial version and its cache.\r\n\r\nThanks again!" ]
1,636,414,186,000
1,636,476,365,000
1,636,476,347,000
CONTRIBUTOR
null
null
## Describe the bug Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples: ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 0 }) validation: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 0 }) }) ``` ## Steps to reproduce the bug Load any of the datasets that were changed in https://github.com/huggingface/datasets/pull/3110: ```python from datasets import load_dataset load_dataset("qasper") # The problem only started with the commit of #3110 load_dataset("qasper", revision="b6469baa22c174b3906c631802a7016fedea6780") ``` ## Expected results ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 888 }) validation: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 281 }) }) ``` Which can be received when specifying revision of the commit before https://github.com/huggingface/datasets/pull/3110: ```python from datasets import load_dataset load_dataset("qasper", revision="acfe2abda1ca79f0ce5c1896aa83b4b78af76b7d") ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.2.dev0 (master) - Python version: 3.8.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3236/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/3235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3235/comments
https://api.github.com/repos/huggingface/datasets/issues/3235/events
https://github.com/huggingface/datasets/pull/3235
1,047,808,263
PR_kwDODunzps4uPr9Z
3,235
Add options to use updated bleurt checkpoints
{ "login": "jaehlee", "id": 11873078, "node_id": "MDQ6VXNlcjExODczMDc4", "avatar_url": "https://avatars.githubusercontent.com/u/11873078?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaehlee", "html_url": "https://github.com/jaehlee", "followers_url": "https://api.github.com/users/jaehlee/followers", "following_url": "https://api.github.com/users/jaehlee/following{/other_user}", "gists_url": "https://api.github.com/users/jaehlee/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaehlee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaehlee/subscriptions", "organizations_url": "https://api.github.com/users/jaehlee/orgs", "repos_url": "https://api.github.com/users/jaehlee/repos", "events_url": "https://api.github.com/users/jaehlee/events{/privacy}", "received_events_url": "https://api.github.com/users/jaehlee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,397,634,000
1,636,725,928,000
1,636,725,928,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3235", "html_url": "https://github.com/huggingface/datasets/pull/3235", "diff_url": "https://github.com/huggingface/datasets/pull/3235.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3235.patch" }
Adds options to use newer recommended checkpoint (as of 2021/10/8) bleurt-20 and its distilled versions. Updated checkpoints are described in https://github.com/google-research/bleurt/blob/master/checkpoints.md#the-recommended-checkpoint-bleurt-20 This change won't affect the default behavior of metrics/bleurt. It only adds option to load newer checkpoints as `datasets.load_metric('bleurt', 'bleurt-20')` `bluert-20` generates scores roughly between 0 and 1, which wasn't the case for the previous checkpoints.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3235/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3234
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3234/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3234/comments
https://api.github.com/repos/huggingface/datasets/issues/3234/events
https://github.com/huggingface/datasets/pull/3234
1,047,634,236
PR_kwDODunzps4uPHRk
3,234
Avoid PyArrow type optimization if it fails
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "That's good to have a way to disable this easily :)\r\nI just find it a bit unfortunate that users would have to experience the error once and then do `DISABLE_PYARROW_TYPES_OPTIMIZATION=1`. Do you know if there's a way to simply fallback on disabling it automatically when it fails ?", "@lhoestq Actually, I agree a fallback makes more sense. The current approach is not very practical indeed and would require a mention in the docs.\r\n", "Replaced the env variable with a fallback!", "Hmm if the fallback automatically happens without the user knowing it, then I don't think we really need to mention it. But if you really wanted to, I think the [Improve performance](https://huggingface.co/docs/datasets/cache.html#improve-performance) section would be a great place for it! ", "Yea I think this could just end up in a note that says that `datasets` automatically picks the most optimized integer precision for your tokenized text data to save you disk space. Maybe later if we have a page on text processing we can add this note, but for now I agree it doesn't fit well into the doc.\r\n\r\nIn particular in the \"Improve performance\" section we mention what users can do to speed up their computations, while this behavior is just some internal feature that users don't have control over anyway." ]
1,636,387,827,000
1,636,545,869,000
1,636,545,868,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3234", "html_url": "https://github.com/huggingface/datasets/pull/3234", "diff_url": "https://github.com/huggingface/datasets/pull/3234.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3234.patch" }
Adds a new variable, `DISABLE_PYARROW_TYPES_OPTIMIZATION`, to `config.py` for easier control of the Arrow type optimization. Fix #2206
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3234/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3233
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3233/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3233/comments
https://api.github.com/repos/huggingface/datasets/issues/3233/events
https://github.com/huggingface/datasets/pull/3233
1,047,474,931
PR_kwDODunzps4uOl9-
3,233
Improve repository structure docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,379,495,000
1,636,452,138,000
1,636,452,137,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3233", "html_url": "https://github.com/huggingface/datasets/pull/3233", "diff_url": "https://github.com/huggingface/datasets/pull/3233.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3233.patch" }
Continuation of the documentation started in https://github.com/huggingface/datasets/pull/3221, taking into account @stevhliu 's comments
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3233/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3232/comments
https://api.github.com/repos/huggingface/datasets/issues/3232/events
https://github.com/huggingface/datasets/issues/3232
1,047,361,573
I_kwDODunzps4-bXgl
3,232
The Xsum datasets seems not able to download.
{ "login": "FYYFU", "id": 37999885, "node_id": "MDQ6VXNlcjM3OTk5ODg1", "avatar_url": "https://avatars.githubusercontent.com/u/37999885?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FYYFU", "html_url": "https://github.com/FYYFU", "followers_url": "https://api.github.com/users/FYYFU/followers", "following_url": "https://api.github.com/users/FYYFU/following{/other_user}", "gists_url": "https://api.github.com/users/FYYFU/gists{/gist_id}", "starred_url": "https://api.github.com/users/FYYFU/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FYYFU/subscriptions", "organizations_url": "https://api.github.com/users/FYYFU/orgs", "repos_url": "https://api.github.com/users/FYYFU/repos", "events_url": "https://api.github.com/users/FYYFU/events{/privacy}", "received_events_url": "https://api.github.com/users/FYYFU/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! On my side the URL is working fine, could you try again ?", "> Hi ! On my side the URL is working fine, could you try again ?\r\n\r\nI try it again and cannot download the file (might because of my location). Could you please provide another download link(such as google drive)? :>", "I don't know other download links - this is the one provided by the authors of the dataset. Maybe you can try downloading from another location ? There are several solutions: a VPN, a remote VM or Google Colab for example.", "> I don't know other download links - this is the one provided by the authors of the dataset. Maybe you can try downloading from another location ? There are several solutions: a VPN, a remote VM or Google Colab for example.\r\n\r\n:> ok. Thanks for your reply." ]
1,636,372,734,000
1,636,470,436,000
1,636,470,436,000
NONE
null
null
## Describe the bug The download Link of the Xsum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It seems not able to download. ## Steps to reproduce the bug ```python load_dataset('xsum') ``` ## Actual results ``` python raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3232/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/3231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3231/comments
https://api.github.com/repos/huggingface/datasets/issues/3231/events
https://github.com/huggingface/datasets/pull/3231
1,047,170,906
PR_kwDODunzps4uNmWT
3,231
Group tests in multiprocessing workers by test file
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,361,163,000
1,636,377,558,000
1,636,361,984,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3231", "html_url": "https://github.com/huggingface/datasets/pull/3231", "diff_url": "https://github.com/huggingface/datasets/pull/3231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3231.patch" }
By grouping tests by test file, we make sure that all the tests in `test_load.py` are sent to the same worker. Therefore, the fixture `hf_token` will be called only once (and from the same worker). Related to: #3200. Fix #3219.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3231/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3230/comments
https://api.github.com/repos/huggingface/datasets/issues/3230/events
https://github.com/huggingface/datasets/pull/3230
1,047,135,583
PR_kwDODunzps4uNfEd
3,230
Add full tagset to conll2003 README
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I also added the missing `pretty_name` tag in the dataset card to fix the CI" ]
1,636,358,764,000
1,636,454,918,000
1,636,454,458,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3230", "html_url": "https://github.com/huggingface/datasets/pull/3230", "diff_url": "https://github.com/huggingface/datasets/pull/3230.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3230.patch" }
Even though it is possible to manually get the tagset list with ```python dset.features[field_name].feature.names ``` I think it is useful to have an overview of the used tagset on the dataset card. This is particularly useful in light of the **dataset viewer**: the tags are encoded, so it is not immediately obvious what they are for a given sample. Adding a label-int mapping should make it easier for visitors to get a grasp of what they mean. From user-experience perspective, I would urge the full tagsets to always be available in the README's but I understand that that would take a lot of work, probably. Perhaps it can be automated? closes #3189
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3230/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3230/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3229/comments
https://api.github.com/repos/huggingface/datasets/issues/3229/events
https://github.com/huggingface/datasets/pull/3229
1,046,706,425
PR_kwDODunzps4uMKsx
3,229
Fix URL in CITATION file
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,279,475,000
1,636,279,486,000
1,636,279,485,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3229", "html_url": "https://github.com/huggingface/datasets/pull/3229", "diff_url": "https://github.com/huggingface/datasets/pull/3229.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3229.patch" }
Currently the BibTeX citation parsed from the CITATION file has wrong URL (it shows the repo URL instead of the proceedings paper URL): ``` @inproceedings{Lhoest_Datasets_A_Community_2021, author = {Lhoest, Quentin and Villanova del Moral, Albert and von Platen, Patrick and Wolf, Thomas and Šaško, Mario and Jernite, Yacine and Thakur, Abhishek and Tunstall, Lewis and Patil, Suraj and Drame, Mariama and Chaumond, Julien and Plu, Julien and Davison, Joe and Brandeis, Simon and Sanh, Victor and Le Scao, Teven and Canwen Xu, Kevin and Patry, Nicolas and Liu, Steven and McMillan-Major, Angelina and Schmid, Philipp and Gugger, Sylvain and Raw, Nathan and Lesage, Sylvain and Lozhkov, Anton and Carrigan, Matthew and Matussière, Théo and von Werra, Leandro and Debut, Lysandre and Bekman, Stas and Delangue, Clément}, booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, month = {11}, pages = {175--184}, publisher = {Association for Computational Linguistics}, title = {{Datasets: A Community Library for Natural Language Processing}}, url = {https://github.com/huggingface/datasets}, year = {2021} } ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3229/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3228
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3228/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3228/comments
https://api.github.com/repos/huggingface/datasets/issues/3228/events
https://github.com/huggingface/datasets/pull/3228
1,046,702,143
PR_kwDODunzps4uMJ58
3,228
Add CITATION file
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,278,019,000
1,636,278,707,000
1,636,278,706,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3228", "html_url": "https://github.com/huggingface/datasets/pull/3228", "diff_url": "https://github.com/huggingface/datasets/pull/3228.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3228.patch" }
Add CITATION file.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3228/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3228/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3227
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3227/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3227/comments
https://api.github.com/repos/huggingface/datasets/issues/3227/events
https://github.com/huggingface/datasets/issues/3227
1,046,667,845
I_kwDODunzps4-YuJF
3,227
Error in `Json(datasets.ArrowBasedBuilder)` class
{ "login": "JunShern", "id": 7796965, "node_id": "MDQ6VXNlcjc3OTY5NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/7796965?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JunShern", "html_url": "https://github.com/JunShern", "followers_url": "https://api.github.com/users/JunShern/followers", "following_url": "https://api.github.com/users/JunShern/following{/other_user}", "gists_url": "https://api.github.com/users/JunShern/gists{/gist_id}", "starred_url": "https://api.github.com/users/JunShern/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JunShern/subscriptions", "organizations_url": "https://api.github.com/users/JunShern/orgs", "repos_url": "https://api.github.com/users/JunShern/repos", "events_url": "https://api.github.com/users/JunShern/events{/privacy}", "received_events_url": "https://api.github.com/users/JunShern/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I have additionally identified the source of the error, being that [this condition](https://github.com/huggingface/datasets/blob/fc46bba66ba4f432cc10501c16a677112e13984c/src/datasets/packaged_modules/json/json.py#L124-L126) in the file\r\n`python3.8/site-packages/datasets/packaged_modules/json/json.py` is not being entered correctly:\r\n```python\r\n if (\r\n isinstance(e, pa.ArrowInvalid)\r\n and \"straddling\" not in str(e)\r\n or block_size > len(batch)\r\n ):\r\n```\r\n\r\nFrom what I can tell, in my case the block_size simply needs to be increased, but the error message does not contain \"straddling\" so the condition does trigger correctly and we fail to reach [the line to increase block_size](https://github.com/huggingface/datasets/blob/fc46bba66ba4f432cc10501c16a677112e13984c/src/datasets/packaged_modules/json/json.py#L135).\r\n\r\nChanging the condition above to simply\r\n```python\r\n if (\r\n block_size > len(batch)\r\n ):\r\n```\r\n\r\nFixes the error for me. I'm happy to create a PR containing this fix if the developers deem the other conditions unnecessary.", "Hi ! I think the issue comes from the fact that your JSON file is not a valid JSON Lines file.\r\nEach example should be on one single line.\r\n\r\nCan you try fixing the format to have one line per example and try again ?", ":open_mouth: you're right, that did it! I just put everything on a single line (my file only has a single example) and that fixed the error. Thank you so much!" ]
1,636,264,232,000
1,636,484,955,000
1,636,484,955,000
NONE
null
null
## Describe the bug When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails. ## Steps to reproduce the bug Create a folder that contains the following: ``` . ├── testdata │   └── mydata.json └── test.py ``` Please download [this file](https://github.com/huggingface/datasets/files/7491797/mydata.txt) as `mydata.json`. (The error does not occur in JSON files with shorter text, but it is reproducible when the text is long as in the file I provide) :exclamation: :exclamation: GitHub doesn't allow me to upload JSON so this file is a TXT, and you should rename it to `.json`! `test.py` simply contains: ```python from datasets import load_dataset my_dataset = load_dataset("testdata") ``` To reproduce the error, simply run ``` python test.py ``` ## Expected results The data should load correctly without error. ## Actual results The dataset builder fails with: ``` Using custom data configuration testdata-d490389b8ab4fd82 Downloading and preparing dataset json/testdata to /home/junshern.chan/.cache/huggingface/datasets/json/testdata-d490389b8ab4fd82/0.0.0/3333a8af0db9764dfcff43a42ff26228f0f2e267f0d8a0a294452d188beadb34... 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2264.74it/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 447.01it/s] Failed to read file '/home/junshern.chan/hf-json-bug/testdata/mydata.json' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Missing a name for object member. in row 0 Traceback (most recent call last): File "test.py", line 28, in <module> my_dataset = load_dataset("testdata") File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 1156, in _prepare_split for key, table in utils.tqdm( File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/tqdm/std.py", line 1168, in __iter__ for obj in iterable: File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 146, in _generate_tables raise ValueError( ValueError: Not able to read records in the JSON file at /home/junshern.chan/hf-json-bug/testdata/mydata.json. You should probably indicate the field of the JSON file containing your records. This JSON file contain the following fields: ['text']. Select the correct one and provide it as `field='XXX'` to the dataset loading method. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 6.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3227/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/3226
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3226/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3226/comments
https://api.github.com/repos/huggingface/datasets/issues/3226/events
https://github.com/huggingface/datasets/pull/3226
1,046,584,518
PR_kwDODunzps4uL0ma
3,226
Fix paper BibTeX citation with proceedings reference
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,228,379,000
1,636,268,728,000
1,636,268,727,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3226", "html_url": "https://github.com/huggingface/datasets/pull/3226", "diff_url": "https://github.com/huggingface/datasets/pull/3226.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3226.patch" }
Fix paper BibTeX citation with proceedings reference.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3226/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3225/comments
https://api.github.com/repos/huggingface/datasets/issues/3225/events
https://github.com/huggingface/datasets/pull/3225
1,046,530,493
PR_kwDODunzps4uLrB3
3,225
Update tatoeba to v2021-07-22
{ "login": "KoichiYasuoka", "id": 15098598, "node_id": "MDQ6VXNlcjE1MDk4NTk4", "avatar_url": "https://avatars.githubusercontent.com/u/15098598?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KoichiYasuoka", "html_url": "https://github.com/KoichiYasuoka", "followers_url": "https://api.github.com/users/KoichiYasuoka/followers", "following_url": "https://api.github.com/users/KoichiYasuoka/following{/other_user}", "gists_url": "https://api.github.com/users/KoichiYasuoka/gists{/gist_id}", "starred_url": "https://api.github.com/users/KoichiYasuoka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KoichiYasuoka/subscriptions", "organizations_url": "https://api.github.com/users/KoichiYasuoka/orgs", "repos_url": "https://api.github.com/users/KoichiYasuoka/repos", "events_url": "https://api.github.com/users/KoichiYasuoka/events{/privacy}", "received_events_url": "https://api.github.com/users/KoichiYasuoka/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "How about this? @lhoestq @abhishekkrthakur ", "Hi ! I think it would be nice if people could still be able to load the old version.\r\nMaybe this can be a parameter ? For example to load the old version they could do\r\n```python\r\nload_dataset(\"tatoeba\", lang1=\"en\", lang2=\"mr\", date=\"v2020-11-09\")\r\n```\r\n\r\nIf it sounds good to you, we can add this parameter to the TatoebaConfig:\r\n```python\r\nclass TatoebaConfig(datasets.BuilderConfig):\r\n def __init__(self, *args, lang1=None, lang2=None, date=\"v2021-07-22\", **kwargs):\r\n self.date = date\r\n```\r\nand then pass the date to the URL\r\n```python\r\n_BASE_URL = \"https://object.pouta.csc.fi/OPUS-Tatoeba/{}/moses/{}-{}.txt.zip\"\r\n```\r\n```python\r\n def _base_url(lang1, lang2, date):\r\n return _BASE_URL.format(date, lang1, lang2)\r\n```\r\n\r\nWhat do you think ?", "`_DATE = \"v\" + \"-\".join(s.zfill(2) for s in _VERSION.split(\".\"))` seems rather tricky but works well. How about this? @lhoestq \r\n", "The CI is only failing because of the missing sections in the dataset card, and because of an issue with the CER metric that is unrelated to this PR" ]
1,636,211,671,000
1,636,715,593,000
1,636,715,593,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3225", "html_url": "https://github.com/huggingface/datasets/pull/3225", "diff_url": "https://github.com/huggingface/datasets/pull/3225.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3225.patch" }
Tatoeba's latest version is v2021-07-22
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3225/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3224/comments
https://api.github.com/repos/huggingface/datasets/issues/3224/events
https://github.com/huggingface/datasets/pull/3224
1,046,495,831
PR_kwDODunzps4uLk2q
3,224
User-pickling with dynamic sub-classing
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq Feel free to have a look. The implementation is slightly different from what you suggested. I have opted to overwrite `save` instead of meddling with `save_global`. `save_global` is called very late down in dill/pickle so it is hard to control for what is happening there. I might be wrong. Pickling is more complex than I thought! \r\n\r\nThe linked issue (`map` with spaCy) also works now!\r\n\r\n```python\r\nimport pickle\r\nimport spacy\r\nfrom spacy import Language\r\nfrom datasets import load_dataset\r\nfrom datasets.utils.py_utils import dumps, pklregister\r\n\r\n@pklregister(Language, allow_subclasses=True)\r\ndef hash_spacy_language(pickler, nlp: Language):\r\n pickler.save(nlp.to_bytes())\r\n\r\ndef main():\r\n fin = r\"large/file.txt\"\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n\r\n def tokenize(l):\r\n return {\"tok\": [t.text for t in nlp(l[\"text\"])]}\r\n\r\n ds = load_dataset(\"text\", data_files=fin)\r\n ds = ds[\"train\"].map(tokenize)\r\n\r\n # Sanity check: load NLP from pickle created with our own `dumps`\r\n config = nlp.config\r\n lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n nlp2 = lang_cls.from_config(config)\r\n nlp2.from_bytes(pickle.loads(dumps(nlp)))\r\n\r\n assert isinstance(nlp2, type(nlp))\r\n assert dumps(nlp) == dumps(nlp2)\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\nIf this all looks good to you, I'll start writing on some documentation and examples.\r\n", "One more thing. This is a reduction function for SpaCy Language that should work with the new API:\r\n```python\r\n@pklregister(Language, allow_subclasses=True)\r\ndef hash_spacy_language(pickler, obj):\r\n def create_language(config, bytes_data):\r\n lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n nlp = lang_cls.from_config(config)\r\n return nlp.from_bytes(bytes_data)\r\n\r\n args = (obj.config, obj.to_bytes())\r\n pickler.save_reduce(create_language, args, obj=obj)\r\n```\r\nso IMO we are missing a test with `pickler.save_reduce`. ", "> One more thing. This is a reduction function for SpaCy Language that should work with the new API:\r\n> \r\n> ```python\r\n> @pklregister(Language, allow_subclasses=True)\r\n> def hash_spacy_language(pickler, obj):\r\n> def create_language(config, bytes_data):\r\n> lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n> nlp = lang_cls.from_config(config)\r\n> return nlp.from_bytes(bytes_data)\r\n> \r\n> args = (obj.config, obj.to_bytes())\r\n> pickler.save_reduce(create_language, args, obj=obj)\r\n> ```\r\n> \r\n> so IMO we are missing a test with `pickler.save_reduce`.\r\n\r\nSure that seems a good idea, but I do not quite understand what `save_reduce` does. Could you give some more info about what reduce functions do and how they differ from regular `save` and `save_global`? I've read about it but the docs nor the built-in `pickle` code seem really helpful.", "I'm no pickle expect, but here is my understanding. 
I believe the pickler uses the reduce function when you do `loads` to reconstructs the original object from the parameters/arguments that were saved with `dumps`.\r\n\r\nFor example your sanity check could be simplified from\r\n```python\r\n config = nlp.config\r\n lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n nlp2 = lang_cls.from_config(config)\r\n nlp2 = nlp2.from_bytes(pickle.loads(dumps(nlp)))\r\n```\r\nto\r\nEDIT: <s>pickle.loads(pickle.dumps(nlp))</s>\r\n```python\r\n nlp2 = loads(dumps(nlp)) # using our custom pickler\r\n```\r\n\r\nThough note that while it can be convenient for tests, we actually don't care about the reconstruction of the object since we're only using the pickler for `dumps` to compute hashes.", "> I'm no pickle expect, but here is my understanding. I believe the pickler uses the reduce function when you do `loads` to reconstructs the original object from the parameters/arguments that were saved with `dumps`.\r\n> \r\n> For example your sanity check could be simplified from\r\n> \r\n> ```python\r\n> config = nlp.config\r\n> lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n> nlp2 = lang_cls.from_config(config)\r\n> nlp2 = nlp2.from_bytes(pickle.loads(dumps(nlp)))\r\n> ```\r\n> \r\n> to\r\n> \r\n> ```python\r\n> nlp2 = pickle.loads(pickle.dumps(nlp))\r\n> ```\r\n> \r\n> Though note that while it can be convenient for tests, we actually don't care about the reconstruction of the object since we're only using the pickler for `dumps` to compute hashes.\r\n\r\nYes, the sanity check can be simplified like that _if_ we use `pickle.dumps` - but that would not test our own `dumps` functionality and would do a naive dump instead of using `to_bytes`. It won't work if we use our own `dumps`, exactly because of the reason that we want custom pickling and being able to call `to_bytes`. To reconstruct the object from the pickled bytes from `to_bytes` we need `from_bytes`. The result of pickle/dill loads will therefore always be a `bytes` object and not a `Language` object.\r\n\r\nBut `save_reduce` is called when saving, right? Not when loading, AFAICT. I am just not sure what exactly it is saving. It is _potentially_ called [at the end of `save`](https://github.com/python/cpython/blob/24af9a40a8f85af813ea89998aa4e931fcc78cd9/Lib/pickle.py#L603) but only if we haven't returned by then. I just can't figure out what that base case is.", "I don't think we expect users to write the reduce function that isn't going to be used anyway. So maybe let's stick with `save` ?", "@BramVanroy \r\nAs I understand `save_reduce` is very similar to `copyreg.pickle`, so I'd suggest you to check the following links:\r\n* https://docs.python.org/3/library/copyreg.html#copyreg.pickle\r\n* https://docs.python.org/3/library/pickle.html#object.__reduce__\r\n\r\n\r\n@lhoestq \r\n> I don't think we expect users to write the reduce function that isn't going to be used anyway. So maybe let's stick with save ?\r\n\r\nI agree. \r\n\r\n`save_reduce` is very similar to `copyreg.pickle` and `object.__reduce__`, which are part of public API (and `save` isn't), so I expect more advanced users to know how to write their own reduction functions. 
But, as you say, `pklregister` should also work with `save` (even though I think `save` is a bit lower-level, and harder to understand than `save_reduce`).\r\n\r\nAll our examples in `py_utils` that use `pklregister` also use `save_reduce` in the last step, so my reduction for SpaCy is meant to be added there, and not to be written by users (because SpaCy is very popular, so the official support by us makes sense :)).\r\n\r\nAnd in the tests, let's ignore the reconstruction part of pickle/dill, because it's not important for us, and focus on the generated dumps. What do you think?", "@mariosasko What exactly do you mean with \"isn't part of the public API\"? It is [a public method](https://github.com/python/cpython/blob/24af9a40a8f85af813ea89998aa4e931fcc78cd9/Lib/pickle.py#L535) in base pickle, just like `dump` is but maybe you mean something else.", "@BramVanroy Oh sorry, it's public (not prefixed with `\"_\"`) but it's not documented in the docs. `save_reduce` is also not in the docs, but its signature/functionality is similar to `copyreg.pickle` and I see it more often being used in the projects on GH, so it's seems \"more public\" to me. ", "Unfortunately I feel that pickle in general is under-documented. 😄 \r\n\r\nFor the documentation, I can add a brief example, maybe under \"How-to Guides\"? The only thing that isn't immediately obvious to me is how I can add that doc page to the TOC?", "Yes great idea ! To add that doc page to the TOC, you just have to add it to the index.rst file in the \"How-to guides\" TOC section", "@mariosasko @lhoestq Feel free to make any edits or suggestions in the text!", "Hi @mariosasko. I wish you'd told me sooner, as I spent quite some time writing on this.\r\n\r\nI'm also not sure whether it is too advanced to have in the documentation. The spaCy use-case seems potentially frequent. Or do you wish to add that case to the defaults, and whenever new issues come up that seem like frequent/obvious cases, add those internally as well?" ]
1,636,200,504,000
1,636,738,664,000
null
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3224", "html_url": "https://github.com/huggingface/datasets/pull/3224", "diff_url": "https://github.com/huggingface/datasets/pull/3224.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3224.patch" }
This is a continuation of the now closed PR in https://github.com/huggingface/datasets/pull/3206. The discussion there has shaped a new approach to do this. In this PR, behavior of `pklregister` and `Pickler` is extended. Earlier, users were already able to register custom pickle functions. That is useful if they have objects that are not easily picklable with default methods. When one registers a custom function to a type, an object of that type will be pickled with the given function by `Pickler` which looks up the type in its `dispatch` table. The downside of this method, and of `pickle` in general, is that it is limited to direct type-matching and does not allow sub-classes. In many, default, cases that is not an issue. But when you are using external libraries where classes (e.g. parsers, models) are sub-classed this is not ideal. ```python from datasets.fingerprint import Hasher from datasets.utils.py_utils import pklregister class BaseParser: pass class EnglishParser(BaseParser): pass @pklregister(BaseParser) def custom_pkl_func(pickler, obj): print(f"Called the custom pickle function for type {type(obj)}!") # do something with the obj and ultimately save with the pickler base = BaseParser() en = EnglishParser() # Hasher.hash uses the Pickler behind the scenes # `custom_pkl_func` called for base Hasher.hash(base) # `custom_pkl_func` not called for en :-( Hasher.hash(en) ``` In the example above we'd want to sub-class `EnglishParser` to be handled in the same way as its super-class `BaseParser`. This PR solves that by allowing for a keyword-argument `allow_subclasses` in `pklregister` (default: `False`). ```python @pklregister(BaseParser, allow_subclasses=True) ``` When this option is enabled, we not only save the function in `Pickler.dispatch` but also save it in a custom table `Pickler.subclass_dispatch` **which allows us to dynamically add sub-classes of that class to the real dispatch table**. Then, if we want to pickle an object `obj` with `Pickler.dump()` (which ultimately will call `Pickler.save()`) we _first_ check whether any of the object's super-classes exist in `Pickler.sublcass_dispatch` and get the related custom pickle function. If we find one, we add the type of `obj` alongside the function to `Pickler.dispatch`. All of this happens at the start of the call to `Pickler.save()`. _Only then_ dill.Pickler's `save` will be called, which in turn will call `pickle._Pickler.save` which handles everything. Here, the `Pickler.dispatch` table will be used to look up custom pickler functions - and it now also includes the function for `obj`, which was copied from its super-class, which we added at the very start of our custom `Pickler.save()`. For edge cases and, especially, for testing, a contextmanager class `TempPickleRegistry` is included that resets the pickle registry on exit to its previous state. ```python with TempPickleRegistry(): @pklregister(MyObjClass) def pickle_registry_test_false(pickler, obj): pickler.save(obj.fancy_method()) some_obj = MyObjClass() dumps(some_obj) # `MyObjClass` is in Pickler.dispatch # ... `MyObjClass` is _not_ in Pickler.dispatch anymore ``` closes https://github.com/huggingface/datasets/issues/3178 To Do ==== - [x] Write tests - [ ] Write documentation/examples?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3224/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3223
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3223/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3223/comments
https://api.github.com/repos/huggingface/datasets/issues/3223/events
https://github.com/huggingface/datasets/pull/3223
1,046,445,507
PR_kwDODunzps4uLb1E
3,223
Update BibTeX entry
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,636,180,912,000
1,636,182,398,000
1,636,182,398,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3223", "html_url": "https://github.com/huggingface/datasets/pull/3223", "diff_url": "https://github.com/huggingface/datasets/pull/3223.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3223.patch" }
Update BibTeX entry.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3223/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/3222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3222/comments
https://api.github.com/repos/huggingface/datasets/issues/3222/events
https://github.com/huggingface/datasets/pull/3222
1,046,299,725
PR_kwDODunzps4uK_uG
3,222
Add docs for audio processing
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "Nice ! love it this way. I guess you can set this PR to \"ready for review\" ?" ]
1,636,153,679,000
1,636,728,820,000
null
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3222", "html_url": "https://github.com/huggingface/datasets/pull/3222", "diff_url": "https://github.com/huggingface/datasets/pull/3222.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3222.patch" }
This PR adds documentation for the `Audio` feature. It describes: - The difference between loading `path` and `audio`, as well as use-cases/best practices for each of them. - Resampling audio files with `cast_column`, and then calling `ds[0]["audio"]` to automatically decode and resample to the desired sampling rate. - Resampling with `map`. Preview [here](https://52969-250213286-gh.circle-artifacts.com/0/docs/_build/html/audio_process.html), let me know if I'm missing anything!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3222/timeline
null
true
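The records above are GitHub issue and pull-request entries from the `huggingface/datasets` repository. As a minimal, hypothetical sketch of how a local JSON Lines export of such records could be inspected with the `datasets` library itself (the file name `github-issues.jsonl` below is a placeholder assumption, not a path taken from this page):

```python
from datasets import load_dataset

# "github-issues.jsonl" is a placeholder for a local JSON Lines export of
# records like the ones shown above; point it at wherever the file lives.
issues = load_dataset("json", data_files="github-issues.jsonl", split="train")

print(issues)     # number of records and the columns inferred from the JSON
print(issues[0])  # first record as a plain Python dict
```

Filtering or transforming specific fields would then follow the usual `Dataset.filter` / `Dataset.map` patterns that appear throughout the issue threads above.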