| Column | Type | Values |
|---|---|---|
| url | string | lengths 61 to 61 |
| repository_url | string | 1 distinct value |
| labels_url | string | lengths 75 to 75 |
| comments_url | string | lengths 70 to 70 |
| events_url | string | lengths 68 to 68 |
| html_url | string | lengths 49 to 51 |
| id | int64 | 1.68B to 1.88B |
| node_id | string | lengths 18 to 19 |
| number | int64 | 5.79k to 6.2k |
| title | string | lengths 1 to 280 |
| user | dict | |
| labels | list | |
| state | string | 2 distinct values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | int64 | 0 to 44 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | string | 3 distinct values |
| active_lock_reason | null | |
| body | string | lengths 3 to 17.6k |
| reactions | dict | |
| timeline_url | string | lengths 70 to 70 |
| performed_via_github_app | null | |
| state_reason | string | 3 distinct values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/5999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5999/comments
https://api.github.com/repos/huggingface/datasets/issues/5999/events
https://github.com/huggingface/datasets/issues/5999
1,781,851,513
I_kwDODunzps5qNOV5
5,999
Getting a 409 error while loading xglue dataset
{ "login": "Praful932", "id": 45713796, "node_id": "MDQ6VXNlcjQ1NzEzNzk2", "avatar_url": "https://avatars.githubusercontent.com/u/45713796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Praful932", "html_url": "https://github.com/Praful932", "followers_url": "https://api.github.com/users/Praful932/followers", "following_url": "https://api.github.com/users/Praful932/following{/other_user}", "gists_url": "https://api.github.com/users/Praful932/gists{/gist_id}", "starred_url": "https://api.github.com/users/Praful932/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Praful932/subscriptions", "organizations_url": "https://api.github.com/users/Praful932/orgs", "repos_url": "https://api.github.com/users/Praful932/repos", "events_url": "https://api.github.com/users/Praful932/events{/privacy}", "received_events_url": "https://api.github.com/users/Praful932/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2023-06-30T04:13:54
2023-06-30T05:57:23
2023-06-30T05:57:22
NONE
null
### Describe the bug Unable to load xglue dataset ### Steps to reproduce the bug ```python import datasets dataset = datasets.load_dataset("xglue", "ntg") ``` > ConnectionError: Couldn't reach https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz (error 409) ### Expected behavior Expected the dataset to load ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5999/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5998
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5998/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5998/comments
https://api.github.com/repos/huggingface/datasets/issues/5998/events
https://github.com/huggingface/datasets/issues/5998
1,781,805,018
I_kwDODunzps5qNC_a
5,998
The current implementation has a potential bug in the sort method
{ "login": "wangyuxinwhy", "id": 22192665, "node_id": "MDQ6VXNlcjIyMTkyNjY1", "avatar_url": "https://avatars.githubusercontent.com/u/22192665?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wangyuxinwhy", "html_url": "https://github.com/wangyuxinwhy", "followers_url": "https://api.github.com/users/wangyuxinwhy/followers", "following_url": "https://api.github.com/users/wangyuxinwhy/following{/other_user}", "gists_url": "https://api.github.com/users/wangyuxinwhy/gists{/gist_id}", "starred_url": "https://api.github.com/users/wangyuxinwhy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wangyuxinwhy/subscriptions", "organizations_url": "https://api.github.com/users/wangyuxinwhy/orgs", "repos_url": "https://api.github.com/users/wangyuxinwhy/repos", "events_url": "https://api.github.com/users/wangyuxinwhy/events{/privacy}", "received_events_url": "https://api.github.com/users/wangyuxinwhy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-06-30T03:16:57
2023-06-30T14:21:03
2023-06-30T14:11:25
NONE
null
### Describe the bug In the sort method, here's a piece of code ```python # column_names: Union[str, Sequence[str]] # Check proper format of and for duplicates in column_names if not isinstance(column_names, list): column_names = [column_names] ``` When I pass in a tuple, which the column_names type annotation implies is allowed, it raises an error, as in the example below. ```python from datasets import load_dataset dataset = load_dataset('glue', 'ax')['test'] dataset.sort(column_names=('premise', 'hypothesis')) # Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset. ``` Of course, after I modified the tuple into a list, everything worked fine. Changing the code to the following would avoid the problem: ```python # Check proper format of and for duplicates in column_names if not isinstance(column_names, list): if isinstance(column_names, str): column_names = [column_names] else: column_names = list(column_names) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('glue', 'ax')['test'] dataset.sort(column_names=('premise', 'hypothesis')) # Raise ValueError: Column '('premise', 'hypothesis')' not found in the dataset. ``` ### Expected behavior Passing a tuple into column_names should be equivalent to passing a list ### Environment info - `datasets` version: 2.13.0 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5998/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5997/comments
https://api.github.com/repos/huggingface/datasets/issues/5997/events
https://github.com/huggingface/datasets/issues/5997
1,781,582,818
I_kwDODunzps5qMMvi
5,997
extend the map function so it can wrap around long text that does not fit in the context window
{ "login": "siddhsql", "id": 127623723, "node_id": "U_kgDOB5tiKw", "avatar_url": "https://avatars.githubusercontent.com/u/127623723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/siddhsql", "html_url": "https://github.com/siddhsql", "followers_url": "https://api.github.com/users/siddhsql/followers", "following_url": "https://api.github.com/users/siddhsql/following{/other_user}", "gists_url": "https://api.github.com/users/siddhsql/gists{/gist_id}", "starred_url": "https://api.github.com/users/siddhsql/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/siddhsql/subscriptions", "organizations_url": "https://api.github.com/users/siddhsql/orgs", "repos_url": "https://api.github.com/users/siddhsql/repos", "events_url": "https://api.github.com/users/siddhsql/events{/privacy}", "received_events_url": "https://api.github.com/users/siddhsql/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
2
2023-06-29T22:15:21
2023-07-03T17:58:52
null
NONE
null
### Feature request I understand `dataset` provides a [`map`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2849) function. This function in turn takes in a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a model's context window. In this case it would be useful to wrap the text around into multiple rows, with each row fitting the model's context window. I tried to do it using this code as an example, which in turn I have borrowed from [here](https://stackoverflow.com/a/76343993/147530): ``` data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True) ``` but running the code gives me this error: ``` File "/llm/fine-tune.py", line 117, in <module> data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True) File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 580, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 545, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3087, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3480, in _map_single writer.write_batch(batch) File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_writer.py", line 556, in write_batch pa_table = pa.Table.from_arrays(arrays, schema=schema) File "pyarrow/table.pxi", line 3798, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 2962, in pyarrow.lib.Table.validate File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 394 but got length 447 ``` The lambda function I have provided is correctly chopping up long text so it wraps around (and because of this 394 samples become 447 after wrap-around), but the dataset `map` function does not like it. ### Motivation Please see above. ### Your contribution I'm afraid I don't have much knowledge to help
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5997/timeline
null
null
null
null
false
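Regarding the length-mismatch error in issue 5997 above: a common workaround, not stated in the report and offered here only as a hedged sketch, is to drop the original input columns in the batched `map` call so the tokenizer output is free to contain more rows than the input batch. The file name and checkpoint below are placeholders.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
data = load_dataset("text", data_files="lines.txt")  # placeholder file

def chunk(samples):
    # With return_overflowing_tokens=True a batch of N texts can produce more than
    # N rows, so every output column must share the new length; dropping the original
    # columns via remove_columns avoids the length mismatch pyarrow complains about.
    return tokenizer(
        samples["text"],
        max_length=tokenizer.model_max_length,
        truncation=True,
        stride=4,
        return_overflowing_tokens=True,
    )

data = data.map(chunk, batched=True, remove_columns=data["train"].column_names)
```

With a fast tokenizer, the returned `overflow_to_sample_mapping` column maps each wrapped chunk back to its source row.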
https://api.github.com/repos/huggingface/datasets/issues/5996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5996/comments
https://api.github.com/repos/huggingface/datasets/issues/5996/events
https://github.com/huggingface/datasets/pull/5996
1,779,294,374
PR_kwDODunzps5UKP0i
5,996
Deprecate `use_auth_token` in favor of `token`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
9
2023-06-28T16:26:38
2023-07-05T15:22:20
2023-07-03T16:03:33
CONTRIBUTOR
null
... to be consistent with `transformers` and `huggingface_hub`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5996/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5996", "html_url": "https://github.com/huggingface/datasets/pull/5996", "diff_url": "https://github.com/huggingface/datasets/pull/5996.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5996.patch", "merged_at": "2023-07-03T16:03:33" }
true
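PR 5996 above only renames an argument; assuming the deprecation works the same way as in `huggingface_hub`, calling code would move from `use_auth_token` to `token`, for example (dataset id and token are placeholders):

```python
from datasets import load_dataset

# Deprecated spelling (pre-PR): load_dataset("org/private-dataset", use_auth_token="hf_xxx")
# New spelling, consistent with `transformers` and `huggingface_hub`:
ds = load_dataset("org/private-dataset", token="hf_xxx")  # placeholder dataset id and token
```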
https://api.github.com/repos/huggingface/datasets/issues/5995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5995/comments
https://api.github.com/repos/huggingface/datasets/issues/5995/events
https://github.com/huggingface/datasets/pull/5995
1,777,088,925
PR_kwDODunzps5UCvYJ
5,995
Support returning dataframe in map transform
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-27T14:15:08
2023-06-28T13:56:02
2023-06-28T13:46:33
CONTRIBUTOR
null
Allow returning Pandas DataFrames in `map` transforms. (Plus, raise an error in the non-batched mode if a returned PyArrow table/Pandas DataFrame has more than one row)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5995/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5995", "html_url": "https://github.com/huggingface/datasets/pull/5995", "diff_url": "https://github.com/huggingface/datasets/pull/5995.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5995.patch", "merged_at": "2023-06-28T13:46:33" }
true
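A sketch of what PR 5995 above enables, namely a batched `map` transform that returns a `pandas.DataFrame`; this is illustrative only and the exact conversion rules are defined by the PR itself.

```python
import pandas as pd
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})

def add_square(batch):
    # The returned DataFrame is converted back to Arrow; in non-batched mode,
    # a DataFrame with more than one row raises an error per the PR description.
    return pd.DataFrame({"x": batch["x"], "x_squared": [v * v for v in batch["x"]]})

ds = ds.map(add_square, batched=True)
print(ds[0])  # {'x': 1, 'x_squared': 1}
```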
https://api.github.com/repos/huggingface/datasets/issues/5994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5994/comments
https://api.github.com/repos/huggingface/datasets/issues/5994/events
https://github.com/huggingface/datasets/pull/5994
1,776,829,004
PR_kwDODunzps5UB1cA
5,994
Fix select_columns columns order
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-27T12:32:46
2023-06-27T15:40:47
2023-06-27T15:32:43
MEMBER
null
Fix the order of the columns in dataset.features when the order changes with `dataset.select_columns()`. I also fixed the same issue for `dataset.flatten()`. Close https://github.com/huggingface/datasets/issues/5993
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5994/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5994", "html_url": "https://github.com/huggingface/datasets/pull/5994", "diff_url": "https://github.com/huggingface/datasets/pull/5994.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5994.patch", "merged_at": "2023-06-27T15:32:43" }
true
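A minimal sketch of the behaviour PR 5994 above targets, based on the reproduction in the linked issue 5993: after the fix, `dataset.features` should follow the selected column order.

```python
from datasets import Dataset

ds = Dataset.from_dict({"x1": [1, 2, 3], "x2": [10, 11, 12]}).select_columns(["x2", "x1"])
# After the fix the features order matches the data order, so e.g. to_parquet()
# no longer sees mismatched schemas.
print(list(ds.features))  # expected: ['x2', 'x1']
```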
https://api.github.com/repos/huggingface/datasets/issues/5993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5993/comments
https://api.github.com/repos/huggingface/datasets/issues/5993/events
https://github.com/huggingface/datasets/issues/5993
1,776,643,555
I_kwDODunzps5p5W3j
5,993
ValueError: Table schema does not match schema used to create file
{ "login": "exs-avianello", "id": 128361578, "node_id": "U_kgDOB6akag", "avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/exs-avianello", "html_url": "https://github.com/exs-avianello", "followers_url": "https://api.github.com/users/exs-avianello/followers", "following_url": "https://api.github.com/users/exs-avianello/following{/other_user}", "gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}", "starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions", "organizations_url": "https://api.github.com/users/exs-avianello/orgs", "repos_url": "https://api.github.com/users/exs-avianello/repos", "events_url": "https://api.github.com/users/exs-avianello/events{/privacy}", "received_events_url": "https://api.github.com/users/exs-avianello/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
2
2023-06-27T10:54:07
2023-06-27T15:36:42
2023-06-27T15:32:44
NONE
null
### Describe the bug Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order. ### Steps to reproduce the bug ```python import datasets dataset = datasets.Dataset.from_dict( { "x1": [1, 2, 3], "x2": [10, 11, 12], } ) ds = dataset.select_columns(["x2", "x1"]) ds.to_parquet("demo.parquet") ``` ```shell >>> ValueError: Table schema does not match schema used to create file: table: x2: int64 x1: int64 -- schema metadata -- huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53 vs. file: x1: int64 x2: int64 -- schema metadata -- huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53 ``` --- I think this is because after the `.select_columns()` call with out of order columns, the output dataset features' schema ends up being out of sync with the schema of the arrow table backing it. ```python ds.features.arrow_schema >>> x1: int64 x2: int64 -- schema metadata -- huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53 ds.data.schema >>> x2: int64 x1: int64 -- schema metadata -- huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53 ``` So when we call `.to_parquet()`, the call behind the scenes to `datasets.io.parquet.ParquetDatasetWriter(...).write()` which initialises the backend `pyarrow.parquet.ParquetWriter` with `schema = self.dataset.features.arrow_schema` triggers `pyarrow` on write when [it checks](https://github.com/apache/arrow/blob/11b140a734a516e436adaddaeb35d23f30dcce44/python/pyarrow/parquet/core.py#L1086-L1090) that the `ParquetWriter` schema matches the schema of the table being written 🙌 https://github.com/huggingface/datasets/blob/6ed837325cb539a5deb99129e5ad181d0269e050/src/datasets/io/parquet.py#L139-L141 ### Expected behavior The dataset gets successfully saved as parquet. *In the same way as it does if saving it as csv: ```python import datasets dataset = datasets.Dataset.from_dict( { "x1": [1, 2, 3], "x2": [10, 11, 12], } ) ds = dataset.select_columns(["x2", "x1"]) ds.to_csv("demo.csv") ``` ### Environment info `python==3.11` `datasets==2.13.1`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5993/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5993/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5992/comments
https://api.github.com/repos/huggingface/datasets/issues/5992/events
https://github.com/huggingface/datasets/pull/5992
1,776,460,964
PR_kwDODunzps5UAk3C
5,992
speedup
{ "login": "qgallouedec", "id": 45557362, "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qgallouedec", "html_url": "https://github.com/qgallouedec", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "repos_url": "https://api.github.com/users/qgallouedec/repos", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-06-27T09:17:58
2023-06-27T09:23:07
2023-06-27T09:18:04
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5992/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5992", "html_url": "https://github.com/huggingface/datasets/pull/5992", "diff_url": "https://github.com/huggingface/datasets/pull/5992.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5992.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5991/comments
https://api.github.com/repos/huggingface/datasets/issues/5991/events
https://github.com/huggingface/datasets/issues/5991
1,774,456,518
I_kwDODunzps5pxA7G
5,991
`map` with any joblib backend
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
2023-06-26T10:33:42
2023-06-26T10:33:42
null
MEMBER
null
We recently enabled the (experimental) parallel backend switch for data download and extraction but not for `map` yet. Right now we're using our `iflatmap_unordered` implementation for multiprocessing, which uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main process. If we had a Queue implementation that works on any joblib backend by leveraging the filesystem shared among workers, we could have `iflatmap_unordered` for joblib and therefore a `map` with any joblib backend with a progress bar! Note that the Queue doesn't need to be that optimized, though, since we can choose a small frequency for progress updates (like 1 update per second).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5991/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5991/timeline
null
null
null
null
false
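Issue 5991 above only describes the idea in prose. The toy sketch below, which is entirely hypothetical and not part of `datasets`, shows one way a progress queue could be backed by a filesystem shared among joblib workers: each worker publishes small message files atomically, and the main process drains them roughly once per second.

```python
import json
import os
import uuid

class FileQueue:
    """Hypothetical progress queue backed by a directory on a shared filesystem."""

    def __init__(self, path):
        self.path = path
        os.makedirs(path, exist_ok=True)

    def put(self, item):
        # Workers write a temp file and rename it, so readers never see partial writes.
        uid = uuid.uuid4().hex
        tmp = os.path.join(self.path, uid + ".tmp")
        with open(tmp, "w") as f:
            json.dump(item, f)
        os.rename(tmp, os.path.join(self.path, uid + ".msg"))

    def drain(self):
        # Main process: collect and delete all published updates.
        for name in sorted(os.listdir(self.path)):
            if name.endswith(".msg"):
                full = os.path.join(self.path, name)
                with open(full) as f:
                    yield json.load(f)
                os.remove(full)

# Usage sketch: each worker calls queue.put({"num_examples_done": n}); the main
# process drains the queue about once per second to advance the progress bar.
```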
https://api.github.com/repos/huggingface/datasets/issues/5989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5989/comments
https://api.github.com/repos/huggingface/datasets/issues/5989/events
https://github.com/huggingface/datasets/issues/5989
1,774,134,091
I_kwDODunzps5pvyNL
5,989
Set a rule on the config and split names
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
2023-06-26T07:34:14
2023-07-19T14:22:54
null
CONTRIBUTOR
null
> should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols directly in datasets and raise https://github.com/huggingface/datasets-server/issues/853
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5989/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5988/comments
https://api.github.com/repos/huggingface/datasets/issues/5988/events
https://github.com/huggingface/datasets/issues/5988
1,773,257,828
I_kwDODunzps5pscRk
5,988
ConnectionError: Couldn't reach dataset_infos.json
{ "login": "yulingao", "id": 20674868, "node_id": "MDQ6VXNlcjIwNjc0ODY4", "avatar_url": "https://avatars.githubusercontent.com/u/20674868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yulingao", "html_url": "https://github.com/yulingao", "followers_url": "https://api.github.com/users/yulingao/followers", "following_url": "https://api.github.com/users/yulingao/following{/other_user}", "gists_url": "https://api.github.com/users/yulingao/gists{/gist_id}", "starred_url": "https://api.github.com/users/yulingao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yulingao/subscriptions", "organizations_url": "https://api.github.com/users/yulingao/orgs", "repos_url": "https://api.github.com/users/yulingao/repos", "events_url": "https://api.github.com/users/yulingao/events{/privacy}", "received_events_url": "https://api.github.com/users/yulingao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-06-25T12:39:31
2023-07-07T13:20:57
2023-07-07T13:20:57
NONE
null
### Describe the bug I'm trying to load codeparrot/codeparrot-clean-train, but get the following error: ConnectionError: Couldn't reach https://huggingface.co/datasets/codeparrot/codeparrot-clean-train/resolve/main/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')))) ### Steps to reproduce the bug train_data = load_dataset('codeparrot/codeparrot-clean-train', split='train') ### Expected behavior download the dataset ### Environment info centos7
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5988/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5987/comments
https://api.github.com/repos/huggingface/datasets/issues/5987/events
https://github.com/huggingface/datasets/issues/5987
1,773,047,909
I_kwDODunzps5prpBl
5,987
Why max_shard_size is not supported in load_dataset and passed to download_and_prepare
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2023-06-25T04:19:13
2023-06-29T16:06:08
2023-06-29T16:06:08
CONTRIBUTOR
null
### Describe the bug https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809 What I can do is break up the `load_dataset` call and use `load_dataset_builder` + `download_and_prepare` instead. ### Steps to reproduce the bug https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809 ### Expected behavior Users can define the max shard size. ### Environment info datasets==2.13.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5987/timeline
null
completed
null
null
false
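The workaround mentioned in issue 5987 above, spelled out as a sketch (dataset name, config, and size are placeholders):

```python
from datasets import load_dataset_builder

# load_dataset() does not forward max_shard_size, so drive the builder directly.
builder = load_dataset_builder("c4", "en")  # placeholder dataset/config
builder.download_and_prepare(max_shard_size="500MB")
ds = builder.as_dataset(split="train")
```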
https://api.github.com/repos/huggingface/datasets/issues/5986
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5986/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5986/comments
https://api.github.com/repos/huggingface/datasets/issues/5986/events
https://github.com/huggingface/datasets/pull/5986
1,772,233,111
PR_kwDODunzps5TygOZ
5,986
Make IterableDataset.from_spark more efficient
{ "login": "mathewjacob1002", "id": 134338709, "node_id": "U_kgDOCAHYlQ", "avatar_url": "https://avatars.githubusercontent.com/u/134338709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mathewjacob1002", "html_url": "https://github.com/mathewjacob1002", "followers_url": "https://api.github.com/users/mathewjacob1002/followers", "following_url": "https://api.github.com/users/mathewjacob1002/following{/other_user}", "gists_url": "https://api.github.com/users/mathewjacob1002/gists{/gist_id}", "starred_url": "https://api.github.com/users/mathewjacob1002/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mathewjacob1002/subscriptions", "organizations_url": "https://api.github.com/users/mathewjacob1002/orgs", "repos_url": "https://api.github.com/users/mathewjacob1002/repos", "events_url": "https://api.github.com/users/mathewjacob1002/events{/privacy}", "received_events_url": "https://api.github.com/users/mathewjacob1002/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
6
2023-06-23T22:18:20
2023-07-07T10:05:58
2023-07-07T09:56:09
CONTRIBUTOR
null
Moved the code from using collect() to using toLocalIterator, which allows for prefetching partitions that will be selected next, thus allowing for better performance when iterating.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5986/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5986", "html_url": "https://github.com/huggingface/datasets/pull/5986", "diff_url": "https://github.com/huggingface/datasets/pull/5986.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5986.patch", "merged_at": "2023-07-07T09:56:09" }
true
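PR 5986 above describes the change only in prose; a simplified sketch of the switch it describes (not the actual diff) is:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(10_000)  # stand-in DataFrame for illustration

def iter_rows(df):
    # Before the PR: rows = df.collect() materialised every partition on the driver.
    # toLocalIterator streams one partition at a time, and prefetchPartitions=True
    # lets Spark fetch the next partition while the current one is being consumed.
    for row in df.toLocalIterator(prefetchPartitions=True):
        yield row.asDict()

first_row = next(iter_rows(df))
```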
https://api.github.com/repos/huggingface/datasets/issues/5985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5985/comments
https://api.github.com/repos/huggingface/datasets/issues/5985/events
https://github.com/huggingface/datasets/issues/5985
1,771,588,158
I_kwDODunzps5pmEo-
5,985
Cannot reuse tokenizer object for dataset map
{ "login": "vikigenius", "id": 12724810, "node_id": "MDQ6VXNlcjEyNzI0ODEw", "avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vikigenius", "html_url": "https://github.com/vikigenius", "followers_url": "https://api.github.com/users/vikigenius/followers", "following_url": "https://api.github.com/users/vikigenius/following{/other_user}", "gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}", "starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions", "organizations_url": "https://api.github.com/users/vikigenius/orgs", "repos_url": "https://api.github.com/users/vikigenius/repos", "events_url": "https://api.github.com/users/vikigenius/events{/privacy}", "received_events_url": "https://api.github.com/users/vikigenius/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
2
2023-06-23T14:45:31
2023-07-21T14:09:14
2023-07-21T14:09:14
NONE
null
### Describe the bug Related to https://github.com/huggingface/transformers/issues/24441. Not sure if this is a tokenizer issue or a caching issue, so filing in both. Passing the tokenizer to the dataset map function causes the tokenizer to be fingerprinted weirdly. After calling the tokenizer with arguments like padding and truncation, the tokenizer object changes internally, even though the hash remains the same. But dumps is able to detect that internal change, which causes the tokenizer object's fingerprint to change. ### Steps to reproduce the bug ```python from transformers import AutoTokenizer from datasets.utils.py_utils import dumps # Huggingface datasets t = AutoTokenizer.from_pretrained('bert-base-uncased') t.save_pretrained("tok1") th1 = hash(dumps(t)) text = "This is an example text" ttext = t(text, max_length=512, padding="max_length", truncation=True) t.save_pretrained("tok2") th2 = hash(dumps(t)) assert th1 == th2 # Assertion Error ``` But if you use just the hash of the object without dumps, the hashes don't change: ```python from transformers import AutoTokenizer from datasets.utils.py_utils import dumps # Huggingface datasets t = AutoTokenizer.from_pretrained('bert-base-uncased') th1 = hash(t) # Just hash no dumps text = "This is an example text" ttext = t(text, max_length=512, padding="max_length", truncation=True) th2 = hash(t) # Just hash no dumps assert th1 == th2 # This is OK ``` This causes situations such as the following: 1. Create a text file like this `yes "This is an example text" | head -n 10000 > lines.txt` ```python from transformers import AutoTokenizer import datasets class TokenizeMapper(object): """Mapper for tokenizer. This is needed because the caching mechanism of HuggingFace does not work on lambdas. Each time a new lambda will be created by a new process which will lead to a different hash. This way we can have a universal mapper object in init and reuse it with the same hash for each process. """ def __init__(self, tokenizer): """Initialize the tokenizer.""" self.tokenizer = tokenizer def __call__(self, examples, **kwargs): """Run the mapper.""" texts = examples["text"] tt = self.tokenizer(texts, max_length=256, padding="max_length", truncation=True) batch_outputs = { "input_ids": tt.input_ids, "attention_mask": tt.attention_mask, } return batch_outputs t = AutoTokenizer.from_pretrained('bert-base-uncased') mapper = TokenizeMapper(t) ds = datasets.load_dataset("text", data_files="lines.txt") mds1 = ds.map( mapper, batched=False, remove_columns=["text"], ).with_format("torch") mds2 = ds.map( mapper, batched=False, remove_columns=["text"], ).with_format("torch") ``` The second call to map should reuse the cached processed dataset from mds1, but instead it redoes the tokenization because of the behavior of dumps. ### Expected behavior We should be able to initialize a tokenizer, and reusing it should let us reuse the same map computation for the same dataset. The second call to map should reuse the cached processed dataset from mds1, but instead it redoes the tokenization because of the behavior of dumps. ### Environment info - `datasets` version: 2.13.0 - Platform: Linux-6.1.31_1-x86_64-with-glibc2.36 - Python version: 3.9.16 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5985/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5985/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5984/comments
https://api.github.com/repos/huggingface/datasets/issues/5984/events
https://github.com/huggingface/datasets/issues/5984
1,771,571,458
I_kwDODunzps5pmAkC
5,984
AutoSharding IterableDataset's when num_workers > 1
{ "login": "mathephysicist", "id": 25594384, "node_id": "MDQ6VXNlcjI1NTk0Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/25594384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mathephysicist", "html_url": "https://github.com/mathephysicist", "followers_url": "https://api.github.com/users/mathephysicist/followers", "following_url": "https://api.github.com/users/mathephysicist/following{/other_user}", "gists_url": "https://api.github.com/users/mathephysicist/gists{/gist_id}", "starred_url": "https://api.github.com/users/mathephysicist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mathephysicist/subscriptions", "organizations_url": "https://api.github.com/users/mathephysicist/orgs", "repos_url": "https://api.github.com/users/mathephysicist/repos", "events_url": "https://api.github.com/users/mathephysicist/events{/privacy}", "received_events_url": "https://api.github.com/users/mathephysicist/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
6
2023-06-23T14:34:20
2023-07-04T17:03:56
null
NONE
null
### Feature request Minimal Example ``` import torch from datasets import IterableDataset d = IterableDataset.from_file(<file_name>) dl = torch.utils.data.dataloader.DataLoader(d,num_workers=3) for sample in dl: print(sample) ``` Warning: Too many dataloader workers: 2 (max is dataset.n_shards=1). Stopping 1 dataloader workers. To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=1. To enable more parallelism, please split the dataset in more files than 1. Expected Behavior: The dataset is sharded so that each CPU uses a subset (contiguously, so you can do checkpoint loading/saving). ### Motivation I have a lot of unused CPUs and would like to be able to shard iterable datasets with pytorch's dataloader when num_workers > 1. This is for a very large single file. I am aware that we can use `split_dataset_by_node` to ensure that each node (for distributed) gets different shards, but we should extend it so that this also works for multiple workers. ### Your contribution If someone points me to what needs to change, I can create a PR.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5984/timeline
null
null
null
null
false
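Until something like the auto-sharding requested in issue 5984 above exists, one workaround is to expose the data as several generator shards so that `dataset.n_shards > 1` and each DataLoader worker gets its own shards. The sketch below assumes the single large file has been split into pieces; the paths are placeholders.

```python
from datasets import IterableDataset

def generate(shards):
    # Each DataLoader worker is assigned a different subset of `shards`.
    for shard in shards:
        with open(shard) as f:
            for line in f:
                yield {"text": line.rstrip("\n")}

shards = [f"data/shard-{i:05d}.txt" for i in range(8)]  # placeholder paths
ds = IterableDataset.from_generator(generate, gen_kwargs={"shards": shards})
print(ds.n_shards)  # 8, so up to 8 DataLoader workers can be kept busy
```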
https://api.github.com/repos/huggingface/datasets/issues/5983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5983/comments
https://api.github.com/repos/huggingface/datasets/issues/5983/events
https://github.com/huggingface/datasets/pull/5983
1,770,578,804
PR_kwDODunzps5TtDdy
5,983
replaced PathLike as a variable for save_to_disk for dataset_path wit…
{ "login": "benjaminbrown038", "id": 35114142, "node_id": "MDQ6VXNlcjM1MTE0MTQy", "avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminbrown038", "html_url": "https://github.com/benjaminbrown038", "followers_url": "https://api.github.com/users/benjaminbrown038/followers", "following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}", "starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions", "organizations_url": "https://api.github.com/users/benjaminbrown038/orgs", "repos_url": "https://api.github.com/users/benjaminbrown038/repos", "events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}", "received_events_url": "https://api.github.com/users/benjaminbrown038/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2023-06-23T00:57:05
2023-06-23T00:57:05
null
NONE
null
…h str like that of load_from_disk
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5983/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5983", "html_url": "https://github.com/huggingface/datasets/pull/5983", "diff_url": "https://github.com/huggingface/datasets/pull/5983.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5983.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5982/comments
https://api.github.com/repos/huggingface/datasets/issues/5982/events
https://github.com/huggingface/datasets/issues/5982
1,770,333,296
I_kwDODunzps5phSRw
5,982
404 on Datasets Documentation Page
{ "login": "kmulka-bloomberg", "id": 118509387, "node_id": "U_kgDOBxBPSw", "avatar_url": "https://avatars.githubusercontent.com/u/118509387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kmulka-bloomberg", "html_url": "https://github.com/kmulka-bloomberg", "followers_url": "https://api.github.com/users/kmulka-bloomberg/followers", "following_url": "https://api.github.com/users/kmulka-bloomberg/following{/other_user}", "gists_url": "https://api.github.com/users/kmulka-bloomberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/kmulka-bloomberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kmulka-bloomberg/subscriptions", "organizations_url": "https://api.github.com/users/kmulka-bloomberg/orgs", "repos_url": "https://api.github.com/users/kmulka-bloomberg/repos", "events_url": "https://api.github.com/users/kmulka-bloomberg/events{/privacy}", "received_events_url": "https://api.github.com/users/kmulka-bloomberg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-06-22T20:14:57
2023-06-26T15:45:03
2023-06-26T15:45:03
NONE
null
### Describe the bug Getting a 404 from the Hugging Face Datasets docs page: https://huggingface.co/docs/datasets/index ### Steps to reproduce the bug 1. Go to URL https://huggingface.co/docs/datasets/index 2. Notice 404 not found ### Expected behavior URL should either show docs or redirect to new location ### Environment info hugginface.co
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5982/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5982/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5981/comments
https://api.github.com/repos/huggingface/datasets/issues/5981/events
https://github.com/huggingface/datasets/issues/5981
1,770,310,087
I_kwDODunzps5phMnH
5,981
Only two cores are getting used in sagemaker with pytorch 3.10 kernel
{ "login": "mmr-crexi", "id": 107141022, "node_id": "U_kgDOBmLXng", "avatar_url": "https://avatars.githubusercontent.com/u/107141022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mmr-crexi", "html_url": "https://github.com/mmr-crexi", "followers_url": "https://api.github.com/users/mmr-crexi/followers", "following_url": "https://api.github.com/users/mmr-crexi/following{/other_user}", "gists_url": "https://api.github.com/users/mmr-crexi/gists{/gist_id}", "starred_url": "https://api.github.com/users/mmr-crexi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmr-crexi/subscriptions", "organizations_url": "https://api.github.com/users/mmr-crexi/orgs", "repos_url": "https://api.github.com/users/mmr-crexi/repos", "events_url": "https://api.github.com/users/mmr-crexi/events{/privacy}", "received_events_url": "https://api.github.com/users/mmr-crexi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-22T19:57:31
2023-07-24T11:54:52
2023-07-24T11:54:52
NONE
null
### Describe the bug When using the newer pytorch 3.10 kernel, only 2 cores are being used by huggingface filter and map functions. The Pytorch 3.9 kernel would use as many cores as specified in the num_proc field. We have solved this in our own code by placing the following snippet in the code that is called inside subprocesses: ```os.sched_setaffinity(0, {i for i in range(1000)})``` The problem, as near as we can tell, is that once upon a time, cpu affinity was set using a bitmask ("0xfffff" and the like), and affinity recently changed to a list of processors rather than using the mask. As such, only processors 1 and 17 are shown to be working in htop. ![Selection_072](https://github.com/huggingface/datasets/assets/107141022/04c5a824-5321-4531-afca-7bc84dff36b4) When running functions via `map`, the above resetting of affinity works to spread across the cores. When using `filter`, however, only two cores are active. ### Steps to reproduce the bug Repro steps: 1. Create an aws sagemaker instance 2. use the pytorch 3_10 kernel 3. Load a dataset 4. run a filter operation 5. watch as only 2 cores are used when num_proc > 2 6. run a map operation 7. watch as only 2 cores are used when num_proc > 2 8. run a map operation with processor affinity reset inside the function called via map 9. Watch as all cores run ### Expected behavior All specified cores are used via the num_proc argument. ### Environment info AWS sagemaker with the following init script run in the terminal after instance creation: conda init bash bash conda activate pytorch_p310 pip install Wand PyPDF pytesseract datasets seqeval pdfplumber transformers pymupdf sentencepiece timm donut-python accelerate optimum xgboost python -m pip install 'git+https://github.com/facebookresearch/detectron2.git' sudo yum -y install htop sudo yum -y update sudo yum -y install wget libstdc++ autoconf automake libtool autoconf-archive pkg-config gcc gcc-c++ make libjpeg-devel libpng-devel libtiff-devel zlib-devel
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5981/timeline
null
completed
null
null
false
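The affinity reset that issue 5981 above applies, shown inside a function passed to `map` (per the report this helps `map` but not `filter`); a sketch with a placeholder text file, using `os.cpu_count()` instead of the oversized range from the report:

```python
import os

from datasets import load_dataset

def count_chars(example):
    # Undo the narrow CPU affinity the worker process inherited before doing real work.
    os.sched_setaffinity(0, set(range(os.cpu_count())))
    return {"n_chars": len(example["text"])}

ds = load_dataset("text", data_files="lines.txt", split="train")  # placeholder file
ds = ds.map(count_chars, num_proc=8)
```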
https://api.github.com/repos/huggingface/datasets/issues/5980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5980/comments
https://api.github.com/repos/huggingface/datasets/issues/5980/events
https://github.com/huggingface/datasets/issues/5980
1,770,255,973
I_kwDODunzps5pg_Zl
5,980
Viewing dataset card returns “502 Bad Gateway”
{ "login": "tbenthompson", "id": 4241811, "node_id": "MDQ6VXNlcjQyNDE4MTE=", "avatar_url": "https://avatars.githubusercontent.com/u/4241811?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tbenthompson", "html_url": "https://github.com/tbenthompson", "followers_url": "https://api.github.com/users/tbenthompson/followers", "following_url": "https://api.github.com/users/tbenthompson/following{/other_user}", "gists_url": "https://api.github.com/users/tbenthompson/gists{/gist_id}", "starred_url": "https://api.github.com/users/tbenthompson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tbenthompson/subscriptions", "organizations_url": "https://api.github.com/users/tbenthompson/orgs", "repos_url": "https://api.github.com/users/tbenthompson/repos", "events_url": "https://api.github.com/users/tbenthompson/events{/privacy}", "received_events_url": "https://api.github.com/users/tbenthompson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-22T19:14:48
2023-06-27T08:38:19
2023-06-26T14:42:45
NONE
null
The url is: https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams I am able to successfully view the “Files and versions” tab: [Confirm-Labs/pile_ngrams_trigrams at main](https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams/tree/main) Any help would be appreciated! Thanks! I hope this is the right place to report an issue like this.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5980/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5979/comments
https://api.github.com/repos/huggingface/datasets/issues/5979/events
https://github.com/huggingface/datasets/pull/5979
1,770,198,250
PR_kwDODunzps5TrxS_
5,979
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-22T18:32:14
2023-06-22T18:42:22
2023-06-22T18:32:22
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5979/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5979", "html_url": "https://github.com/huggingface/datasets/pull/5979", "diff_url": "https://github.com/huggingface/datasets/pull/5979.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5979.patch", "merged_at": "2023-06-22T18:32:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/5978
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5978/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5978/comments
https://api.github.com/repos/huggingface/datasets/issues/5978/events
https://github.com/huggingface/datasets/pull/5978
1,770,187,053
PR_kwDODunzps5Tru2_
5,978
Release: 2.13.1
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-22T18:23:11
2023-06-22T18:40:24
2023-06-22T18:30:16
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5978/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5978", "html_url": "https://github.com/huggingface/datasets/pull/5978", "diff_url": "https://github.com/huggingface/datasets/pull/5978.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5978.patch", "merged_at": "2023-06-22T18:30:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/5976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5976/comments
https://api.github.com/repos/huggingface/datasets/issues/5976/events
https://github.com/huggingface/datasets/pull/5976
1,768,503,913
PR_kwDODunzps5TmAFp
5,976
Avoid stuck map operation when a subprocess crashes
{ "login": "pappacena", "id": 1213561, "node_id": "MDQ6VXNlcjEyMTM1NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1213561?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pappacena", "html_url": "https://github.com/pappacena", "followers_url": "https://api.github.com/users/pappacena/followers", "following_url": "https://api.github.com/users/pappacena/following{/other_user}", "gists_url": "https://api.github.com/users/pappacena/gists{/gist_id}", "starred_url": "https://api.github.com/users/pappacena/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pappacena/subscriptions", "organizations_url": "https://api.github.com/users/pappacena/orgs", "repos_url": "https://api.github.com/users/pappacena/repos", "events_url": "https://api.github.com/users/pappacena/events{/privacy}", "received_events_url": "https://api.github.com/users/pappacena/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
11
2023-06-21T21:18:31
2023-07-10T09:58:39
2023-07-10T09:50:07
CONTRIBUTOR
null
I've been using Dataset.map() with `num_proc=os.cpu_count()` to leverage multicore processing for my datasets, but from time to time I get stuck processes waiting forever. Apparently, when one of the subprocesses is abruptly killed (OOM killer, segfault, SIGKILL, etc.), the main process keeps waiting for the async task sent to that child process to finish. It seems to be easy to reproduce the issue with the following script: ``` import os from datasets import Dataset, Features, Value def do_stuck(item): os.kill(os.getpid(), 9) data = { "col1": list(range(5)), "col2": list(range(5)), } ds = Dataset.from_dict( data, features=Features({ "col1": Value("int64"), "col2": Value("int64"), }), ) print(ds.map(do_stuck, num_proc=4)) ``` This is long-standing behavior in Python, which apparently was fixed a few years ago in `concurrent.futures.ProcessPoolExecutor` ([ref](https://bugs.python.org/issue9205)), but not in `multiprocessing.pool.Pool` / `multiprocess.pool.Pool`, which is used by `Dataset.map` ([ref](https://bugs.python.org/issue22393)). This PR is an idea to try to detect when a child process gets killed and raise a `RuntimeError` to warn the `Dataset.map()` caller. EDIT: Related proposal for future improvement: https://github.com/huggingface/datasets/discussions/5977
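As a point of comparison (not part of this PR's implementation), here is a small sketch of the `concurrent.futures` behavior the description refers to: a worker killed mid-task surfaces as a `BrokenProcessPool` error instead of hanging the parent process.

```python
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def crash(_):
    # Simulate an abrupt worker death (OOM killer, segfault, SIGKILL, ...).
    os.kill(os.getpid(), 9)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as pool:
        try:
            list(pool.map(crash, range(4)))
        except BrokenProcessPool as err:
            print(f"Worker died; parent was notified instead of hanging: {err}")
```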
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5976/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5976/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5976", "html_url": "https://github.com/huggingface/datasets/pull/5976", "diff_url": "https://github.com/huggingface/datasets/pull/5976.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5976.patch", "merged_at": "2023-07-10T09:50:07" }
true
https://api.github.com/repos/huggingface/datasets/issues/5975
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5975/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5975/comments
https://api.github.com/repos/huggingface/datasets/issues/5975/events
https://github.com/huggingface/datasets/issues/5975
1,768,271,343
I_kwDODunzps5pZa3v
5,975
Streaming Dataset behind Proxy - FileNotFoundError
{ "login": "Veluchs", "id": 135350576, "node_id": "U_kgDOCBFJMA", "avatar_url": "https://avatars.githubusercontent.com/u/135350576?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Veluchs", "html_url": "https://github.com/Veluchs", "followers_url": "https://api.github.com/users/Veluchs/followers", "following_url": "https://api.github.com/users/Veluchs/following{/other_user}", "gists_url": "https://api.github.com/users/Veluchs/gists{/gist_id}", "starred_url": "https://api.github.com/users/Veluchs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Veluchs/subscriptions", "organizations_url": "https://api.github.com/users/Veluchs/orgs", "repos_url": "https://api.github.com/users/Veluchs/repos", "events_url": "https://api.github.com/users/Veluchs/events{/privacy}", "received_events_url": "https://api.github.com/users/Veluchs/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
9
2023-06-21T19:10:02
2023-06-30T05:55:39
2023-06-30T05:55:38
NONE
null
### Describe the bug When trying to stream a dataset I get the following error after a few minutes of waiting. ``` FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json If the repo is private or gated, make sure to log in with `huggingface-cli login`. ``` I have already set the proxy environment variables. Downloading a dataset without streaming works as expected. Still, I suspect that this is connected to being behind a proxy. Is there a way to set the proxy for streaming datasets? Possibly a keyword argument that gets passed to fsspec? ### Steps to reproduce the bug This is the code I use. ``` import os os.environ['http_proxy'] = "http://example.com:xxxx" os.environ['https_proxy'] = "http://example.com:xxxx" from datasets import load_dataset ds = load_dataset("facebook/voxpopuli", name="de", streaming=True) ``` ### Expected behavior I would expect the streaming functionality to use the configured proxy settings. ### Environment info - `datasets` version: 2.13.0 - Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - PyArrow version: 11.0.0 - Pandas version: 2.0.2
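One workaround that is sometimes suggested (a sketch only, not verified for this exact dataset): recent `datasets` versions accept a `storage_options` mapping that is forwarded to the fsspec HTTP filesystem, so asking aiohttp to trust the proxy environment variables may help. The proxy URL below is the same placeholder as in the report.

```python
import os

from datasets import load_dataset

os.environ["HTTP_PROXY"] = "http://example.com:xxxx"   # placeholder proxy
os.environ["HTTPS_PROXY"] = "http://example.com:xxxx"

ds = load_dataset(
    "facebook/voxpopuli",
    name="de",
    streaming=True,
    # Forwarded to fsspec's HTTPFileSystem -> aiohttp.ClientSession(trust_env=True),
    # which makes aiohttp honor the proxy environment variables set above.
    storage_options={"client_kwargs": {"trust_env": True}},
)
```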
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5975/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5974
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5974/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5974/comments
https://api.github.com/repos/huggingface/datasets/issues/5974/events
https://github.com/huggingface/datasets/pull/5974
1,767,981,231
PR_kwDODunzps5TkXCb
5,974
Deprecate `errors` param in favor of `encoding_errors` in text builder
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-21T16:31:38
2023-06-26T10:34:43
2023-06-26T10:27:40
CONTRIBUTOR
null
For consistency with the JSON builder and Pandas
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5974/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5974", "html_url": "https://github.com/huggingface/datasets/pull/5974", "diff_url": "https://github.com/huggingface/datasets/pull/5974.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5974.patch", "merged_at": "2023-06-26T10:27:40" }
true
https://api.github.com/repos/huggingface/datasets/issues/5972
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5972/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5972/comments
https://api.github.com/repos/huggingface/datasets/issues/5972/events
https://github.com/huggingface/datasets/pull/5972
1,767,897,485
PR_kwDODunzps5TkE7K
5,972
Filter unsupported extensions
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2023-06-21T15:43:01
2023-06-22T14:23:29
2023-06-22T14:16:26
MEMBER
null
I used a regex to filter the data files based on their extension for packaged builders. In my tests, a regex is about 10x faster than using `in` to check if the extension is in the list of supported extensions. Supersedes https://github.com/huggingface/datasets/pull/5850 Closes https://github.com/huggingface/datasets/issues/5849 I also did a small change to favor the parquet module in case of a draw in the extension counter.
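Not the code from this PR, but a rough sketch of the kind of comparison described above (the extension list and file names are made up for illustration):

```python
import os
import re
import timeit

extensions = [".csv", ".tsv", ".json", ".jsonl", ".parquet", ".txt", ".arrow"]
files = [f"data/file_{i}.parquet" for i in range(1_000)]

# One pre-compiled regex over the file name vs. an `in` check on the extension list.
pattern = re.compile(r"\.(?:csv|tsv|json|jsonl|parquet|txt|arrow)$")

def with_in():
    return [f for f in files if os.path.splitext(f)[1] in extensions]

def with_regex():
    return [f for f in files if pattern.search(f)]

print("in-list check:", timeit.timeit(with_in, number=200))
print("regex check:  ", timeit.timeit(with_regex, number=200))
```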
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5972/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5972", "html_url": "https://github.com/huggingface/datasets/pull/5972", "diff_url": "https://github.com/huggingface/datasets/pull/5972.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5972.patch", "merged_at": "2023-06-22T14:16:26" }
true
https://api.github.com/repos/huggingface/datasets/issues/5971
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5971/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5971/comments
https://api.github.com/repos/huggingface/datasets/issues/5971/events
https://github.com/huggingface/datasets/issues/5971
1,767,053,635
I_kwDODunzps5pUxlD
5,971
Docs: make "repository structure" easier to find
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
{ "login": "benjaminbrown038", "id": 35114142, "node_id": "MDQ6VXNlcjM1MTE0MTQy", "avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminbrown038", "html_url": "https://github.com/benjaminbrown038", "followers_url": "https://api.github.com/users/benjaminbrown038/followers", "following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}", "starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions", "organizations_url": "https://api.github.com/users/benjaminbrown038/orgs", "repos_url": "https://api.github.com/users/benjaminbrown038/repos", "events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}", "received_events_url": "https://api.github.com/users/benjaminbrown038/received_events", "type": "User", "site_admin": false }
[ { "login": "benjaminbrown038", "id": 35114142, "node_id": "MDQ6VXNlcjM1MTE0MTQy", "avatar_url": "https://avatars.githubusercontent.com/u/35114142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminbrown038", "html_url": "https://github.com/benjaminbrown038", "followers_url": "https://api.github.com/users/benjaminbrown038/followers", "following_url": "https://api.github.com/users/benjaminbrown038/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminbrown038/gists{/gist_id}", "starred_url": "https://api.github.com/users/benjaminbrown038/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminbrown038/subscriptions", "organizations_url": "https://api.github.com/users/benjaminbrown038/orgs", "repos_url": "https://api.github.com/users/benjaminbrown038/repos", "events_url": "https://api.github.com/users/benjaminbrown038/events{/privacy}", "received_events_url": "https://api.github.com/users/benjaminbrown038/received_events", "type": "User", "site_admin": false } ]
null
5
2023-06-21T08:26:44
2023-07-05T06:51:38
null
CONTRIBUTOR
null
The page https://huggingface.co/docs/datasets/repository_structure explains how to create a simple repository structure without a dataset script. It's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5971/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5971/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5970/comments
https://api.github.com/repos/huggingface/datasets/issues/5970/events
https://github.com/huggingface/datasets/issues/5970
1,766,010,356
I_kwDODunzps5pQy30
5,970
description disappearing from Info when Uploading a Dataset Created with `from_dict`
{ "login": "balisujohn", "id": 20377292, "node_id": "MDQ6VXNlcjIwMzc3Mjky", "avatar_url": "https://avatars.githubusercontent.com/u/20377292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/balisujohn", "html_url": "https://github.com/balisujohn", "followers_url": "https://api.github.com/users/balisujohn/followers", "following_url": "https://api.github.com/users/balisujohn/following{/other_user}", "gists_url": "https://api.github.com/users/balisujohn/gists{/gist_id}", "starred_url": "https://api.github.com/users/balisujohn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/balisujohn/subscriptions", "organizations_url": "https://api.github.com/users/balisujohn/orgs", "repos_url": "https://api.github.com/users/balisujohn/repos", "events_url": "https://api.github.com/users/balisujohn/events{/privacy}", "received_events_url": "https://api.github.com/users/balisujohn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2023-06-20T19:18:26
2023-06-22T14:23:56
null
NONE
null
### Describe the bug When uploading a dataset created locally using `from_dict` with a specified `description` field. It appears before upload, but is missing after upload and re-download. ### Steps to reproduce the bug I think the most relevant pattern in the code might be the following lines: ``` description_json_str = json.dumps( { "dataset_id": dataset.spec.dataset_id, "env_name": dataset.spec.env_spec.id, "action_space": serialize_space(dataset.spec.action_space), "observation_space": serialize_space(dataset.spec.observation_space), } ) hugging_face_dataset = Dataset.from_dict( episodes_dict, info=DatasetInfo(description=description_json_str) ) ``` Which comes from this function https://github.com/balisujohn/minarai/blob/8e023727f0a8488c4451651d9f7a79b981412c40/minari/integrations/hugging_face.py#L39 To replicate, clone this branch of my Minari fork https://github.com/balisujohn/minarai/tree/dev-huggingface then run ``` python3.8 -m venv env source env/bin/activate python3 -m pip install -e . python3 -m pip install pytest ``` The change the hugging face repo path in the test called `test_hugging_face_push_and_pull_dataset` in `tests/integrations/test_hugging_face.py` to one you have permissions to write to. Then run: ``` pytest tests/integrations/test_hugging_face.py::test_hugging_face_push_and_pull_dataset ``` ### Expected behavior DATASET INFO BEFORE UPLOADING DatasetInfo(description='{"dataset_id": "dummy-combo-test-v0", "env_name": "DummyComboEnv-v0", "action_space": "{\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [4.0], \\"high\\": [5.0]}]}", "observation_space": "{\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, {\\"type\\": \\"Dict\\", \\"subspaces\\": {\\"component_1\\": {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [-1.0], \\"high\\": [1.0]}, \\"component_2\\": {\\"type\\": \\"Dict\\", \\"subspaces\\": {\\"subcomponent_1\\": {\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [2.0], \\"high\\": [3.0]}, \\"subcomponent_2\\": {\\"type\\": \\"Tuple\\", \\"subspaces\\": [{\\"type\\": \\"Box\\", \\"dtype\\": \\"float32\\", \\"shape\\": [1], \\"low\\": [4.0], \\"high\\": [5.0]}, {\\"type\\": \\"Discrete\\", \\"dtype\\": \\"int64\\", \\"start\\": 0, \\"n\\": 10}]}}}}}]}]}"}', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 
'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits=None, download_checksums=None, download_size=None, post_processing_size=None, dataset_size=None, size_in_bytes=None) ... DATASET INFO AFTER UPLOADING AND DOWNLOADING DatasetInfo(description='', citation='', homepage='', license='', features={'observations': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': {'component_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'component_2': {'subcomponent_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'subcomponent_2': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Value(dtype='int64', id=None)}}}}}, 'actions': {'_index_0': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), '_index_1': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None)}, 'rewards': Value(dtype='int64', id=None), 'truncations': Value(dtype='bool', id=None), 'terminations': Value(dtype='bool', id=None), 'episode_ids': Value(dtype='int64', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name=None, config_name=None, version=None, splits={'train': SplitInfo(name='train', num_bytes=4846, num_examples=60, shard_lengths=None, dataset_name='parquet')}, download_checksums={'https://huggingface.co/datasets/balisujohn/minari_test/resolve/8217b614ff9ba5edc1a30c7df430e92a46f65363/data/train-00000-of-00001-7c5900b93b35745e.parquet': {'num_bytes': 9052, 'checksum': None}}, download_size=9052, post_processing_size=None, dataset_size=4846, size_in_bytes=13898) ... ### Environment info - `datasets` version: 2.13.0 - Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.2
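For reference, a stripped-down sketch of the same round trip without the Minari code (the repo id is a placeholder you must be able to write to; the description string is shortened for illustration):

```python
from datasets import Dataset, DatasetInfo, load_dataset

ds = Dataset.from_dict(
    {"col1": [1, 2, 3]},
    info=DatasetInfo(description='{"dataset_id": "dummy-combo-test-v0"}'),
)
print("before:", repr(ds.info.description))   # populated before upload

ds.push_to_hub("your-username/description-roundtrip-test")  # placeholder repo
ds2 = load_dataset("your-username/description-roundtrip-test", split="train")
print("after: ", repr(ds2.info.description))  # empty after re-download (the reported bug)
```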
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5970/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5969
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5969/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5969/comments
https://api.github.com/repos/huggingface/datasets/issues/5969/events
https://github.com/huggingface/datasets/pull/5969
1,765,529,905
PR_kwDODunzps5Tcgq4
5,969
Add `encoding` and `errors` params to JSON loader
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-20T14:28:35
2023-06-21T13:39:50
2023-06-21T13:32:22
CONTRIBUTOR
null
"Requested" in https://discuss.huggingface.co/t/utf-16-for-datasets/43828/3. `pd.read_json` also has these parameters, so it makes sense to be consistent.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5969/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5969", "html_url": "https://github.com/huggingface/datasets/pull/5969", "diff_url": "https://github.com/huggingface/datasets/pull/5969.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5969.patch", "merged_at": "2023-06-21T13:32:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/5968
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5968/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5968/comments
https://api.github.com/repos/huggingface/datasets/issues/5968/events
https://github.com/huggingface/datasets/issues/5968
1,765,252,561
I_kwDODunzps5pN53R
5,968
Common Voice datasets still need `use_auth_token=True`
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-20T11:58:37
2023-07-29T16:08:59
2023-07-29T16:08:58
MEMBER
null
### Describe the bug We don't need to pass `use_auth_token=True` anymore to download gated datasets or models, so the following should work if correctly logged in. ```py from datasets import load_dataset load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation") ``` However it throws an error - probably because something weird is hardcoded into the dataset loading script. ### Steps to reproduce the bug 1.) ``` huggingface-cli login ``` 2.) Make sure that you have accepted the license here: https://huggingface.co/datasets/mozilla-foundation/common_voice_6_1 3.) Run: ```py from datasets import load_dataset load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation") ``` 4.) You'll get: ``` File ~/hf/lib/python3.10/site-packages/datasets/builder.py:963, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 961 split_dict = SplitDict(dataset_name=self.name) 962 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 963 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 965 # Checksums verification 966 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums: File ~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_6_1/f4d7854c466f5bd4908988dbd39044ec4fc634d89e0515ab0c51715c0127ffe3/common_voice_6_1.py:150, in CommonVoice._split_generators(self, dl_manager) 148 hf_auth_token = dl_manager.download_config.use_auth_token 149 if hf_auth_token is None: --> 150 raise ConnectionError( 151 "Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset" 152 ) 154 bundle_url_template = STATS["bundleURLTemplate"] 155 bundle_version = bundle_url_template.split("/")[0] ConnectionError: Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset ``` ### Expected behavior One should not have to pass `use_auth_token=True`. Also see discussion here: https://github.com/huggingface/blog/pull/1243#discussion_r1235131150 ### Environment info ``` - `datasets` version: 2.13.0 - Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.0.dev0 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5968/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5967/comments
https://api.github.com/repos/huggingface/datasets/issues/5967/events
https://github.com/huggingface/datasets/issues/5967
1,763,926,520
I_kwDODunzps5pI2H4
5,967
Config name / split name lost after map with multiproc
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2023-06-19T17:27:36
2023-06-28T08:55:25
null
CONTRIBUTOR
null
### Describe the bug Performing a `.map` method on a dataset loses it's config name / split name only if run with multiproc ### Steps to reproduce the bug ```python from datasets import Audio, load_dataset from transformers import AutoFeatureExtractor import numpy as np # load dummy dataset libri = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean") # make train / test splits libri = libri["validation"].train_test_split(seed=42, shuffle=True, test_size=0.1) # example feature extractor model_id = "ntu-spml/distilhubert" feature_extractor = AutoFeatureExtractor.from_pretrained(model_id, do_normalize=True, return_attention_mask=True) sampling_rate = feature_extractor.sampling_rate libri = libri.cast_column("audio", Audio(sampling_rate=sampling_rate)) max_duration = 30.0 def preprocess_function(examples): audio_arrays = [x["array"] for x in examples["audio"]] inputs = feature_extractor( audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=int(feature_extractor.sampling_rate * max_duration), truncation=True, return_attention_mask=True, ) return inputs # single proc map libri_encoded = libri.map( preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=1 ) print(10 * "=" ,"Single processing", 10 * "=") print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split) print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split) # multi proc map libri_encoded = libri.map( preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=2 ) print(10 * "=" ,"Multi processing", 10 * "=") print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split) print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split) ``` **Print Output:** ``` ========== Single processing ========== Config name before: clean Split name before: validation Config name after: clean Split name after: validation ========== Multi processing ========== Config name before: clean Split name before: validation Config name after: None Split name after: None ``` => we can see that the config/split names are lost in the multiprocessing setting ### Expected behavior Should retain both config / split names in the multiproc setting ### Environment info - `datasets` version: 2.13.1.dev0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5967/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5966
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5966/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5966/comments
https://api.github.com/repos/huggingface/datasets/issues/5966/events
https://github.com/huggingface/datasets/pull/5966
1,763,885,914
PR_kwDODunzps5TXBLP
5,966
Fix JSON generation in benchmarks CI
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-19T16:56:06
2023-06-19T17:29:11
2023-06-19T17:22:10
CONTRIBUTOR
null
Related to changes made in https://github.com/iterative/dvc/pull/9475
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5966/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5966", "html_url": "https://github.com/huggingface/datasets/pull/5966", "diff_url": "https://github.com/huggingface/datasets/pull/5966.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5966.patch", "merged_at": "2023-06-19T17:22:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/5965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5965/comments
https://api.github.com/repos/huggingface/datasets/issues/5965/events
https://github.com/huggingface/datasets/issues/5965
1,763,648,540
I_kwDODunzps5pHyQc
5,965
"Couldn't cast array of type" in complex datasets
{ "login": "piercefreeman", "id": 1712066, "node_id": "MDQ6VXNlcjE3MTIwNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/1712066?v=4", "gravatar_id": "", "url": "https://api.github.com/users/piercefreeman", "html_url": "https://github.com/piercefreeman", "followers_url": "https://api.github.com/users/piercefreeman/followers", "following_url": "https://api.github.com/users/piercefreeman/following{/other_user}", "gists_url": "https://api.github.com/users/piercefreeman/gists{/gist_id}", "starred_url": "https://api.github.com/users/piercefreeman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/piercefreeman/subscriptions", "organizations_url": "https://api.github.com/users/piercefreeman/orgs", "repos_url": "https://api.github.com/users/piercefreeman/repos", "events_url": "https://api.github.com/users/piercefreeman/events{/privacy}", "received_events_url": "https://api.github.com/users/piercefreeman/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
4
2023-06-19T14:16:14
2023-07-26T15:13:53
2023-07-26T15:13:53
NONE
null
### Describe the bug When doing a map of a dataset with complex types, sometimes `datasets` is unable to interpret the valid schema of a returned datasets.map() function. This often comes from conflicting types, like when both empty lists and filled lists are competing for the same field value. This is prone to happen in batch mapping, when the mapper returns a sequence of null/empty values and other batches are non-null. A workaround is to manually cast the new batch to a pyarrow table (like implemented in this [workaround](https://github.com/piercefreeman/lassen/pull/3)) but it feels like this ideally should be solved at the core library level. Note that the reproduction case only throws this error if the first datapoint has the empty list. If it is processed later, datasets already detects its representation as list-type and therefore allows the empty list to be provided. ### Steps to reproduce the bug A trivial reproduction case: ```python from typing import Iterator, Any import pandas as pd from datasets import Dataset def batch_to_examples(batch: dict[str, list[Any]]) -> Iterator[dict[str, Any]]: for i in range(next(iter(lengths))): yield {feature: values[i] for feature, values in batch.items()} def examples_to_batch(examples) -> dict[str, list[Any]]: batch = {} for example in examples: for feature, value in example.items(): if feature not in batch: batch[feature] = [] batch[feature].append(value) return batch def batch_process(examples, explicit_schema: bool): new_examples = [] for example in batch_to_examples(examples): new_examples.append(dict(texts=example["raw_text"].split())) return examples_to_batch(new_examples) df = pd.DataFrame( [ {"raw_text": ""}, {"raw_text": "This is a test"}, {"raw_text": "This is another test"}, ] ) dataset = Dataset.from_pandas(df) # datasets won't be able to typehint a dataset that starts with an empty example. with pytest.raises(TypeError, match="Couldn't cast array of type"): dataset = dataset.map( batch_process, batched=True, batch_size=1, num_proc=1, remove_columns=dataset.column_names, ) ``` This results in crashes like: ```bash File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper return func(array, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 2109, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1819, in wrapper return func(array, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/piercefreeman/Library/Caches/pypoetry/virtualenvs/example-9kBqeSPy-py3.11/lib/python3.11/site-packages/datasets/table.py", line 1998, in array_cast raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}") TypeError: Couldn't cast array of type string to null ``` ### Expected behavior The code should successfully map and create a new dataset without error. ### Environment info Mac OSX, Linux
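A self-contained sketch of one possible workaround (the repro above references an undefined `lengths` variable, so this version recomputes the batch length; declaring the output schema via `features=` is one way to avoid the inference conflict, though it may not cover every nested type):

```python
from datasets import Dataset, Features, Sequence, Value

def tokenize_batch(batch):
    # The first single-row batch produces an empty list, which would otherwise be
    # inferred as a null-typed column and clash with later string-typed batches.
    return {"texts": [text.split() for text in batch["raw_text"]]}

dataset = Dataset.from_dict(
    {"raw_text": ["", "This is a test", "This is another test"]}
)

processed = dataset.map(
    tokenize_batch,
    batched=True,
    batch_size=1,
    remove_columns=dataset.column_names,
    # Declare the schema up front instead of letting the first batch decide it.
    features=Features({"texts": Sequence(Value("string"))}),
)
print(processed["texts"])
```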
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5965/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5964/comments
https://api.github.com/repos/huggingface/datasets/issues/5964/events
https://github.com/huggingface/datasets/pull/5964
1,763,513,574
PR_kwDODunzps5TVweZ
5,964
Always return list in `list_datasets`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-06-19T13:07:08
2023-06-19T17:29:37
2023-06-19T17:22:41
CONTRIBUTOR
null
Fix #5925. Plus, deprecate `list_datasets`/`inspect_dataset` in favor of `huggingface_hub.list_datasets`/the "git clone workflow" (which downloads data files)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5964/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5964", "html_url": "https://github.com/huggingface/datasets/pull/5964", "diff_url": "https://github.com/huggingface/datasets/pull/5964.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5964.patch", "merged_at": "2023-06-19T17:22:41" }
true
https://api.github.com/repos/huggingface/datasets/issues/5963
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5963/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5963/comments
https://api.github.com/repos/huggingface/datasets/issues/5963/events
https://github.com/huggingface/datasets/issues/5963
1,762,774,457
I_kwDODunzps5pEc25
5,963
Got an error _pickle.PicklingError when using Dataset.from_spark.
{ "login": "yanzia12138", "id": 112800614, "node_id": "U_kgDOBrkzZg", "avatar_url": "https://avatars.githubusercontent.com/u/112800614?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanzia12138", "html_url": "https://github.com/yanzia12138", "followers_url": "https://api.github.com/users/yanzia12138/followers", "following_url": "https://api.github.com/users/yanzia12138/following{/other_user}", "gists_url": "https://api.github.com/users/yanzia12138/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanzia12138/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanzia12138/subscriptions", "organizations_url": "https://api.github.com/users/yanzia12138/orgs", "repos_url": "https://api.github.com/users/yanzia12138/repos", "events_url": "https://api.github.com/users/yanzia12138/events{/privacy}", "received_events_url": "https://api.github.com/users/yanzia12138/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2023-06-19T05:30:35
2023-07-24T11:55:46
2023-07-24T11:55:46
NONE
null
python 3.9.2 Got an error _pickle.PicklingError use Dataset.from_spark. Did the dataset import load data from spark dataframe using multi-node Spark cluster df = spark.read.parquet(args.input_data).repartition(50) ds = Dataset.from_spark(df, keep_in_memory=True, cache_dir="/pnc-data/data/nuplan/t5_spark/cache_data") ds.save_to_disk(args.output_data) Error : _pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforma tion. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063. 23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.) _Originally posted by @yanzia12138 in https://github.com/huggingface/datasets/issues/5701#issuecomment-1594674306_ W Traceback (most recent call last): File "/home/work/main.py", line 100, in <module> run(args) File "/home/work/main.py", line 80, in run ds = Dataset.from_spark(df1, keep_in_memory=True, File "/home/work/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1281, in from_spark return SparkDatasetReader( File "/home/work/.local/lib/python3.9/site-packages/datasets/io/spark.py", line 53, in read self.builder.download_and_prepare( File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 909, in download_and_prepare self._download_and_prepare( File "/home/work/.local/lib/python3.9/site-packages/datasets/builder.py", line 1004, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 254, in _prepare_split self._validate_cache_dir() File "/home/work/.local/lib/python3.9/site-packages/datasets/packaged_modules/spark/spark.py", line 122, in _validate_cache_dir self._spark.sparkContext.parallelize(range(1), 1).mapPartitions(create_cache_and_write_probe).collect() File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 950, in collect sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd()) File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2951, in _jrdd wrapped_func = _wrap_function(self.ctx, self.func, self._prev_jrdd_deserializer, File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2830, in _wrap_function pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command) File "/home/work/.local/lib/python3.9/site-packages/pyspark/rdd.py", line 2816, in _prepare_for_python_RDD pickled_command = ser.dumps(command) File "/home/work/.local/lib/python3.9/site-packages/pyspark/serializers.py", line 447, in dumps raise pickle.PicklingError(msg) _pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. S parkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063. 23/06/19 13:51:21 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5963/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5962
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5962/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5962/comments
https://api.github.com/repos/huggingface/datasets/issues/5962/events
https://github.com/huggingface/datasets/issues/5962
1,761,589,882
I_kwDODunzps5o_7p6
5,962
Issue with train_test_split maintaining the same underlying PyArrow Table
{ "login": "Oziel14", "id": 70730520, "node_id": "MDQ6VXNlcjcwNzMwNTIw", "avatar_url": "https://avatars.githubusercontent.com/u/70730520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Oziel14", "html_url": "https://github.com/Oziel14", "followers_url": "https://api.github.com/users/Oziel14/followers", "following_url": "https://api.github.com/users/Oziel14/following{/other_user}", "gists_url": "https://api.github.com/users/Oziel14/gists{/gist_id}", "starred_url": "https://api.github.com/users/Oziel14/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Oziel14/subscriptions", "organizations_url": "https://api.github.com/users/Oziel14/orgs", "repos_url": "https://api.github.com/users/Oziel14/repos", "events_url": "https://api.github.com/users/Oziel14/events{/privacy}", "received_events_url": "https://api.github.com/users/Oziel14/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2023-06-17T02:19:58
2023-06-17T02:19:58
null
NONE
null
### Describe the bug

I've been using the train_test_split method in the datasets module to split my HuggingFace Dataset into separate training, validation, and testing subsets. However, I've noticed an issue where the split datasets appear to maintain the same underlying PyArrow Table.

### Steps to reproduce the bug

1. Load any dataset ```dataset = load_dataset("lhoestq/demo1")```
2. Try the next code:
```python
from datasets import Dataset, DatasetDict

train_size = 0.6

split_train = dataset["train"].train_test_split(
    train_size=train_size,
)

separate_dataset_dict = DatasetDict({
    "train": split_train["train"],
    "test": split_train["test"],
})
```
3. Printing the splits with ```print(separate_dataset_dict)``` indicates that they have 3 and 2 rows respectively.
4. But the following code:
```python
print(len(separate_dataset_dict["train"].data['id']))
print(len(separate_dataset_dict["test"].data['id']))
```
indicates that both underlying tables still have 5 rows.

### Expected behavior

However, I've noticed that train_test_split["train"].data, test_val_split["train"].data, and test_val_split["test"].data are identical, suggesting that they all point to the same underlying PyArrow Table. This means that the split datasets are not independent, as I expected.

I believe this is a bug in the train_test_split implementation, as I would expect this function to return datasets with separate underlying PyArrow Tables. Could you please help me understand if this is expected behavior, or if there's a workaround to create truly independent split datasets?

I would appreciate any assistance with this issue. Thank you.

### Environment info

I tried in Colab:
- `datasets` version: 2.13.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1

and my PC:
- `datasets` version: 2.13.0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
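For reference, a minimal sketch of one possible workaround (not part of the original report): `train_test_split` returns views backed by an indices mapping over the same Arrow table, so materializing each split with `flatten_indices()` should yield truly independent tables.

```python
from datasets import load_dataset

dataset = load_dataset("lhoestq/demo1")
split_train = dataset["train"].train_test_split(train_size=0.6)

# flatten_indices() writes each view out as its own Arrow table,
# so .data no longer points at the full 5-row table.
independent_train = split_train["train"].flatten_indices()
independent_test = split_train["test"].flatten_indices()

print(len(independent_train.data["id"]))  # expected: 3
print(len(independent_test.data["id"]))   # expected: 2
```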
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5962/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5961/comments
https://api.github.com/repos/huggingface/datasets/issues/5961/events
https://github.com/huggingface/datasets/issues/5961
1,758,525,111
I_kwDODunzps5o0Pa3
5,961
IterableDataset: split by node and map may preprocess samples that will be skipped anyway
{ "login": "johnchienbronci", "id": 27708347, "node_id": "MDQ6VXNlcjI3NzA4MzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27708347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnchienbronci", "html_url": "https://github.com/johnchienbronci", "followers_url": "https://api.github.com/users/johnchienbronci/followers", "following_url": "https://api.github.com/users/johnchienbronci/following{/other_user}", "gists_url": "https://api.github.com/users/johnchienbronci/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnchienbronci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnchienbronci/subscriptions", "organizations_url": "https://api.github.com/users/johnchienbronci/orgs", "repos_url": "https://api.github.com/users/johnchienbronci/repos", "events_url": "https://api.github.com/users/johnchienbronci/events{/privacy}", "received_events_url": "https://api.github.com/users/johnchienbronci/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
9
2023-06-15T10:29:10
2023-09-01T10:35:11
null
NONE
null
There are two ways an iterable dataset can be split by node:
1. if the number of shards is a factor of the number of GPUs: in that case the shards are evenly distributed per GPU
2. otherwise, each GPU iterates over the data and at the end keeps 1 sample out of n(GPUs), skipping the others.

In case 2 it's therefore possible to have the same examples passed to `prepare_dataset` for each GPU. This doesn't sound optimized though, because it runs the preprocessing on samples that won't be used in the end.

Could you open a new issue so that we can discuss this and find a solution?

_Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/5360#issuecomment-1592729051_
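A minimal sketch of the situation being described (illustrative only; the dataset and map function are placeholders): when the shard count is not divisible by the world size, preprocessing attached with `map` runs on every example on every node, even though each node ultimately keeps only one example out of `world_size`.

```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("c4", "en", split="train", streaming=True)  # placeholder dataset

# Preprocessing attached before splitting: in case 2 this runs on all examples per node.
ds = ds.map(lambda example: {"n_chars": len(example["text"])})

# Each node keeps 1 example out of world_size when shards can't be distributed evenly.
ds = split_dataset_by_node(ds, rank=0, world_size=8)

for example in ds.take(4):
    print(example["n_chars"])
```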
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5961/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5959/comments
https://api.github.com/repos/huggingface/datasets/issues/5959/events
https://github.com/huggingface/datasets/issues/5959
1,757,397,507
I_kwDODunzps5ov8ID
5,959
read metric glue.py from local file
{ "login": "JiazhaoLi", "id": 31148397, "node_id": "MDQ6VXNlcjMxMTQ4Mzk3", "avatar_url": "https://avatars.githubusercontent.com/u/31148397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JiazhaoLi", "html_url": "https://github.com/JiazhaoLi", "followers_url": "https://api.github.com/users/JiazhaoLi/followers", "following_url": "https://api.github.com/users/JiazhaoLi/following{/other_user}", "gists_url": "https://api.github.com/users/JiazhaoLi/gists{/gist_id}", "starred_url": "https://api.github.com/users/JiazhaoLi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JiazhaoLi/subscriptions", "organizations_url": "https://api.github.com/users/JiazhaoLi/orgs", "repos_url": "https://api.github.com/users/JiazhaoLi/repos", "events_url": "https://api.github.com/users/JiazhaoLi/events{/privacy}", "received_events_url": "https://api.github.com/users/JiazhaoLi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-06-14T17:59:35
2023-06-14T18:04:16
2023-06-14T18:04:16
NONE
null
### Describe the bug

Currently, the server is off-line. I am using the GLUE metric from the local file downloaded from the Hub.

I downloaded/cached the datasets using `load_dataset('glue','sst2', cache_dir='/xxx')`, and then, in off-line mode, I use `load_dataset('xxx/glue.py','sst2', cache_dir='/xxx')`. I can successfully reuse the cached datasets.

My problem is with load_metric. When I run `load_metric('xxx/glue_metric.py','sst2', cache_dir='/xxx')`, it returns

` File "xx/lib64/python3.9/site-packages/datasets/utils/deprecation_utils.py", line 46, in wrapper
return deprecated_function(*args, **kwargs)
File "xx//lib64/python3.9/site-packages/datasets/load.py", line 1392, in load_metric
metric = metric_cls(
TypeError: 'NoneType' object is not callable`

Thanks in advance for help!

### Steps to reproduce the bug

N/A

### Expected behavior

N/A

### Environment info

`datasets == 2.12.0`
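A hedged sketch of one possible offline setup (not from the original report; it assumes the metric was cached while online and that the standalone `evaluate` library is installed, since `load_metric` is deprecated):

```python
import os

# Prevent any Hub requests; the cached dataset/metric modules are reused instead.
os.environ["HF_DATASETS_OFFLINE"] = "1"
os.environ["HF_EVALUATE_OFFLINE"] = "1"

import evaluate

# Load GLUE/SST-2 from the local cache, or point directly at a local copy of the
# metric script, e.g. evaluate.load("/path/to/glue.py", "sst2").
metric = evaluate.load("glue", "sst2")
print(metric.compute(predictions=[0, 1], references=[0, 1]))
```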
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5959/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5958/comments
https://api.github.com/repos/huggingface/datasets/issues/5958/events
https://github.com/huggingface/datasets/pull/5958
1,757,265,971
PR_kwDODunzps5TA3__
5,958
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-14T16:26:34
2023-06-14T16:34:55
2023-06-14T16:26:51
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5958/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5958", "html_url": "https://github.com/huggingface/datasets/pull/5958", "diff_url": "https://github.com/huggingface/datasets/pull/5958.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5958.patch", "merged_at": "2023-06-14T16:26:51" }
true
https://api.github.com/repos/huggingface/datasets/issues/5957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5957/comments
https://api.github.com/repos/huggingface/datasets/issues/5957/events
https://github.com/huggingface/datasets/pull/5957
1,757,252,466
PR_kwDODunzps5TA1EB
5,957
Release: 2.13.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-14T16:17:26
2023-06-14T16:33:39
2023-06-14T16:24:39
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5957/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5957", "html_url": "https://github.com/huggingface/datasets/pull/5957", "diff_url": "https://github.com/huggingface/datasets/pull/5957.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5957.patch", "merged_at": "2023-06-14T16:24:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/5956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5956/comments
https://api.github.com/repos/huggingface/datasets/issues/5956/events
https://github.com/huggingface/datasets/pull/5956
1,756,959,367
PR_kwDODunzps5S_1o2
5,956
Fix ArrowExamplesIterable.shard_data_sources
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-14T13:50:38
2023-06-14T14:43:12
2023-06-14T14:33:45
MEMBER
null
ArrowExamplesIterable.shard_data_sources was outdated. I also fixed a warning message by not using format_type= in with_format().
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5956/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5956", "html_url": "https://github.com/huggingface/datasets/pull/5956", "diff_url": "https://github.com/huggingface/datasets/pull/5956.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5956.patch", "merged_at": "2023-06-14T14:33:45" }
true
https://api.github.com/repos/huggingface/datasets/issues/5955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5955/comments
https://api.github.com/repos/huggingface/datasets/issues/5955/events
https://github.com/huggingface/datasets/issues/5955
1,756,827,133
I_kwDODunzps5otw39
5,955
Strange bug in loading local JSON files, using load_dataset
{ "login": "Night-Quiet", "id": 73934131, "node_id": "MDQ6VXNlcjczOTM0MTMx", "avatar_url": "https://avatars.githubusercontent.com/u/73934131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Night-Quiet", "html_url": "https://github.com/Night-Quiet", "followers_url": "https://api.github.com/users/Night-Quiet/followers", "following_url": "https://api.github.com/users/Night-Quiet/following{/other_user}", "gists_url": "https://api.github.com/users/Night-Quiet/gists{/gist_id}", "starred_url": "https://api.github.com/users/Night-Quiet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Night-Quiet/subscriptions", "organizations_url": "https://api.github.com/users/Night-Quiet/orgs", "repos_url": "https://api.github.com/users/Night-Quiet/repos", "events_url": "https://api.github.com/users/Night-Quiet/events{/privacy}", "received_events_url": "https://api.github.com/users/Night-Quiet/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-14T12:46:00
2023-06-21T14:42:15
2023-06-21T14:42:15
NONE
null
### Describe the bug I am using 'load_dataset 'loads a JSON file, but I found a strange bug: an error will be reported when the length of the JSON file exceeds 160000 (uncertain exact number). I have checked the data through the following code and there are no issues. So I cannot determine the true reason for this error. The data is a list containing a dictionary. As follows: [ {'input': 'someting...', 'target': 'someting...', 'type': 'someting...', 'history': ['someting...', ...]}, ... ] ### Steps to reproduce the bug ``` import json from datasets import load_dataset path = "target.json" temp_path = "temp.json" with open(path, "r") as f: data = json.load(f) print(f"\n-------the JSON file length is: {len(data)}-------\n") with open(temp_path, "w") as f: json.dump(data[:160000], f) dataset = load_dataset("json", data_files=temp_path) print("\n-------This works when the JSON file length is 160000-------\n") with open(temp_path, "w") as f: json.dump(data[160000:], f) dataset = load_dataset("json", data_files=temp_path) print("\n-------This works and eliminates data issues-------\n") with open(temp_path, "w") as f: json.dump(data[:170000], f) dataset = load_dataset("json", data_files=temp_path) ``` ### Expected behavior ``` -------the JSON file length is: 173049------- Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-acf3c7f418c5f4b4/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4... Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 3328.81it/s] Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 639.47it/s] Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-acf3c7f418c5f4b4/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data. 100%|████████████████████████████████████████████| 1/1 [00:00<00:00, 265.85it/s] -------This works when the JSON file length is 160000------- Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-a42f04b263ceea6a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4... Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 2038.05it/s] Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 794.83it/s] Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-a42f04b263ceea6a/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4. Subsequent calls will reuse this data. 100%|████████████████████████████████████████████| 1/1 [00:00<00:00, 681.00it/s] -------This works and eliminates data issues------- Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-63f391c89599c7b0/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4... Downloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 3682.44it/s] Extracting data files: 100%|█████████████████████| 1/1 [00:00<00:00, 788.70it/s] Generating train split: 0 examples [00:00, ? 
examples/s]Failed to read file '/home/lakala/hjc/code/pycode/glm/temp.json' with error <class 'pyarrow.lib.ArrowInvalid'>: cannot mix list and non-list, non-null values Traceback (most recent call last): File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1858, in _prepare_split_single for _, table in generator: File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 146, in _generate_tables raise ValueError(f"Not able to read records in the JSON file at {file}.") from None ValueError: Not able to read records in the JSON file at /home/lakala/hjc/code/pycode/glm/temp.json. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/lakala/hjc/code/pycode/glm/test.py", line 22, in <module> dataset = load_dataset("json", data_files=temp_path) File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/load.py", line 1797, in load_dataset builder_instance.download_and_prepare( File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 890, in download_and_prepare self._download_and_prepare( File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 985, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1746, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/lakala/conda/envs/glm/lib/python3.8/site-packages/datasets/builder.py", line 1891, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Environment info ``` Ubuntu==22.04 python==3.8 pytorch-transformers==1.2.0 transformers== 4.27.1 datasets==2.12.0 numpy==1.24.3 pandas==1.5.3 ```
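A hedged debugging sketch (not from the original report): rewriting the file as JSON Lines, one record per line, is one way to narrow down which records trigger the `cannot mix list and non-list` type mismatch, since the builder then parses the data in smaller row batches rather than one large array.

```python
import json
from datasets import load_dataset

# Convert the single large JSON array into JSON Lines (one record per line).
with open("target.json") as f:
    records = json.load(f)

with open("target.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

dataset = load_dataset("json", data_files="target.jsonl")
print(dataset)
```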
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5955/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5954/comments
https://api.github.com/repos/huggingface/datasets/issues/5954/events
https://github.com/huggingface/datasets/pull/5954
1,756,572,994
PR_kwDODunzps5S-hSP
5,954
Better filenotfound for gated
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-14T10:33:10
2023-06-14T12:33:27
2023-06-14T12:26:31
MEMBER
null
close https://github.com/huggingface/datasets/issues/5953 <img width="1292" alt="image" src="https://github.com/huggingface/datasets/assets/42851186/270fe5bc-1739-4878-b7bc-ab6d35336d4d">
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5954/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5954", "html_url": "https://github.com/huggingface/datasets/pull/5954", "diff_url": "https://github.com/huggingface/datasets/pull/5954.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5954.patch", "merged_at": "2023-06-14T12:26:31" }
true
https://api.github.com/repos/huggingface/datasets/issues/5953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5953/comments
https://api.github.com/repos/huggingface/datasets/issues/5953/events
https://github.com/huggingface/datasets/issues/5953
1,756,520,523
I_kwDODunzps5osmBL
5,953
Bad error message when trying to download gated dataset
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
8
2023-06-14T10:03:39
2023-06-14T16:36:51
2023-06-14T12:26:32
MEMBER
null
### Describe the bug

When I attempt to download a model from the Hub that is gated without being logged in, I get a nice error message. E.g.:

```sh
Repository Not Found for url: https://huggingface.co/api/models/DeepFloyd/IF-I-XL-v1.0. Please make sure you specified the correct `repo_id` and `repo_type`. If you are trying to access a private or gated repo, make sure you are authenticated. Invalid username or password.. Will try to load from local cache.
```

If I do the same for a gated dataset on the Hub, I'm not given a nice error message IMO:

```sh
File ~/hf/lib/python3.10/site-packages/fsspec/implementations/http.py:430, in HTTPFileSystem._info(self, url, **kwargs)
    427 except Exception as exc:
    428     if policy == "get":
    429         # If get failed, then raise a FileNotFoundError
--> 430         raise FileNotFoundError(url) from exc
    431     logger.debug(str(exc))
    433 return {"name": url, "size": None, **info, "type": "file"}

FileNotFoundError: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0/resolve/main/n_shards.json
```

### Steps to reproduce the bug

```
huggingface-cli logout
```

and then:

```py
from datasets import load_dataset, Audio

# English
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "en", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]

# Swahili
stream_data = load_dataset("mozilla-foundation/common_voice_13_0", "sw", split="test", streaming=True)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
sw_sample = next(iter(stream_data))["audio"]["array"]
```

### Expected behavior

Better error message

### Environment info

- `datasets` version: 2.12.0
- Platform: Linux-6.2.0-76060200-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.0.dev0
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
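For completeness, a minimal sketch of the authenticated flow that avoids the `FileNotFoundError` for this gated dataset (not part of the bug report, which is only about the error message):

```python
from huggingface_hub import login
from datasets import load_dataset, Audio

login()  # or pass a token from https://huggingface.co/settings/tokens

stream_data = load_dataset(
    "mozilla-foundation/common_voice_13_0", "en",
    split="test", streaming=True, use_auth_token=True,
)
stream_data = stream_data.cast_column("audio", Audio(sampling_rate=16000))
en_sample = next(iter(stream_data))["audio"]["array"]
```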
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5953/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5952/comments
https://api.github.com/repos/huggingface/datasets/issues/5952/events
https://github.com/huggingface/datasets/pull/5952
1,756,481,591
PR_kwDODunzps5S-OIh
5,952
Add Arrow builder docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-14T09:42:46
2023-06-14T14:42:31
2023-06-14T14:34:39
MEMBER
null
following https://github.com/huggingface/datasets/pull/5944
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5952/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5952", "html_url": "https://github.com/huggingface/datasets/pull/5952", "diff_url": "https://github.com/huggingface/datasets/pull/5952.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5952.patch", "merged_at": "2023-06-14T14:34:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/5951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5951/comments
https://api.github.com/repos/huggingface/datasets/issues/5951/events
https://github.com/huggingface/datasets/issues/5951
1,756,363,546
I_kwDODunzps5or_sa
5,951
What is the Right way to use discofuse dataset??
{ "login": "akesh1235", "id": 125154243, "node_id": "U_kgDOB3Wzww", "avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akesh1235", "html_url": "https://github.com/akesh1235", "followers_url": "https://api.github.com/users/akesh1235/followers", "following_url": "https://api.github.com/users/akesh1235/following{/other_user}", "gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}", "starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions", "organizations_url": "https://api.github.com/users/akesh1235/orgs", "repos_url": "https://api.github.com/users/akesh1235/repos", "events_url": "https://api.github.com/users/akesh1235/events{/privacy}", "received_events_url": "https://api.github.com/users/akesh1235/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-06-14T08:38:39
2023-06-14T13:25:06
2023-06-14T12:10:16
NONE
null
[Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6)

**Below is my understanding of the right way to use it. Is it correct? :question: :question:**

The **columns/features from the `DiscoFuse` dataset** that will be the **input to the `encoder` and `decoder`** are:

1. **coherent_first_sentence**
2. **coherent_second_sentence**
3. **incoherent_first_sentence**
4. **incoherent_second_sentence**

The **`encoder` will take these four columns as input and encode them into a sequence of hidden states. The `decoder` will then take these hidden states as input and decode them into a new sentence that fuses the two original sentences together.**

The **discourse_type, connective_string, has_coref_type_pronoun, and has_coref_type_nominal columns will not be used as input to the encoder or decoder.** These columns provide additional information about the dataset, but they are not necessary for the task of sentence fusion.

Please correct me if I am wrong; otherwise, if this understanding is right, how shall I implement this task practically?
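As a hedged illustration only (not an official recipe for this dataset): sentence fusion is commonly framed as a seq2seq task where the incoherent pair is the encoder input and the fused, coherent text is the decoder target, so one practical starting point is to build source/target pairs from the columns listed above:

```python
from datasets import load_dataset

dataset = load_dataset("discofuse", "discofuse-wikipedia", split="train")

def build_example(row):
    # Encoder input: the two unfused sentences; decoder target: the fused text.
    source = (row["incoherent_first_sentence"] + " " + row["incoherent_second_sentence"]).strip()
    target = (row["coherent_first_sentence"] + " " + row["coherent_second_sentence"]).strip()
    return {"source": source, "target": target}

dataset = dataset.map(build_example)
print(dataset[0]["source"])
print(dataset[0]["target"])
```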
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5951/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5950/comments
https://api.github.com/repos/huggingface/datasets/issues/5950/events
https://github.com/huggingface/datasets/issues/5950
1,755,197,946
I_kwDODunzps5onjH6
5,950
Support for data with instance-wise dictionary as features
{ "login": "richardwth", "id": 33274336, "node_id": "MDQ6VXNlcjMzMjc0MzM2", "avatar_url": "https://avatars.githubusercontent.com/u/33274336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richardwth", "html_url": "https://github.com/richardwth", "followers_url": "https://api.github.com/users/richardwth/followers", "following_url": "https://api.github.com/users/richardwth/following{/other_user}", "gists_url": "https://api.github.com/users/richardwth/gists{/gist_id}", "starred_url": "https://api.github.com/users/richardwth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richardwth/subscriptions", "organizations_url": "https://api.github.com/users/richardwth/orgs", "repos_url": "https://api.github.com/users/richardwth/repos", "events_url": "https://api.github.com/users/richardwth/events{/privacy}", "received_events_url": "https://api.github.com/users/richardwth/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
2023-06-13T15:49:00
2023-06-14T12:13:38
null
NONE
null
### Feature request

I notice that when loading data instances with a feature whose type is a python dictionary, the dictionary keys are broadcast so that every instance has the same set of keys. Please see an example in the Motivation section.

Is it possible to avoid this behavior, i.e., load dictionary features as they are and not broadcast the keys among instances? Please note that these dictionaries would have to be processed dynamically at each training iteration into strings (and tokenized).

### Motivation

I am trying to load a dataset from a json file. Each instance of the dataset has a feature that is a dictionary, but its keys depend on the instance. Any two instances may have different keys. For example, imagine a dataset that contains a set of math expressions from a bunch of mutually redundant expressions:

```
{
    "index": 0,
    "feature": {
        "2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"],
        ...
    }
},
...
{
    "index": 9999,
    "feature": {
        "x >= 6": ["x >= 6", "x >= 0", "x >= -1"],
        ...
    }
},
...
```

When directly loading the dataset using `data = load_dataset("json", data_files=file_paths, split='train')`, each instance ends up with all the keys from the other instances and None as values. That is, the instance with index 0 becomes:

```
{
    "index": 0,
    "feature": {
        "2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"],
        ...
        "x >= 6": None, # keys from other instances
        ...
    }
},
```

This is not desirable. Moreover, an error would be raised if I attempt to combine two such datasets using `data = concatenate_datasets(multi_datasets)`, perhaps because their dictionary features contain different keys.

A solution I can think of is to store the dictionary features as a long string, and evaluate it later. Please kindly suggest any other solution using existing methods of datasets.

### Your contribution

N/A
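A minimal sketch of the string-based workaround mentioned above (assumptions: the JSON files are rewritten so that `feature` holds a JSON-encoded string, and the file path is a placeholder):

```python
import json
from datasets import load_dataset

# "feature" is now a plain string column, so Arrow does not unify/broadcast dict keys.
data = load_dataset("json", data_files="file_with_string_features.json", split="train")

def decode_feature(example):
    # Parse the per-instance dictionary lazily; keys can differ between instances.
    example["feature"] = json.loads(example["feature"])
    return example

# Decode at iteration time (e.g. in the training loop or collate_fn) instead of with
# .map(), so the parsed dicts never go back through Arrow's schema unification.
for example in map(decode_feature, data.select(range(2))):
    print(example["index"], list(example["feature"].keys()))
```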
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5950/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5949/comments
https://api.github.com/repos/huggingface/datasets/issues/5949/events
https://github.com/huggingface/datasets/pull/5949
1,754,843,717
PR_kwDODunzps5S4oPC
5,949
Replace metadata utils with `huggingface_hub`'s RepoCard API
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
8
2023-06-13T13:03:19
2023-06-27T16:47:51
2023-06-27T16:38:32
CONTRIBUTOR
null
Use `huggingface_hub`'s RepoCard API instead of `DatasetMetadata` for modifying the card's YAML, and deprecate `datasets.utils.metadata` and `datasets.utils.readme`. After removing these modules, we can also delete `datasets.utils.resources` since the moon landing repo now stores its own version of these resources for the metadata UI. PS: this change requires bumping `huggingface_hub` to 0.13.0 (Transformers requires 0.14.0, so should be ok)
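For context, a small sketch of what the `huggingface_hub` RepoCard API looks like for dataset cards (illustrative only, not code from this PR; the repo id is a placeholder):

```python
from huggingface_hub import DatasetCard

# Load the card (README.md) of a dataset repo and edit its YAML metadata block.
card = DatasetCard.load("lhoestq/demo1")
card.data.license = "apache-2.0"
print(card.data.to_yaml())

# card.push_to_hub("lhoestq/demo1")  # would write the updated card back to the Hub
```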
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5949/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5949/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5949", "html_url": "https://github.com/huggingface/datasets/pull/5949", "diff_url": "https://github.com/huggingface/datasets/pull/5949.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5949.patch", "merged_at": "2023-06-27T16:38:32" }
true
https://api.github.com/repos/huggingface/datasets/issues/5948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5948/comments
https://api.github.com/repos/huggingface/datasets/issues/5948/events
https://github.com/huggingface/datasets/pull/5948
1,754,794,611
PR_kwDODunzps5S4dUt
5,948
Fix sequence of array support for most dtype
{ "login": "qgallouedec", "id": 45557362, "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qgallouedec", "html_url": "https://github.com/qgallouedec", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "repos_url": "https://api.github.com/users/qgallouedec/repos", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-06-13T12:38:59
2023-06-14T15:11:55
2023-06-14T15:03:33
CONTRIBUTOR
null
Fixes #5936. Also, a related fix to #5927.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5948/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5948", "html_url": "https://github.com/huggingface/datasets/pull/5948", "diff_url": "https://github.com/huggingface/datasets/pull/5948.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5948.patch", "merged_at": "2023-06-14T15:03:33" }
true
https://api.github.com/repos/huggingface/datasets/issues/5947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5947/comments
https://api.github.com/repos/huggingface/datasets/issues/5947/events
https://github.com/huggingface/datasets/issues/5947
1,754,359,316
I_kwDODunzps5okWYU
5,947
Return the audio filename when decoding fails due to corrupt files
{ "login": "wetdog", "id": 8949105, "node_id": "MDQ6VXNlcjg5NDkxMDU=", "avatar_url": "https://avatars.githubusercontent.com/u/8949105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wetdog", "html_url": "https://github.com/wetdog", "followers_url": "https://api.github.com/users/wetdog/followers", "following_url": "https://api.github.com/users/wetdog/following{/other_user}", "gists_url": "https://api.github.com/users/wetdog/gists{/gist_id}", "starred_url": "https://api.github.com/users/wetdog/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wetdog/subscriptions", "organizations_url": "https://api.github.com/users/wetdog/orgs", "repos_url": "https://api.github.com/users/wetdog/repos", "events_url": "https://api.github.com/users/wetdog/events{/privacy}", "received_events_url": "https://api.github.com/users/wetdog/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
2
2023-06-13T08:44:09
2023-06-14T12:45:01
null
NONE
null
### Feature request

Return the audio filename when the audio decoding fails. Although there are currently some checks for the mp3 and opus formats against the library version, there are still cases where the audio decoding can fail, e.g. a corrupt file.

### Motivation

When you try to load an audio file dataset and the decoding fails, you can't know which file is corrupt:

```
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f5ab7e38290>: Format not recognised.
```

### Your contribution

Make a PR to add an exception handler for LibsndfileError that returns the audio filename or path when soundfile decoding fails.
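In the meantime, a hedged sketch of a user-side workaround (the dataset name is a placeholder and the column is assumed to be called "audio"): disable automatic decoding and decode manually, so the failing path can be reported.

```python
import io
import soundfile as sf
from datasets import load_dataset, Audio

dataset = load_dataset("my-org/my-audio-dataset", split="train")  # placeholder dataset
dataset = dataset.cast_column("audio", Audio(decode=False))  # keep {"path", "bytes"} undecoded

for example in dataset:
    audio_ref = example["audio"]
    source = audio_ref["path"] or io.BytesIO(audio_ref["bytes"])
    try:
        array, sampling_rate = sf.read(source)
    except sf.LibsndfileError as err:
        print(f"Decoding failed for {audio_ref['path']}: {err}")
```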
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5947/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5946
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5946/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5946/comments
https://api.github.com/repos/huggingface/datasets/issues/5946/events
https://github.com/huggingface/datasets/issues/5946
1,754,234,469
I_kwDODunzps5oj35l
5,946
IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ??
{ "login": "syngokhan", "id": 70565543, "node_id": "MDQ6VXNlcjcwNTY1NTQz", "avatar_url": "https://avatars.githubusercontent.com/u/70565543?v=4", "gravatar_id": "", "url": "https://api.github.com/users/syngokhan", "html_url": "https://github.com/syngokhan", "followers_url": "https://api.github.com/users/syngokhan/followers", "following_url": "https://api.github.com/users/syngokhan/following{/other_user}", "gists_url": "https://api.github.com/users/syngokhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/syngokhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/syngokhan/subscriptions", "organizations_url": "https://api.github.com/users/syngokhan/orgs", "repos_url": "https://api.github.com/users/syngokhan/repos", "events_url": "https://api.github.com/users/syngokhan/events{/privacy}", "received_events_url": "https://api.github.com/users/syngokhan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
6
2023-06-13T07:34:15
2023-07-14T12:04:48
null
NONE
null
### Describe the bug in <cell line: 1>:1 │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1537 in train │ │ │ │ 1534 │ │ inner_training_loop = find_executable_batch_size( │ │ 1535 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │ │ 1536 │ │ ) │ │ ❱ 1537 │ │ return inner_training_loop( │ │ 1538 │ │ │ args=args, │ │ 1539 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │ │ 1540 │ │ │ trial=trial, │ │ │ │ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1789 in _inner_training_loop │ │ │ │ 1786 │ │ │ │ rng_to_sync = True │ │ 1787 │ │ │ │ │ 1788 │ │ │ step = -1 │ │ ❱ 1789 │ │ │ for step, inputs in enumerate(epoch_iterator): │ │ 1790 │ │ │ │ total_batched_samples += 1 │ │ 1791 │ │ │ │ if rng_to_sync: │ │ 1792 │ │ │ │ │ self._load_rng_state(resume_from_checkpoint) │ │ │ │ /usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py:377 in __iter__ │ │ │ │ 374 │ │ dataloader_iter = super().__iter__() │ │ 375 │ │ # We iterate one batch ahead to check when we are at the end │ │ 376 │ │ try: │ │ ❱ 377 │ │ │ current_batch = next(dataloader_iter) │ │ 378 │ │ except StopIteration: │ │ 379 │ │ │ yield │ │ 380 │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:633 in __next__ │ │ │ │ 630 │ │ │ if self._sampler_iter is None: │ │ 631 │ │ │ │ # TODO(https://github.com/pytorch/pytorch/issues/76750) │ │ 632 │ │ │ │ self._reset() # type: ignore[call-arg] │ │ ❱ 633 │ │ │ data = self._next_data() │ │ 634 │ │ │ self._num_yielded += 1 │ │ 635 │ │ │ if self._dataset_kind == _DatasetKind.Iterable and \ │ │ 636 │ │ │ │ │ self._IterableDataset_len_called is not None and \ │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:677 in _next_data │ │ │ │ 674 │ │ │ 675 │ def _next_data(self): │ │ 676 │ │ index = self._next_index() # may raise StopIteration │ │ ❱ 677 │ │ data = self._dataset_fetcher.fetch(index) # may raise StopIteration │ │ 678 │ │ if self._pin_memory: │ │ 679 │ │ │ data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) │ │ 680 │ │ return data │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py:49 in fetch │ │ │ │ 46 │ def fetch(self, possibly_batched_index): │ │ 47 │ │ if self.auto_collation: │ │ 48 │ │ │ if hasattr(self.dataset, "__getitems__") and self.dataset.__getitems__: │ │ ❱ 49 │ │ │ │ data = self.dataset.__getitems__(possibly_batched_index) │ │ 50 │ │ │ else: │ │ 51 │ │ │ │ data = [self.dataset[idx] for idx in possibly_batched_index] │ │ 52 │ │ else: │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2782 in __getitems__ │ │ │ │ 2779 │ │ │ 2780 │ def __getitems__(self, keys: List) -> List: │ │ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │ │ ❱ 2782 │ │ batch = self.__getitem__(keys) │ │ 2783 │ │ n_examples = len(batch[next(iter(batch))]) │ │ 2784 │ │ return [{col: array[i] for col, array in batch.items()} for i in range(n_example │ │ 2785 │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2778 in __getitem__ │ │ │ │ 2775 │ │ │ 2776 │ def __getitem__(self, key): # noqa: F811 │ │ 2777 │ │ """Can be used to index columns (by string names) or rows (by integer index or i │ │ ❱ 2778 │ │ return self._getitem(key) │ │ 2779 │ │ │ 2780 │ def __getitems__(self, keys: List) -> List: │ │ 2781 │ │ """Can be used to get a batch using a list of integers indices.""" │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:2762 in _getitem │ │ │ │ 
2759 │ │ format_kwargs = kwargs["format_kwargs"] if "format_kwargs" in kwargs else self._ │ │ 2760 │ │ format_kwargs = format_kwargs if format_kwargs is not None else {} │ │ 2761 │ │ formatter = get_formatter(format_type, features=self._info.features, **format_kw │ │ ❱ 2762 │ │ pa_subtable = query_table(self._data, key, indices=self._indices if self._indice │ │ 2763 │ │ formatted_output = format_table( │ │ 2764 │ │ │ pa_subtable, key, formatter=formatter, format_columns=format_columns, output │ │ 2765 │ │ ) │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:578 in query_table │ │ │ │ 575 │ │ _check_valid_column_key(key, table.column_names) │ │ 576 │ else: │ │ 577 │ │ size = indices.num_rows if indices is not None else table.num_rows │ │ ❱ 578 │ │ _check_valid_index_key(key, size) │ │ 579 │ # Query the main table │ │ 580 │ if indices is None: │ │ 581 │ │ pa_subtable = _query_table(table, key) │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:531 in │ │ _check_valid_index_key │ │ │ │ 528 │ │ │ _check_valid_index_key(min(key), size=size) │ │ 529 │ elif isinstance(key, Iterable): │ │ 530 │ │ if len(key) > 0: │ │ ❱ 531 │ │ │ _check_valid_index_key(int(max(key)), size=size) │ │ 532 │ │ │ _check_valid_index_key(int(min(key)), size=size) │ │ 533 │ else: │ │ 534 │ │ _raise_bad_key_type(key) │ │ │ │ /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:521 in │ │ _check_valid_index_key │ │ │ │ 518 def _check_valid_index_key(key: Union[int, slice, range, Iterable], size: int) -> None: │ │ 519 │ if isinstance(key, int): │ │ 520 │ │ if (key < 0 and key + size < 0) or (key >= size): │ │ ❱ 521 │ │ │ raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") │ │ 522 │ │ return │ │ 523 │ elif isinstance(key, slice): │ │ 524 │ │ pass ### Steps to reproduce the bug `` import json import os from pprint import pprint import bitsandbytes as bnb import pandas as pd import torch import torch.nn as nn import transformers from datasets import Dataset,load_dataset from peft import ( LoraConfig, PeftConfig, PeftModel, get_peft_model, prepare_model_for_kbit_training ) from transformers import ( AutoConfig, AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, ) os.environ["CUDA_VISIBLE_DEVICES"] = "0" def print_trainable_parameters(model): """ Prints the number of trainable parameters in the model. 
""" trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print( f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" ) MODEL_NAME = "tiiuae/falcon-7b" bnb_config = BitsAndBytesConfig( load_in_4bit = True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, ) model = AutoModelForCausalLM.from_pretrained( MODEL_NAME, device_map = "auto", trust_remote_code = True, quantization_config = bnb_config ) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) tokenizer.pad_token = tokenizer.eos_token model.gradient_checkpointing_enable() model = prepare_model_for_kbit_training(model) config = LoraConfig( r = 16, lora_alpha = 32, target_modules = ["query_key_value"], lora_dropout = 0.05, bias = "none", task_type = "CASUAL_LM" ) model = get_peft_model(model,config) print_trainable_parameters(model) def generate_prompt(data_point): return f""" <human>: {data_point["question"]} <assistant>: {data_point["answer"]} """.strip() def generate_and_tokenize_prompt(data_point): full_prompt = generate_prompt(data_point) tokenized_full_prompt = tokenizer(full_prompt, padding = True, truncation = True,return_tensors = None) return dict({ "input_ids" : tokenized_full_prompt["input_ids"], "attention_mask" : tokenized_full_prompt["attention_mask"] }) data = data["train"].shuffle().map(generate_and_tokenize_prompt, batched = False) OUTPUT_DIR = "experiments" trainings_args = transformers.TrainingArguments( per_device_train_batch_size = 1, gradient_accumulation_steps = 4, num_train_epochs = 1, learning_rate = 2e-4, fp16 = True, save_total_limit = 3, logging_steps = 1, output_dir = OUTPUT_DIR, max_steps = 80, optim = "paged_adamw_8bit", lr_scheduler_type = "cosine", warmup_ratio = 0.05, #remove_unused_columns=True ) trainer = transformers.Trainer( model = model, train_dataset = data, args = trainings_args, data_collator = transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), ) model.config.use_cache = False trainer.train() IndexError: Invalid key: 32 is out of bounds for size 0 DataSet Format is like : [{"question": "How can I create an account?", "answer": "To create an account, click on the 'Sign Up' button on the top right corner of our website and follow the instructions to complete the registration process."}, .... ] ### Expected behavior - ### Environment info !pip install -q pip !pip install -q bitsandbytes==0.39.0 !pip install -q torch==2.0.1 !pip install -q git+https://github.com/huggingface/transformers.git !pip install -q git+https://github.com/huggingface/peft.git !pip install -q git+https://github.com/huggingface/accelerate.git !pip install -q datasets !pip install -q loralib==0.1.1 !pip install -q einops==0.6.1 import json import os from pprint import pprint import bitsandbytes as bnb import pandas as pd import torch import torch.nn as nn import transformers from datasets import Dataset,load_dataset from peft import ( LoraConfig, PeftConfig, PeftModel, get_peft_model, prepare_model_for_kbit_training ) from transformers import ( AutoConfig, AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, ) os.environ["CUDA_VISIBLE_DEVICES"] = "0"
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5946/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5945/comments
https://api.github.com/repos/huggingface/datasets/issues/5945/events
https://github.com/huggingface/datasets/issues/5945
1,754,084,577
I_kwDODunzps5ojTTh
5,945
Failing to upload dataset to the hub
{ "login": "Ar770", "id": 77382661, "node_id": "MDQ6VXNlcjc3MzgyNjYx", "avatar_url": "https://avatars.githubusercontent.com/u/77382661?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ar770", "html_url": "https://github.com/Ar770", "followers_url": "https://api.github.com/users/Ar770/followers", "following_url": "https://api.github.com/users/Ar770/following{/other_user}", "gists_url": "https://api.github.com/users/Ar770/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ar770/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ar770/subscriptions", "organizations_url": "https://api.github.com/users/Ar770/orgs", "repos_url": "https://api.github.com/users/Ar770/repos", "events_url": "https://api.github.com/users/Ar770/events{/privacy}", "received_events_url": "https://api.github.com/users/Ar770/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-13T05:46:46
2023-07-24T11:56:40
2023-07-24T11:56:40
NONE
null
### Describe the bug Trying to upload a dataset of hundreds of thousands of audio samples (the total volume is not very large, 60 GB) to the Hub with `push_to_hub` doesn't work. From time to time one piece of the data (a Parquet shard) gets pushed, and then I get RemoteDisconnected even though my internet is stable. Please help; I've been trying to upload the dataset for almost a week. Thanks ### Steps to reproduce the bug Not relevant. ### Expected behavior Be able to upload the dataset. ### Environment info python: 3.9
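A minimal retry sketch for this situation, assuming the dataset has already been built and saved locally; the path and repo id below are placeholders, and (as the logs of issue 5990 further down show) re-calling `push_to_hub` resumes the shard upload rather than restarting it:

```python
import time

from datasets import load_from_disk

ds = load_from_disk("path/to/local_audio_dataset")  # placeholder path

for attempt in range(20):
    try:
        # placeholder repo id; smaller shards mean less work lost per disconnect
        ds.push_to_hub("username/audio-dataset", max_shard_size="500MB")
        break
    except Exception as err:  # e.g. ConnectionError / RemoteDisconnected
        print(f"Attempt {attempt} failed ({err}); retrying in 60s")
        time.sleep(60)
```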
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5945/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5944/comments
https://api.github.com/repos/huggingface/datasets/issues/5944/events
https://github.com/huggingface/datasets/pull/5944
1,752,882,200
PR_kwDODunzps5Sx7O4
5,944
Arrow dataset builder to be able to load and stream Arrow datasets
{ "login": "mariusz-jachimowicz-83", "id": 10278877, "node_id": "MDQ6VXNlcjEwMjc4ODc3", "avatar_url": "https://avatars.githubusercontent.com/u/10278877?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariusz-jachimowicz-83", "html_url": "https://github.com/mariusz-jachimowicz-83", "followers_url": "https://api.github.com/users/mariusz-jachimowicz-83/followers", "following_url": "https://api.github.com/users/mariusz-jachimowicz-83/following{/other_user}", "gists_url": "https://api.github.com/users/mariusz-jachimowicz-83/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariusz-jachimowicz-83/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariusz-jachimowicz-83/subscriptions", "organizations_url": "https://api.github.com/users/mariusz-jachimowicz-83/orgs", "repos_url": "https://api.github.com/users/mariusz-jachimowicz-83/repos", "events_url": "https://api.github.com/users/mariusz-jachimowicz-83/events{/privacy}", "received_events_url": "https://api.github.com/users/mariusz-jachimowicz-83/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-12T14:21:49
2023-06-13T17:36:02
2023-06-13T17:29:01
CONTRIBUTOR
null
This adds an Arrow dataset builder so that already preprocessed Arrow files can be loaded and streamed. It's related to https://github.com/huggingface/datasets/issues/3035
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5944/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5944", "html_url": "https://github.com/huggingface/datasets/pull/5944", "diff_url": "https://github.com/huggingface/datasets/pull/5944.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5944.patch", "merged_at": "2023-06-13T17:29:01" }
true
https://api.github.com/repos/huggingface/datasets/issues/5942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5942/comments
https://api.github.com/repos/huggingface/datasets/issues/5942/events
https://github.com/huggingface/datasets/pull/5942
1,752,021,681
PR_kwDODunzps5Su-V4
5,942
Pass datasets-cli additional args as kwargs to DatasetBuilder in `run_beam.py`
{ "login": "graelo", "id": 84066822, "node_id": "MDQ6VXNlcjg0MDY2ODIy", "avatar_url": "https://avatars.githubusercontent.com/u/84066822?v=4", "gravatar_id": "", "url": "https://api.github.com/users/graelo", "html_url": "https://github.com/graelo", "followers_url": "https://api.github.com/users/graelo/followers", "following_url": "https://api.github.com/users/graelo/following{/other_user}", "gists_url": "https://api.github.com/users/graelo/gists{/gist_id}", "starred_url": "https://api.github.com/users/graelo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/graelo/subscriptions", "organizations_url": "https://api.github.com/users/graelo/orgs", "repos_url": "https://api.github.com/users/graelo/repos", "events_url": "https://api.github.com/users/graelo/events{/privacy}", "received_events_url": "https://api.github.com/users/graelo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2023-06-12T06:50:50
2023-06-30T09:15:00
null
NONE
null
Hi, Following this <https://discuss.huggingface.co/t/how-to-preprocess-a-wikipedia-dataset-using-dataflowrunner/41991/3>, here is a simple PR to pass any additional `datasets-cli` args as kwargs to the DatasetBuilder in `run_beam.py`. I also took the liberty of adding the missing setup steps to the `beam.mdx` docs in order to help everyone. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5942/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5942", "html_url": "https://github.com/huggingface/datasets/pull/5942", "diff_url": "https://github.com/huggingface/datasets/pull/5942.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5942.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5941/comments
https://api.github.com/repos/huggingface/datasets/issues/5941/events
https://github.com/huggingface/datasets/issues/5941
1,751,838,897
I_kwDODunzps5oavCx
5,941
Load Data Sets Too Slow In Train Seq2seq Model
{ "login": "xyx361100238", "id": 19569322, "node_id": "MDQ6VXNlcjE5NTY5MzIy", "avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xyx361100238", "html_url": "https://github.com/xyx361100238", "followers_url": "https://api.github.com/users/xyx361100238/followers", "following_url": "https://api.github.com/users/xyx361100238/following{/other_user}", "gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}", "starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions", "organizations_url": "https://api.github.com/users/xyx361100238/orgs", "repos_url": "https://api.github.com/users/xyx361100238/repos", "events_url": "https://api.github.com/users/xyx361100238/events{/privacy}", "received_events_url": "https://api.github.com/users/xyx361100238/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
10
2023-06-12T03:58:43
2023-08-15T02:52:22
2023-08-15T02:52:22
NONE
null
### Describe the bug The step 'Generating train split' in load_dataset is too slow: ![image](https://github.com/huggingface/datasets/assets/19569322/d9b08eee-95fe-4741-a346-b70416c948f8) ### Steps to reproduce the bug Data: own data, 16K16B mono wav Official script: [run_speech_recognition_seq2seq.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py) Added code: if data_args.data_path is not None: print(data_args.data_path) raw_datasets = load_dataset("audiofolder", data_dir=data_args.data_path, cache_dir=model_args.cache_dir) raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000)) raw_datasets = raw_datasets["train"].train_test_split(test_size=0.005, shuffle=True) (change cache_dir to another path, e.g. /DATA/cache) ### Expected behavior Load data fast, at least 1000+ examples/s: `Generating train split: 387875 examples [32:24:45, 1154.83 examples/s]` ### Environment info - `transformers` version: 4.28.0.dev0 - Platform: Linux-5.4.0-149-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.16 - Huggingface_hub version: 0.13.2 - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5941/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5990/comments
https://api.github.com/repos/huggingface/datasets/issues/5990/events
https://github.com/huggingface/datasets/issues/5990
1,774,389,854
I_kwDODunzps5pwwpe
5,990
Pushing a large dataset on the hub consistently hangs
{ "login": "AntreasAntoniou", "id": 10792502, "node_id": "MDQ6VXNlcjEwNzkyNTAy", "avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AntreasAntoniou", "html_url": "https://github.com/AntreasAntoniou", "followers_url": "https://api.github.com/users/AntreasAntoniou/followers", "following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}", "gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}", "starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions", "organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs", "repos_url": "https://api.github.com/users/AntreasAntoniou/repos", "events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}", "received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
44
2023-06-10T14:46:47
2023-08-17T09:54:11
null
NONE
null
### Describe the bug Once I have locally built a large dataset that I want to push to hub, I use the recommended approach of .push_to_hub to get the dataset on the hub, and after pushing a few shards, it consistently hangs. This has happened over 40 times over the past week, and despite my best efforts to try and catch this happening and kill a process and restart, it seems to be extremely time wasting -- so I came to you to report this and to seek help. I already tried installing hf_transfer, but it doesn't support Byte file uploads so I uninstalled it. ### Reproduction ```python import multiprocessing as mp import pathlib from math import ceil import datasets import numpy as np from tqdm.auto import tqdm from tali.data.data import select_subtitles_between_timestamps from tali.utils import load_json tali_dataset_dir = "/data/" if __name__ == "__main__": full_dataset = datasets.load_dataset( "Antreas/TALI", num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir ) def data_generator(set_name, percentage: float = 1.0): dataset = full_dataset[set_name] for item in tqdm(dataset): video_list = item["youtube_content_video"] video_list = np.random.choice( video_list, int(ceil(len(video_list) * percentage)) ) if len(video_list) == 0: continue captions = item["youtube_subtitle_text"] captions = select_subtitles_between_timestamps( subtitle_dict=load_json( captions.replace( "/data/", tali_dataset_dir, ) ), starting_timestamp=0, ending_timestamp=100000000, ) for video_path in video_list: temp_path = video_path.replace("/data/", tali_dataset_dir) video_path_actual: pathlib.Path = pathlib.Path(temp_path) if video_path_actual.exists(): item["youtube_content_video"] = open(video_path_actual, "rb").read() item["youtube_subtitle_text"] = captions yield item train_generator = lambda: data_generator("train", percentage=0.1) val_generator = lambda: data_generator("val") test_generator = lambda: data_generator("test") train_data = datasets.Dataset.from_generator( train_generator, num_proc=mp.cpu_count(), writer_batch_size=5000, cache_dir=tali_dataset_dir, ) val_data = datasets.Dataset.from_generator( val_generator, writer_batch_size=5000, num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir, ) test_data = datasets.Dataset.from_generator( test_generator, writer_batch_size=5000, num_proc=mp.cpu_count(), cache_dir=tali_dataset_dir, ) dataset = datasets.DatasetDict( { "train": train_data, "val": val_data, "test": test_data, } ) succesful_competion = False while not succesful_competion: try: dataset.push_to_hub(repo_id="Antreas/TALI-small", max_shard_size="5GB") succesful_competion = True except Exception as e: print(e) ``` ### Logs ```shell Pushing dataset shards to the dataset hub: 33%|██████████████████████████████████████▎ | 7/21 [24:33<49:06, 210.45s/it] Error while uploading 'data/val-00007-of-00021-6b216a984af1a4c8.parquet' to the Hub. Pushing split train to the Hub. Resuming upload of the dataset shards. Pushing dataset shards to the dataset hub: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [42:10<00:00, 55.01s/it] Pushing split val to the Hub. Resuming upload of the dataset shards. 
Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 1.55ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.51s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.39ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:30<00:00, 30.19s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.28ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:24<00:00, 24.08s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.42ba/s] Upload 1 LFS files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:23<00:00, 23.97s/it] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.49ba/s] Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.54ba/s^ Upload 1 LFS files: 0%| | 0/1 [04:42<?, ?it/s] Pushing dataset shards to the dataset hub: 52%|████████████████████████████████████████████████████████████▏ | 11/21 [17:23<15:48, 94.82s/it] That's where it got stuck ``` ### System info ```shell - huggingface_hub version: 0.15.1 - Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.35 - Python version: 3.10.11 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /root/.cache/huggingface/token - Has saved token ?: True - Who am I ?: Antreas - Configured git credential helpers: store - FastAI: N/A - Tensorflow: N/A - Torch: 2.1.0.dev20230606+cu121 - Jinja2: 3.1.2 - Graphviz: N/A - Pydot: N/A - Pillow: 9.5.0 - hf_transfer: N/A - gradio: N/A - numpy: 1.24.3 - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets - HF_TOKEN_PATH: /root/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5990/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5939/comments
https://api.github.com/repos/huggingface/datasets/issues/5939/events
https://github.com/huggingface/datasets/issues/5939
1,749,955,883
I_kwDODunzps5oTjUr
5,939
.
{ "login": "flckv", "id": 103381497, "node_id": "U_kgDOBil5-Q", "avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flckv", "html_url": "https://github.com/flckv", "followers_url": "https://api.github.com/users/flckv/followers", "following_url": "https://api.github.com/users/flckv/following{/other_user}", "gists_url": "https://api.github.com/users/flckv/gists{/gist_id}", "starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flckv/subscriptions", "organizations_url": "https://api.github.com/users/flckv/orgs", "repos_url": "https://api.github.com/users/flckv/repos", "events_url": "https://api.github.com/users/flckv/events{/privacy}", "received_events_url": "https://api.github.com/users/flckv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-06-09T14:01:34
2023-06-12T12:19:34
2023-06-12T12:19:19
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5939/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5938/comments
https://api.github.com/repos/huggingface/datasets/issues/5938/events
https://github.com/huggingface/datasets/pull/5938
1,749,462,851
PR_kwDODunzps5SmbkI
5,938
Make get_from_cache use custom temp filename that is locked
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-06-09T09:01:13
2023-06-14T13:35:38
2023-06-14T13:27:24
MEMBER
null
This PR ensures that the temporary filename created is the same as the one that is locked while writing to the cache. This PR stops using `tempfile` to generate the temporary filename. Additionally, the behavior is now aligned for both `resume_download` `True` and `False`. Refactor temp_file_manager so that it uses the filename that is locked: - Use: `cache_path + ".incomplete"`, when the locked one is `cache_path + ".lock"` Before, it was using `tempfile` inside `cache_dir`, which was not locked: although a name collision was very improbable (8 random characters), it was not impossible with a huge number of concurrent processes. Maybe related to "Stale file handle" issues caused by `tempfile`: - [ ] https://huggingface.co/datasets/tapaco/discussions/4 - [ ] https://huggingface.co/datasets/xcsr/discussions/1 - [ ] https://huggingface.co/datasets/covost2/discussions/3 ``` Error code: ConfigNamesError Exception: OSError Message: [Errno 116] Stale file handle Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 61, in compute_config_names_response for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names dataset_module = dataset_module_factory( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1219, in dataset_module_factory raise e1 from None File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1188, in dataset_module_factory return HubDatasetModuleFactoryWithScript( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 907, in get_module dataset_readme_path = self.download_dataset_readme_file() File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 896, in download_dataset_readme_file return cached_path( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path output_path = get_from_cache( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 611, in get_from_cache http_get( File "/usr/local/lib/python3.9/tempfile.py", line 496, in __exit__ result = self.file.__exit__(exc, value, tb) OSError: [Errno 116] Stale file handle ``` - the stale file handle error can be raised when `tempfile` tries to close (when exiting its context manager) a file that has already been closed by another process - note that `tempfile` filenames are randomly generated but not locked in our code CC: @severo
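A simplified sketch of the pattern described in this PR (not the actual `file_utils.py` code): the temporary path is derived deterministically from `cache_path`, so the same lock covers both the temporary and the final file; `fetch` is a placeholder for the real download routine.

```python
import os

from filelock import FileLock


def get_from_cache_sketch(cache_path: str, fetch) -> str:
    lock_path = cache_path + ".lock"
    incomplete_path = cache_path + ".incomplete"  # deterministic, unlike a random tempfile name
    with FileLock(lock_path):
        if os.path.exists(cache_path):  # another process already finished the download
            return cache_path
        with open(incomplete_path, "wb") as temp_file:
            fetch(temp_file)  # write the payload while holding the lock
        os.replace(incomplete_path, cache_path)  # atomic move into place
    return cache_path
```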
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5938/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5938/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5938", "html_url": "https://github.com/huggingface/datasets/pull/5938", "diff_url": "https://github.com/huggingface/datasets/pull/5938.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5938.patch", "merged_at": "2023-06-14T13:27:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/5937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5937/comments
https://api.github.com/repos/huggingface/datasets/issues/5937/events
https://github.com/huggingface/datasets/pull/5937
1,749,388,597
PR_kwDODunzps5SmLIs
5,937
Avoid parallel redownload in cache
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-06-09T08:18:36
2023-06-14T12:30:59
2023-06-14T12:23:57
MEMBER
null
Avoid parallel redownload in the cache by retrying inside the lock if the path already exists.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5937/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5937", "html_url": "https://github.com/huggingface/datasets/pull/5937", "diff_url": "https://github.com/huggingface/datasets/pull/5937.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5937.patch", "merged_at": "2023-06-14T12:23:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/5936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5936/comments
https://api.github.com/repos/huggingface/datasets/issues/5936/events
https://github.com/huggingface/datasets/issues/5936
1,748,424,388
I_kwDODunzps5oNtbE
5,936
Sequence of array not supported for most dtype
{ "login": "qgallouedec", "id": 45557362, "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qgallouedec", "html_url": "https://github.com/qgallouedec", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "repos_url": "https://api.github.com/users/qgallouedec/repos", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-08T18:18:07
2023-06-14T15:03:34
2023-06-14T15:03:34
CONTRIBUTOR
null
### Describe the bug Create a dataset composed of sequence of array fails for most dtypes (see code below). ### Steps to reproduce the bug ```python from datasets import Sequence, Array2D, Features, Dataset import numpy as np for dtype in [ "bool", # ok "int8", # failed "int16", # failed "int32", # failed "int64", # ok "uint8", # failed "uint16", # failed "uint32", # failed "uint64", # failed "float16", # failed "float32", # failed "float64", # ok ]: features = Features({"foo": Sequence(Array2D(dtype=dtype, shape=(2, 2)))}) sequence = [ [[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]], ] array = np.array(sequence, dtype=dtype) try: dataset = Dataset.from_dict({"foo": [array]}, features=features) except Exception as e: print(f"Failed for dtype={dtype}") ``` Traceback for `dtype="int8"`: ``` Traceback (most recent call last): File "/home/qgallouedec/datasets/a.py", line 29, in <module> raise e File "/home/qgallouedec/datasets/a.py", line 26, in <module> dataset = Dataset.from_dict({"foo": [array]}, features=features) File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 899, in from_dict pa_table = InMemoryTable.from_pydict(mapping=mapping) File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 799, in from_pydict return cls(pa.Table.from_pydict(*args, **kwargs)) File "pyarrow/table.pxi", line 3725, in pyarrow.lib.Table.from_pydict File "pyarrow/table.pxi", line 5254, in pyarrow.lib._from_pydict File "pyarrow/array.pxi", line 350, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 236, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 204, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper return func(array, *args, **kwargs) File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2091, in cast_array_to_feature casted_values = _c(array.values, feature.feature) File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper return func(array, *args, **kwargs) File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2139, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper return func(array, *args, **kwargs) File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1967, in array_cast return pa_type.wrap_array(array) File "pyarrow/types.pxi", line 879, in pyarrow.lib.BaseExtensionType.wrap_array TypeError: Incompatible storage type for extension<arrow.py_extension_type<Array2DExtensionType>>: expected list<item: list<item: int8>>, got list<item: list<item: int64>> ``` ### Expected behavior Not to fail. ### Environment info - Python 3.10.6 - datasets: master branch - Numpy: 1.23.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5936/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5935
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5935/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5935/comments
https://api.github.com/repos/huggingface/datasets/issues/5935/events
https://github.com/huggingface/datasets/pull/5935
1,748,090,220
PR_kwDODunzps5Sh9Mg
5,935
Better row group size in push_to_hub
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
10
2023-06-08T15:01:15
2023-06-09T17:47:37
2023-06-09T17:40:09
MEMBER
null
This is a very simple change that makes `to_parquet` use a more reasonable row group size for image and audio datasets. This is especially useful for `push_to_hub` and will provide a better experience with the dataset viewer on the Hugging Face Hub.
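For context, the row group size is a knob of the Parquet writer; a hedged illustration with pyarrow (the concrete value that `datasets` now picks for image and audio data is not stated here):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Toy table standing in for an image dataset shard.
table = pa.table({"image": [b"\x00" * 1024] * 10_000, "label": list(range(10_000))})

# Smaller row groups let readers such as the dataset viewer fetch a handful of
# rows without downloading and decoding a very large chunk of the file.
pq.write_table(table, "shard.parquet", row_group_size=100)
```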
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5935/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5935/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5935", "html_url": "https://github.com/huggingface/datasets/pull/5935", "diff_url": "https://github.com/huggingface/datasets/pull/5935.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5935.patch", "merged_at": "2023-06-09T17:40:09" }
true
https://api.github.com/repos/huggingface/datasets/issues/5934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5934/comments
https://api.github.com/repos/huggingface/datasets/issues/5934/events
https://github.com/huggingface/datasets/pull/5934
1,747,904,840
PR_kwDODunzps5ShUxQ
5,934
Modify levels of some logging messages
{ "login": "Laurent2916", "id": 21087104, "node_id": "MDQ6VXNlcjIxMDg3MTA0", "avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Laurent2916", "html_url": "https://github.com/Laurent2916", "followers_url": "https://api.github.com/users/Laurent2916/followers", "following_url": "https://api.github.com/users/Laurent2916/following{/other_user}", "gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}", "starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions", "organizations_url": "https://api.github.com/users/Laurent2916/orgs", "repos_url": "https://api.github.com/users/Laurent2916/repos", "events_url": "https://api.github.com/users/Laurent2916/events{/privacy}", "received_events_url": "https://api.github.com/users/Laurent2916/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-06-08T13:31:44
2023-07-12T18:21:03
2023-07-12T18:21:02
CONTRIBUTOR
null
Some warning messages didn't quite sound like warnings, so I lowered their logging levels to info.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5934/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5934", "html_url": "https://github.com/huggingface/datasets/pull/5934", "diff_url": "https://github.com/huggingface/datasets/pull/5934.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5934.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5933/comments
https://api.github.com/repos/huggingface/datasets/issues/5933/events
https://github.com/huggingface/datasets/pull/5933
1,747,382,500
PR_kwDODunzps5Sfi5J
5,933
Fix `to_numpy` when None values in the sequence
{ "login": "qgallouedec", "id": 45557362, "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qgallouedec", "html_url": "https://github.com/qgallouedec", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "repos_url": "https://api.github.com/users/qgallouedec/repos", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-08T08:38:56
2023-06-09T13:49:41
2023-06-09T13:23:48
CONTRIBUTOR
null
Closes #5927 I realized that the error was overlooked during testing due to the presence of only one None value in the sequence. Unfortunately, that was the only case where the function worked as expected; when the sequence contained more than one None value, the function failed. Consequently, I've updated the tests to include sequences with multiple None values.
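The subtlety with multiple `None` values comes from `np.insert` semantics: the insertion indices refer to positions in the original array, not in the final padded array. A small illustration of the behavior (not the PR's actual diff):

```python
import numpy as np

arr = np.zeros((1, 2, 2))  # one real (2, 2) entry

# To obtain [real, NaN, NaN], both NaN blocks are inserted at index 1 of the
# *original* one-element array:
out = np.insert(arr, [1, 1], np.nan, axis=0)
print(out.shape)  # (3, 2, 2)

# Passing indices computed on the final array instead (e.g. [1, 2]) raises an
# IndexError as soon as there is more than one None value.
```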
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5933/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5933/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5933", "html_url": "https://github.com/huggingface/datasets/pull/5933", "diff_url": "https://github.com/huggingface/datasets/pull/5933.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5933.patch", "merged_at": "2023-06-09T13:23:48" }
true
https://api.github.com/repos/huggingface/datasets/issues/5932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5932/comments
https://api.github.com/repos/huggingface/datasets/issues/5932/events
https://github.com/huggingface/datasets/pull/5932
1,746,249,161
PR_kwDODunzps5Sbrzo
5,932
[doc build] Use secrets
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-07T16:09:39
2023-06-09T10:16:58
2023-06-09T09:53:16
CONTRIBUTOR
null
Companion PR to https://github.com/huggingface/doc-builder/pull/379
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5932/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5932/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5932", "html_url": "https://github.com/huggingface/datasets/pull/5932", "diff_url": "https://github.com/huggingface/datasets/pull/5932.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5932.patch", "merged_at": "2023-06-09T09:53:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/5931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5931/comments
https://api.github.com/repos/huggingface/datasets/issues/5931/events
https://github.com/huggingface/datasets/issues/5931
1,745,408,784
I_kwDODunzps5oCNMQ
5,931
`datasets.map` not reusing cached copy by default
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-06-07T09:03:33
2023-06-21T16:15:40
2023-06-21T16:15:40
CONTRIBUTOR
null
### Describe the bug When I load the dataset from a local directory, its cached copy is picked up after the first time. However, for the `map` operation, the operation is applied again and the cached copy is not picked up. Is there any way to pick up the cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. Are there any other solutions for this? One more thing: my dataset occupies 6 GB of storage after I use `map`; is there any way I can reduce that memory usage? ### Steps to reproduce the bug ``` # make sure that dataset decodes audio with correct sampling rate dataset_sampling_rate = next(iter(self.raw_datasets.values())).features["audio"].sampling_rate if dataset_sampling_rate != self.feature_extractor.sampling_rate: self.raw_datasets = self.raw_datasets.cast_column( "audio", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate) ) vectorized_datasets = self.raw_datasets.map( self.prepare_dataset, remove_columns=next(iter(self.raw_datasets.values())).column_names, num_proc=self.num_workers, desc="preprocess datasets", ) # filter data that is longer than max_input_length self.vectorized_datasets = vectorized_datasets.filter( self.is_audio_in_length_range, num_proc=self.num_workers, input_columns=["input_length"], ) def prepare_dataset(self, batch): # load audio sample = batch["audio"] inputs = self.feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"]) batch["input_values"] = inputs.input_values[0] batch["input_length"] = len(batch["input_values"]) batch["labels"] = self.tokenizer(batch["target_text"]).input_ids return batch ``` ### Expected behavior `map` to use the cached copy and, if possible, an alternative technique to reduce memory usage after using `map` ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.2
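For the `save_to_disk` route mentioned above, a minimal self-contained sketch (the toy dataset and path are placeholders):

```python
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"a": [1, 2, 3]}).map(lambda x: {"b": x["a"] * 2})

# Persist the fully processed dataset once, after the last map/filter...
ds.save_to_disk("preprocessed/vectorized")

# ...and in later runs reload it instead of re-running the preprocessing.
ds = load_from_disk("preprocessed/vectorized")
```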
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5931/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5931/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5930/comments
https://api.github.com/repos/huggingface/datasets/issues/5930/events
https://github.com/huggingface/datasets/issues/5930
1,745,184,395
I_kwDODunzps5oBWaL
5,930
loading private custom dataset script - authentication error
{ "login": "flckv", "id": 103381497, "node_id": "U_kgDOBil5-Q", "avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flckv", "html_url": "https://github.com/flckv", "followers_url": "https://api.github.com/users/flckv/followers", "following_url": "https://api.github.com/users/flckv/following{/other_user}", "gists_url": "https://api.github.com/users/flckv/gists{/gist_id}", "starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flckv/subscriptions", "organizations_url": "https://api.github.com/users/flckv/orgs", "repos_url": "https://api.github.com/users/flckv/repos", "events_url": "https://api.github.com/users/flckv/events{/privacy}", "received_events_url": "https://api.github.com/users/flckv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-06-07T06:58:23
2023-06-15T14:49:21
2023-06-15T14:49:20
NONE
null
### Describe the bug Training a model with my custom dataset, stored on HuggingFace and loaded with a loading script, requires authentication, but I am not sure how to provide it. I am logged in both in the terminal and in the browser. I receive this error: /python3.8/site-packages/datasets/utils/file_utils.py", line 566, in get_from_cache raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels `(ConnectionError('Unauthorized for URL `https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels. Please use the parameter `**`use_auth_token=True`**` after logging in with `**`huggingface-cli login`**`')) When I added `use_auth_token=True` and logged in via the terminal, I then received the same error in a different format: raise ConnectionError(f"`Couldn't reach {url} (error {response.status_code}`)") ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels (`error 401`) ### Steps to reproduce the bug 1. cloned the transformers library locally: https://huggingface.co/docs/transformers/v4.15.0/examples : > git clone https://github.com/huggingface/transformers > cd transformers > pip install . > cd /transformers/examples/pytorch/audio-classification > pip install -r requirements.txt 2. created a **loading script** > https://huggingface.co/docs/datasets/dataset_script and added it next to the dataset 3. uploaded the **private custom dataset** with the loading script to HuggingFace > https://huggingface.co/docs/datasets/dataset_script 4. added the dataset loading script to the **local directory** in the above cloned transformers library: > cd /transformers/examples/pytorch/audio-classification 5. logged in to HuggingFace on the local terminal with: > **huggingface-cli login** 6. ran the model with the custom dataset stored on HuggingFace with the code from https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md cd /transformers/examples/pytorch/audio-classification > python run_audio_classification.py \ > --model_name_or_path facebook/wav2vec2-base \ > --output_dir l/users/flck/outputs/wav2vec2-base-s \ > --overwrite_output_dir \ > --dataset_name s \ > --dataset_config_name s \ > --remove_unused_columns False \ > --do_train \ > --do_eval \ > --fp16 \ > --learning_rate 3e-5 \ > --max_length_seconds 1 \ > --attention_mask False \ > --warmup_ratio 0.1 \ > --num_train_epochs 5 \ > --per_device_train_batch_size 32 \ > --gradient_accumulation_steps 4 \ > --per_device_eval_batch_size 32 \ > --dataloader_num_workers 4 \ > --logging_strategy steps \ > --logging_steps 10 \ > --evaluation_strategy epoch \ > --save_strategy epoch \ > --load_best_model_at_end True \ > --metric_for_best_model accuracy \ > --save_total_limit 3 \ > --seed 0 \ > --push_to_hub \ > **--use_auth_token=True** ### Expected behavior Be able to train a model with the https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/run_audio_classification.py script using a private custom dataset stored on HuggingFace.
### Environment info - datasets version: 2.12.0 - `transformers` version: 4.30.0.dev0 - Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) Versions of relevant libraries: [pip3] numpy==1.24.3 [pip3] torch==2.0.1 [pip3] torchaudio==2.0.2 [conda] numpy 1.24.3 pypi_0 pypi [conda] torch 2.0.1 pypi_0 pypi [conda] torchaudio 2.0.2 pypi_0 pypi
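For reference, the authentication can also be checked directly from a Python session, independently of `run_audio_classification.py`; a minimal sketch, where the dataset id is inferred from the URLs above and a prior `huggingface-cli login` is assumed:

```python
from datasets import load_dataset

# Only works for accounts that have access to the private repo.
ds = load_dataset("fkov/s", use_auth_token=True)
```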
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5930/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5929/comments
https://api.github.com/repos/huggingface/datasets/issues/5929/events
https://github.com/huggingface/datasets/issues/5929
1,744,478,456
I_kwDODunzps5n-qD4
5,929
Importing PyTorch reduces multiprocessing performance for map
{ "login": "Maxscha", "id": 12814709, "node_id": "MDQ6VXNlcjEyODE0NzA5", "avatar_url": "https://avatars.githubusercontent.com/u/12814709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Maxscha", "html_url": "https://github.com/Maxscha", "followers_url": "https://api.github.com/users/Maxscha/followers", "following_url": "https://api.github.com/users/Maxscha/following{/other_user}", "gists_url": "https://api.github.com/users/Maxscha/gists{/gist_id}", "starred_url": "https://api.github.com/users/Maxscha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Maxscha/subscriptions", "organizations_url": "https://api.github.com/users/Maxscha/orgs", "repos_url": "https://api.github.com/users/Maxscha/repos", "events_url": "https://api.github.com/users/Maxscha/events{/privacy}", "received_events_url": "https://api.github.com/users/Maxscha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-06-06T19:42:25
2023-06-16T13:09:12
2023-06-16T13:09:12
NONE
null
### Describe the bug I noticed that the performance of my dataset preprocessing with `map(...,num_proc=32)` decreases when PyTorch is imported. ### Steps to reproduce the bug I created two example scripts to reproduce this behavior: ``` import datasets datasets.disable_caching() from datasets import Dataset import time PROC=32 if __name__ == "__main__": dataset = [True] * 10000000 dataset = Dataset.from_dict({'train': dataset}) start = time.time() dataset.map(lambda x: x, num_proc=PROC) end = time.time() print(end - start) ``` This takes around 4 seconds on my machine, while the same code, but with an `import torch`: ``` import datasets datasets.disable_caching() from datasets import Dataset import time import torch PROC=32 if __name__ == "__main__": dataset = [True] * 10000000 dataset = Dataset.from_dict({'train': dataset}) start = time.time() dataset.map(lambda x: x, num_proc=PROC) end = time.time() print(end - start) ``` takes around 22 seconds. ### Expected behavior I would expect the import of torch not to have such a significant effect on the performance of `map` using multiprocessing. ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35 - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.2 - torch: 2.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5929/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5928/comments
https://api.github.com/repos/huggingface/datasets/issues/5928/events
https://github.com/huggingface/datasets/pull/5928
1,744,098,371
PR_kwDODunzps5SUXPC
5,928
Fix link to quickstart docs in README.md
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-06T15:23:01
2023-06-06T15:52:34
2023-06-06T15:43:53
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5928/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5928", "html_url": "https://github.com/huggingface/datasets/pull/5928", "diff_url": "https://github.com/huggingface/datasets/pull/5928.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5928.patch", "merged_at": "2023-06-06T15:43:53" }
true
https://api.github.com/repos/huggingface/datasets/issues/5927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5927/comments
https://api.github.com/repos/huggingface/datasets/issues/5927/events
https://github.com/huggingface/datasets/issues/5927
1,744,009,032
I_kwDODunzps5n83dI
5,927
`IndexError` when indexing `Sequence` of `Array2D` with `None` values
{ "login": "qgallouedec", "id": 45557362, "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qgallouedec", "html_url": "https://github.com/qgallouedec", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "repos_url": "https://api.github.com/users/qgallouedec/repos", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-06-06T14:36:22
2023-06-13T12:39:39
2023-06-09T13:23:50
CONTRIBUTOR
null
### Describe the bug Having `None` values in a `Sequence` of `ArrayND` fails. ### Steps to reproduce the bug ```python from datasets import Array2D, Dataset, Features, Sequence data = [ [ [[0]], None, None, ] ] feature = Sequence(Array2D((1, 1), dtype="int64")) dataset = Dataset.from_dict({"a": data}, features=Features({"a": feature})) dataset[0] # error raised only when indexing ``` ``` Traceback (most recent call last): File "/Users/quentingallouedec/gia/c.py", line 13, in <module> dataset[0] # error raised only when indexing File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2658, in __getitem__ return self._getitem(key) File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2643, in _getitem formatted_output = format_table( File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 634, in format_table return formatter(pa_table, query_type=query_type) File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 406, in __call__ return self.format_row(pa_table) File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 441, in format_row row = self.python_arrow_extractor().extract_row(pa_table) File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 144, in extract_row return _unnest(pa_table.to_pydict()) File "pyarrow/table.pxi", line 4146, in pyarrow.lib.Table.to_pydict File "pyarrow/table.pxi", line 1312, in pyarrow.lib.ChunkedArray.to_pylist File "pyarrow/array.pxi", line 1521, in pyarrow.lib.Array.to_pylist File "pyarrow/scalar.pxi", line 675, in pyarrow.lib.ListScalar.as_py File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 760, in to_pylist return self.to_numpy(zero_copy_only=zero_copy_only).tolist() File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 725, in to_numpy numpy_arr = np.insert(numpy_arr.astype(np.float64), null_indices, np.nan, axis=0) File "<__array_function__ internals>", line 200, in insert File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/numpy/lib/function_base.py", line 5426, in insert old_mask[indices] = False IndexError: index 3 is out of bounds for axis 0 with size 3 ``` AFAIK, the problem only occurs when you use a `Sequence` of `ArrayND`. I strongly suspect that the problem comes from this line, where `np.insert` is misused: https://github.com/huggingface/datasets/blob/02ee418831aba68d0be93227bce8b3f42ef8980f/src/datasets/features/features.py#L729 To put it simply, you want something that does this: ```python import numpy as np numpy_arr = np.zeros((1, 1, 1)) null_indices = np.array([1, 2]) np.insert(numpy_arr, null_indices, np.nan, axis=0) # raises an error, instead of outputting # array([[[ 0.]], # [[nan]], # [[nan]]]) ``` ### Expected behavior The previous code should not raise an error. ### Environment info - Python 3.10.11 - datasets 2.10.0 - pyarrow 12.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5927/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5926/comments
https://api.github.com/repos/huggingface/datasets/issues/5926/events
https://github.com/huggingface/datasets/issues/5926
1,743,922,028
I_kwDODunzps5n8iNs
5,926
Uncaught exception when generating the splits from a dataset that is missing data
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
2023-06-06T13:51:01
2023-06-07T07:53:16
null
CONTRIBUTOR
null
### Describe the bug Dataset https://huggingface.co/datasets/blog_authorship_corpus has an issue with its hosting platform, since https://drive.google.com/u/0/uc?id=1cGy4RNDV87ZHEXbiozABr9gsSrZpPaPz&export=download returns a 404 error. But when trying to generate the split names, we get an exception which is not correctly caught. Seen originally in https://github.com/huggingface/datasets-server/blob/adbdcd6710ffed4e2eb2e4cd905b5e0dff530a15/services/worker/src/worker/job_runners/config/parquet_and_info.py#L435 ### Steps to reproduce the bug ```python >>> from datasets import StreamingDownloadManager, load_dataset_builder >>> builder = load_dataset_builder(path="blog_authorship_corpus") Downloading builder script: 100%|██████████| 5.60k/5.60k [00:00<00:00, 23.1MB/s] Downloading metadata: 100%|██████████| 2.81k/2.81k [00:00<00:00, 14.7MB/s] Downloading readme: 100%|██████████| 7.30k/7.30k [00:00<00:00, 30.8MB/s] >>> dl_manager = StreamingDownloadManager(base_path=builder.base_path) >>> builder._split_generators(dl_manager) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/blog_authorship_corpus/6f5d78241afd8313111956f877a57db7a0e9fc6718255dc85df0928197feb683/blog_authorship_corpus.py", line 79, in _split_generators data = dl_manager.download_and_extract(_DATA_URL) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1087, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1039, in extract urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 435, in map_nested return function(data_struct) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1044, in _extract protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 433, in _get_extraction_protocol with fsspec.open(urlpath, **kwargs) as f: File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 439, in open return open_files( File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 194, in __getitem__ out = super().__getitem__(item) IndexError: list index out of range ``` ### Expected behavior We should have an Exception raised by the datasets library.
### Environment info - `datasets` version: 2.12.0 - Platform: Linux-5.19.0-1026-aws-x86_64-with-glibc2.35 - Python version: 3.9.15 - Huggingface_hub version: 0.15.1 - PyArrow version: 11.0.0 - Pandas version: 2.0.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5926/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5925/comments
https://api.github.com/repos/huggingface/datasets/issues/5925/events
https://github.com/huggingface/datasets/issues/5925
1,741,941,436
I_kwDODunzps5n0-q8
5,925
Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets
{ "login": "mtkinit", "id": 78868366, "node_id": "MDQ6VXNlcjc4ODY4MzY2", "avatar_url": "https://avatars.githubusercontent.com/u/78868366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mtkinit", "html_url": "https://github.com/mtkinit", "followers_url": "https://api.github.com/users/mtkinit/followers", "following_url": "https://api.github.com/users/mtkinit/following{/other_user}", "gists_url": "https://api.github.com/users/mtkinit/gists{/gist_id}", "starred_url": "https://api.github.com/users/mtkinit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mtkinit/subscriptions", "organizations_url": "https://api.github.com/users/mtkinit/orgs", "repos_url": "https://api.github.com/users/mtkinit/repos", "events_url": "https://api.github.com/users/mtkinit/events{/privacy}", "received_events_url": "https://api.github.com/users/mtkinit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-06-05T14:46:04
2023-06-19T17:22:43
2023-06-19T17:22:43
NONE
null
### Describe the bug Hi all, after an update of the `datasets` library, we observed crashes in our code. We relied on `datasets.list_datasets` returning a `list`. Now, after the API of the HfApi.list_datasets was changed and it returns a `list` instead of an `Iterable`, the `datasets.list_datasets` now sometimes returns a `list` and sometimes an `Iterable`. It would be helpful to indicate that in the return type of the `datasets.list_datasets` function. Thanks, Martin ### Steps to reproduce the bug Here, the code crashed after we updated the `datasets` library: ```python # list_datasets no longer returns a list, which leads to an error when one tries to slice it for ds in datasets.list_datasets(with_details=True)[:limit]: ... ``` ### Expected behavior It would be helpful to indicate that in the return type of the `datasets.list_datasets` function. ### Environment info Ubuntu 22.04 datasets 2.12.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5925/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5924/comments
https://api.github.com/repos/huggingface/datasets/issues/5924/events
https://github.com/huggingface/datasets/pull/5924
1,738,889,236
PR_kwDODunzps5SCiFv
5,924
Add parallel module using joblib for Spark
{ "login": "es94129", "id": 12763339, "node_id": "MDQ6VXNlcjEyNzYzMzM5", "avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/es94129", "html_url": "https://github.com/es94129", "followers_url": "https://api.github.com/users/es94129/followers", "following_url": "https://api.github.com/users/es94129/following{/other_user}", "gists_url": "https://api.github.com/users/es94129/gists{/gist_id}", "starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/es94129/subscriptions", "organizations_url": "https://api.github.com/users/es94129/orgs", "repos_url": "https://api.github.com/users/es94129/repos", "events_url": "https://api.github.com/users/es94129/events{/privacy}", "received_events_url": "https://api.github.com/users/es94129/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
7
2023-06-02T22:25:25
2023-06-14T10:25:10
2023-06-14T10:15:46
CONTRIBUTOR
null
Discussion in https://github.com/huggingface/datasets/issues/5798
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5924/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5924/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5924", "html_url": "https://github.com/huggingface/datasets/pull/5924", "diff_url": "https://github.com/huggingface/datasets/pull/5924.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5924.patch", "merged_at": "2023-06-14T10:15:46" }
true
https://api.github.com/repos/huggingface/datasets/issues/5923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5923/comments
https://api.github.com/repos/huggingface/datasets/issues/5923/events
https://github.com/huggingface/datasets/issues/5923
1,737,436,227
I_kwDODunzps5njyxD
5,923
Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility
{ "login": "ehuangc", "id": 71412682, "node_id": "MDQ6VXNlcjcxNDEyNjgy", "avatar_url": "https://avatars.githubusercontent.com/u/71412682?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ehuangc", "html_url": "https://github.com/ehuangc", "followers_url": "https://api.github.com/users/ehuangc/followers", "following_url": "https://api.github.com/users/ehuangc/following{/other_user}", "gists_url": "https://api.github.com/users/ehuangc/gists{/gist_id}", "starred_url": "https://api.github.com/users/ehuangc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ehuangc/subscriptions", "organizations_url": "https://api.github.com/users/ehuangc/orgs", "repos_url": "https://api.github.com/users/ehuangc/repos", "events_url": "https://api.github.com/users/ehuangc/events{/privacy}", "received_events_url": "https://api.github.com/users/ehuangc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
15
2023-06-02T04:16:32
2023-08-31T02:02:24
null
NONE
null
### Describe the bug When trying to import datasets, I get a pyarrow ValueError: Traceback (most recent call last): File "/Users/edward/test/test.py", line 1, in <module> import datasets File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module> from .arrow_dataset import Dataset File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 65, in <module> from .arrow_reader import ArrowReader File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_reader.py", line 28, in <module> import pyarrow.parquet as pq File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/__init__.py", line 20, in <module> from .core import * File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 45, in <module> from pyarrow.fs import (LocalFileSystem, FileSystem, FileType, File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/fs.py", line 49, in <module> from pyarrow._gcsfs import GcsFileSystem # noqa File "pyarrow/_gcsfs.pyx", line 1, in init pyarrow._gcsfs ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject ### Steps to reproduce the bug `import datasets` ### Expected behavior Successful import ### Environment info Conda environment, MacOS python 3.9.12 datasets 2.12.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5923/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5923/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5922/comments
https://api.github.com/repos/huggingface/datasets/issues/5922/events
https://github.com/huggingface/datasets/issues/5922
1,736,898,953
I_kwDODunzps5nhvmJ
5,922
Length of table does not accurately reflect the split
{ "login": "amogkam", "id": 8068268, "node_id": "MDQ6VXNlcjgwNjgyNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/8068268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amogkam", "html_url": "https://github.com/amogkam", "followers_url": "https://api.github.com/users/amogkam/followers", "following_url": "https://api.github.com/users/amogkam/following{/other_user}", "gists_url": "https://api.github.com/users/amogkam/gists{/gist_id}", "starred_url": "https://api.github.com/users/amogkam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amogkam/subscriptions", "organizations_url": "https://api.github.com/users/amogkam/orgs", "repos_url": "https://api.github.com/users/amogkam/repos", "events_url": "https://api.github.com/users/amogkam/events{/privacy}", "received_events_url": "https://api.github.com/users/amogkam/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" } ]
closed
false
null
[]
null
2
2023-06-01T18:56:26
2023-06-02T16:13:31
2023-06-02T16:13:31
NONE
null
### Describe the bug I load a Huggingface Dataset and do `train_test_split`. I'm expecting the underlying table for the dataset to also be split, but it's not. ### Steps to reproduce the bug ![image](https://github.com/huggingface/datasets/assets/8068268/83e5768f-8b4c-422a-945c-832a7585afff) ### Expected behavior The expected behavior is that `len(hf_dataset["train"].data)` matches the length of the train split, rather than the length of the entire unsplit dataset. ### Environment info datasets 2.10.1 python 3.10.11
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5922/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5921/comments
https://api.github.com/repos/huggingface/datasets/issues/5921/events
https://github.com/huggingface/datasets/pull/5921
1,736,563,023
PR_kwDODunzps5R6j-y
5,921
Fix streaming parquet with image feature in schema
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-06-01T15:23:10
2023-06-02T10:02:54
2023-06-02T09:53:11
MEMBER
null
It was not reading the feature type from the parquet arrow schema
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5921/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5921", "html_url": "https://github.com/huggingface/datasets/pull/5921", "diff_url": "https://github.com/huggingface/datasets/pull/5921.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5921.patch", "merged_at": "2023-06-02T09:53:11" }
true
https://api.github.com/repos/huggingface/datasets/issues/5920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5920/comments
https://api.github.com/repos/huggingface/datasets/issues/5920/events
https://github.com/huggingface/datasets/pull/5920
1,736,196,991
PR_kwDODunzps5R5TRB
5,920
Optimize IterableDataset.from_file using ArrowExamplesIterable
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-06-01T12:14:36
2023-06-01T12:42:10
2023-06-01T12:35:14
MEMBER
null
following https://github.com/huggingface/datasets/pull/5893
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5920/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5920", "html_url": "https://github.com/huggingface/datasets/pull/5920", "diff_url": "https://github.com/huggingface/datasets/pull/5920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5920.patch", "merged_at": "2023-06-01T12:35:14" }
true
https://api.github.com/repos/huggingface/datasets/issues/5919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5919/comments
https://api.github.com/repos/huggingface/datasets/issues/5919/events
https://github.com/huggingface/datasets/pull/5919
1,735,519,227
PR_kwDODunzps5R2_EK
5,919
add support for storage_options for load_dataset API
{ "login": "janineguo", "id": 59083384, "node_id": "MDQ6VXNlcjU5MDgzMzg0", "avatar_url": "https://avatars.githubusercontent.com/u/59083384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/janineguo", "html_url": "https://github.com/janineguo", "followers_url": "https://api.github.com/users/janineguo/followers", "following_url": "https://api.github.com/users/janineguo/following{/other_user}", "gists_url": "https://api.github.com/users/janineguo/gists{/gist_id}", "starred_url": "https://api.github.com/users/janineguo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/janineguo/subscriptions", "organizations_url": "https://api.github.com/users/janineguo/orgs", "repos_url": "https://api.github.com/users/janineguo/repos", "events_url": "https://api.github.com/users/janineguo/events{/privacy}", "received_events_url": "https://api.github.com/users/janineguo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
12
2023-06-01T05:52:32
2023-07-18T06:14:32
2023-07-17T17:02:00
CONTRIBUTOR
null
To solve the issue in #5880: 1. add S3 support in the link check step; previously we only checked `http` and `https`. 2. change the `use_auth_token` parameter to `download_config` to support both the `storage_options` and `use_auth_token` parameters when trying to handle (list, open, read, etc.) the remote files. 3. consolidate the duplicated code in the check step to make adding or deleting other sources easier.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5919/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5919/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5919", "html_url": "https://github.com/huggingface/datasets/pull/5919", "diff_url": "https://github.com/huggingface/datasets/pull/5919.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5919.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5918/comments
https://api.github.com/repos/huggingface/datasets/issues/5918/events
https://github.com/huggingface/datasets/issues/5918
1,735,313,549
I_kwDODunzps5nbsiN
5,918
File not found for audio dataset
{ "login": "RobertBaruch", "id": 1783950, "node_id": "MDQ6VXNlcjE3ODM5NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1783950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RobertBaruch", "html_url": "https://github.com/RobertBaruch", "followers_url": "https://api.github.com/users/RobertBaruch/followers", "following_url": "https://api.github.com/users/RobertBaruch/following{/other_user}", "gists_url": "https://api.github.com/users/RobertBaruch/gists{/gist_id}", "starred_url": "https://api.github.com/users/RobertBaruch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RobertBaruch/subscriptions", "organizations_url": "https://api.github.com/users/RobertBaruch/orgs", "repos_url": "https://api.github.com/users/RobertBaruch/repos", "events_url": "https://api.github.com/users/RobertBaruch/events{/privacy}", "received_events_url": "https://api.github.com/users/RobertBaruch/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
2023-06-01T02:15:29
2023-06-11T06:02:25
null
NONE
null
### Describe the bug After loading an audio dataset, and looking at a sample entry, the `path` element, which is supposed to be the path to the audio file, doesn't actually exist. ### Steps to reproduce the bug Run bug.py: ```py import os.path from datasets import load_dataset def run() -> None: cv13 = load_dataset( "mozilla-foundation/common_voice_13_0", "hi", split="train", ) print(cv13[0]) audio_file = cv13[0]["path"] if not os.path.exists(audio_file): raise ValueError(f'File {audio_file} does not exist.') if __name__ == "__main__": run() ``` The result (on my machine): ```json {'client_id': '0f018a99663f33afbb7d38aee281fb1afcfd07f9e7acd00383f604e1e17c38d6ed8adf1bd2ccbf927a52c5adefb8ac4b158ce27a7c2ed9581e71202eb302dfb3', 'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\common_voice_hi_26008353.mp3', 'audio': {'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\common_voice_hi_26008353.mp3', 'array': array([ 6.46234854e-26, -1.35709319e-25, -8.07793567e-26, ..., 1.06425944e-07, 4.46417090e-08, 2.61451660e-09]), 'sampling_rate': 48000}, 'sentence': 'हमने उसका जन्मदिन मनाया।', 'up_votes': 2, 'down_votes': 0, 'age': '', 'gender': '', 'accent': '', 'locale': 'hi', 'segment': '' ', 'variant': ''} ``` ```txt Traceback (most recent call last): File "F:\eo-reco\bug.py", line 18, in <module> run() File "F:\eo-reco\bug.py", line 15, in run raise ValueError(f'File {audio_file} does not exist.') ValueError: File C:\Users\rober\.cache\huggingface\datasets\downloads\extracted\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\common_voice_hi_26008353.mp3 does not exist. ``` ### Expected behavior The `path` element points to the correct file, which happens to be: ``` C:\Users\rober\.cache\huggingface\datasets\downloads\extracted\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\hi_train_0\common_voice_hi_26008353.mp3 ``` That is, there's an extra directory `hi_train_0` that is not in the `path` element. ### Environment info - `datasets` version: 2.12.0 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.11.3 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.1 -
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5918/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5918/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5917/comments
https://api.github.com/repos/huggingface/datasets/issues/5917/events
https://github.com/huggingface/datasets/pull/5917
1,733,661,588
PR_kwDODunzps5RwoRU
5,917
Refactor extensions
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-05-31T08:33:02
2023-05-31T13:34:35
2023-05-31T13:25:57
MEMBER
null
Related to: - #5850
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5917/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5917", "html_url": "https://github.com/huggingface/datasets/pull/5917", "diff_url": "https://github.com/huggingface/datasets/pull/5917.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5917.patch", "merged_at": "2023-05-31T13:25:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/5916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5916/comments
https://api.github.com/repos/huggingface/datasets/issues/5916/events
https://github.com/huggingface/datasets/pull/5916
1,732,456,392
PR_kwDODunzps5RskTb
5,916
Unpin responses
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-05-30T14:59:48
2023-05-30T18:03:10
2023-05-30T17:53:29
CONTRIBUTOR
null
Fix #5906
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5916/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5916", "html_url": "https://github.com/huggingface/datasets/pull/5916", "diff_url": "https://github.com/huggingface/datasets/pull/5916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5916.patch", "merged_at": "2023-05-30T17:53:29" }
true
https://api.github.com/repos/huggingface/datasets/issues/5915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5915/comments
https://api.github.com/repos/huggingface/datasets/issues/5915/events
https://github.com/huggingface/datasets/pull/5915
1,732,389,984
PR_kwDODunzps5RsVzj
5,915
Raise error in `DatasetBuilder.as_dataset` when `file_format` is not `"arrow"`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-05-30T14:27:55
2023-05-31T13:31:21
2023-05-31T13:23:54
CONTRIBUTOR
null
Raise an error in `DatasetBuilder.as_dataset` when `file_format != "arrow"` (and fix the docstring) Fix #5874
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5915/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5915", "html_url": "https://github.com/huggingface/datasets/pull/5915", "diff_url": "https://github.com/huggingface/datasets/pull/5915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5915.patch", "merged_at": "2023-05-31T13:23:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/5914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5914/comments
https://api.github.com/repos/huggingface/datasets/issues/5914/events
https://github.com/huggingface/datasets/issues/5914
1,731,483,996
I_kwDODunzps5nNFlc
5,914
array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size in Datasets
{ "login": "ravenouse", "id": 85110830, "node_id": "MDQ6VXNlcjg1MTEwODMw", "avatar_url": "https://avatars.githubusercontent.com/u/85110830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ravenouse", "html_url": "https://github.com/ravenouse", "followers_url": "https://api.github.com/users/ravenouse/followers", "following_url": "https://api.github.com/users/ravenouse/following{/other_user}", "gists_url": "https://api.github.com/users/ravenouse/gists{/gist_id}", "starred_url": "https://api.github.com/users/ravenouse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ravenouse/subscriptions", "organizations_url": "https://api.github.com/users/ravenouse/orgs", "repos_url": "https://api.github.com/users/ravenouse/repos", "events_url": "https://api.github.com/users/ravenouse/events{/privacy}", "received_events_url": "https://api.github.com/users/ravenouse/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2023-05-30T04:25:00
2023-05-30T04:25:00
null
NONE
null
### Describe the bug When using the `filter` or `map` function to preprocess a dataset, a ValueError is encountered with the error message "array is too big; arr.size * arr.dtype.itemsize is larger than the maximum possible size." Detailed error message: Traceback (most recent call last): File "data_processing.py", line 26, in <module> processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split],writer_batch_size = 50) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2405, in map desc=desc, File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper out = func(self, *args, **kwargs) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2756, in _map_single example = apply_function_on_filtered_inputs(example, i, offset=offset) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated result = f(decorated_item, *args, **kwargs) File "data_processing.py", line 11, in prepare_dataset audio = batch["audio"] File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 123, in __getitem__ value = decode_nested_example(self.features[key], value) if value is not None else None File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/features.py", line 1260, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 156, in decode_example array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 257, in _decode_non_mp3_path_like array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 176, in load y, sr_native = __soundfile_load(path, offset, duration, dtype) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 222, in __soundfile_load y = sf_desc.read(frames=frame_duration, dtype=dtype, always_2d=False).T File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 891, in read out = self._create_empty_array(frames, always_2d, dtype) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 
1323, in _create_empty_array return np.empty(shape, dtype, order='C') ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size. ### Steps to reproduce the bug ```python from datasets import load_dataset, DatasetDict from transformers import WhisperFeatureExtractor from transformers import WhisperTokenizer samromur_children= load_dataset("language-and-voice-lab/samromur_children") feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small") tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="icelandic", task="transcribe") def prepare_dataset(batch): # load and resample audio data from 48 to 16kHz audio = batch["audio"] # compute log-Mel input features from input audio array batch["input_features"] = feature_extractor(audio["array"], sampling_rate=16000).input_features[0] # encode target text to label ids batch["labels"] = tokenizer(batch["normalized_text"]).input_ids return batch cache_dict = {"train": "./cache/audio_train.cache", \ "validation": "./cache/audio_validation.cache", \ "test": "./cache/audio_test.cache"} filter_cache_dict = {"train": "./cache/filter_train.arrow", \ "validation": "./cache/filter_validation.arrow", \ "test": "./cache/filter_test.arrow"} print("before filtering") print(samromur_children) #filter the dataset to only include examples with more than 2 seconds of audio samromur_children = samromur_children.filter(lambda example: example["audio"]["array"].shape[0] > 16000*2, cache_file_names=filter_cache_dict) print("after filtering") print(samromur_children) processed_dataset = DatasetDict() # processed_dataset = samromur_children.map(prepare_dataset, cache_file_names=cache_dict, num_proc=10,) for split in ["train", "validation", "test"]: processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split]) ``` ### Expected behavior The dataset is successfully processed and ready to train the model. ### Environment info Python version: 3.7.13 datasets package version: 2.4.0 librosa package version: 0.10.0.post2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5914/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5913/comments
https://api.github.com/repos/huggingface/datasets/issues/5913/events
https://github.com/huggingface/datasets/issues/5913
1,731,427,484
I_kwDODunzps5nM3yc
5,913
I tried to load a custom dataset using the following statement: dataset = load_dataset('json', data_files=data_files). The dataset contains 50 million text-image pairs, but an error occurred.
{ "login": "cjt222", "id": 17508662, "node_id": "MDQ6VXNlcjE3NTA4NjYy", "avatar_url": "https://avatars.githubusercontent.com/u/17508662?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cjt222", "html_url": "https://github.com/cjt222", "followers_url": "https://api.github.com/users/cjt222/followers", "following_url": "https://api.github.com/users/cjt222/following{/other_user}", "gists_url": "https://api.github.com/users/cjt222/gists{/gist_id}", "starred_url": "https://api.github.com/users/cjt222/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cjt222/subscriptions", "organizations_url": "https://api.github.com/users/cjt222/orgs", "repos_url": "https://api.github.com/users/cjt222/repos", "events_url": "https://api.github.com/users/cjt222/events{/privacy}", "received_events_url": "https://api.github.com/users/cjt222/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-05-30T02:55:26
2023-07-24T12:00:38
2023-07-24T12:00:38
NONE
null
### Describe the bug File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/builder.py", line 1858, in _prepare_split_single Downloading and preparing dataset json/default to /home/kas/diffusers/examples/dreambooth/cache_data/datasets/json/default-acf423d8c6ef99d0/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4... Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data files: 100%|██████████| 1/1 [00:00<00:00, 84.35it/s] Extracting data files: 0%| | 0/1 [00:00<?, ?it/s] for _, table in generator: File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 114, in _generate_tables io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size) File "pyarrow/_json.pyx", line 258, in pyarrow._json.read_json Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 27.72it/s] Generating train split: 0 examples [00:00, ? examples/s] File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 125, in pyarrow.lib.check_status pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2390448764 ### Steps to reproduce the bug 1、data_files = ["1.json", "2.json", "3.json"] 2、dataset = load_dataset('json', data_files=data_files) ### Expected behavior Read the dataset normally. ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-4.15.0-29-generic-x86_64-with-debian-buster-sid - Python version: 3.7.16 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5913/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5912/comments
https://api.github.com/repos/huggingface/datasets/issues/5912/events
https://github.com/huggingface/datasets/issues/5912
1,730,299,852
I_kwDODunzps5nIkfM
5,912
Missing elements in `map` a batched dataset
{ "login": "sachinruk", "id": 1410927, "node_id": "MDQ6VXNlcjE0MTA5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/1410927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sachinruk", "html_url": "https://github.com/sachinruk", "followers_url": "https://api.github.com/users/sachinruk/followers", "following_url": "https://api.github.com/users/sachinruk/following{/other_user}", "gists_url": "https://api.github.com/users/sachinruk/gists{/gist_id}", "starred_url": "https://api.github.com/users/sachinruk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sachinruk/subscriptions", "organizations_url": "https://api.github.com/users/sachinruk/orgs", "repos_url": "https://api.github.com/users/sachinruk/repos", "events_url": "https://api.github.com/users/sachinruk/events{/privacy}", "received_events_url": "https://api.github.com/users/sachinruk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-05-29T08:09:19
2023-07-26T15:48:15
2023-07-26T15:48:15
NONE
null
### Describe the bug As outlined [here](https://discuss.huggingface.co/t/length-error-using-map-with-datasets/40969/3?u=sachin), the following collate function drops 5 out of possible 6 elements in the batch (it is 6 because out of the eight, two are bad links in laion). A reproducible [kaggle kernel ](https://www.kaggle.com/sachin/laion-hf-dataset/edit) can be found here. The weirdest part is when inspecting the sizes of the tensors as shown below, both `tokenized_captions["input_ids"]` and `image_features` show the correct shapes. Simply the output only has one element (with the batch dimension squeezed out). ```python class CollateFn: def get_image(self, url): try: response = requests.get(url) return Image.open(io.BytesIO(response.content)).convert("RGB") except PIL.UnidentifiedImageError: logger.info(f"Reading error: Could not transform f{url}") return None except requests.exceptions.ConnectionError: logger.info(f"Connection error: Could not transform f{url}") return None def __call__(self, batch): images = [self.get_image(url) for url in batch["url"]] captions = [caption for caption, image in zip(batch["caption"], images) if image is not None] images = [image for image in images if image is not None] tokenized_captions = tokenizer( captions, padding="max_length", truncation=True, max_length=tokenizer.model_max_length, return_tensors="pt", ) image_features = torch.stack([torch.Tensor(feature_extractor(image)["pixel_values"][0]) for image in images]) # import pdb; pdb.set_trace() return {"input_ids": tokenized_captions["input_ids"], "images": image_features} collate_fn = CollateFn() laion_ds = datasets.load_dataset("laion/laion400m", split="train", streaming=True) laion_ds_batched = laion_ds.map(collate_fn, batched=True, batch_size=8, remove_columns=next(iter(laion_ds)).keys()) ``` ### Steps to reproduce the bug A reproducible [kaggle kernel ](https://www.kaggle.com/sachin/laion-hf-dataset/edit) can be found here. ### Expected behavior Would expect `next(iter(laion_ds_batched))` to produce two tensors of shape `(batch_size, 77)` and `batch_size, image_shape`. ### Environment info datasets==2.12.0 python==3.10
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5912/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5910/comments
https://api.github.com/repos/huggingface/datasets/issues/5910/events
https://github.com/huggingface/datasets/issues/5910
1,728,909,790
I_kwDODunzps5nDRHe
5,910
Cannot use both set_format and set_transform
{ "login": "ybouane", "id": 14046002, "node_id": "MDQ6VXNlcjE0MDQ2MDAy", "avatar_url": "https://avatars.githubusercontent.com/u/14046002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ybouane", "html_url": "https://github.com/ybouane", "followers_url": "https://api.github.com/users/ybouane/followers", "following_url": "https://api.github.com/users/ybouane/following{/other_user}", "gists_url": "https://api.github.com/users/ybouane/gists{/gist_id}", "starred_url": "https://api.github.com/users/ybouane/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ybouane/subscriptions", "organizations_url": "https://api.github.com/users/ybouane/orgs", "repos_url": "https://api.github.com/users/ybouane/repos", "events_url": "https://api.github.com/users/ybouane/events{/privacy}", "received_events_url": "https://api.github.com/users/ybouane/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
2023-05-27T19:22:23
2023-07-09T21:40:54
2023-06-16T14:41:24
NONE
null
### Describe the bug I need to process some data using the set_transform method but I also need the data to be formatted for pytorch before processing it. I don't see anywhere in the documentation something that says that both methods cannot be used at the same time. ### Steps to reproduce the bug ``` from datasets import load_dataset ds = load_dataset("mnist", split="train") ds.set_format(type="torch") def transform(entry): return entry["image"].double() ds.set_transform(transform) print(ds[0]) ``` ### Expected behavior It should print the pytorch tensor image as a double, but it errors because "entry" in the transform function doesn't receive a pytorch tensor to begin with, it receives a PIL Image -> entry.double() errors because entry isn't a pytorch tensor. ### Environment info Latest versions. ### Note: It would be at least handy to have access to a function that can do the dataset.set_format in the set_transform function. Something like: ``` from datasets import load_dataset, do_format ds = load_dataset("mnist", split="train") def transform(entry): entry = do_format(entry, type="torch") return entry["image"].double() ds.set_transform(transform) print(ds[0]) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5910/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5910/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5909/comments
https://api.github.com/repos/huggingface/datasets/issues/5909/events
https://github.com/huggingface/datasets/pull/5909
1,728,900,068
PR_kwDODunzps5Rgga6
5,909
Use more efficient and idiomatic way to construct list.
{ "login": "ttsugriy", "id": 172294, "node_id": "MDQ6VXNlcjE3MjI5NA==", "avatar_url": "https://avatars.githubusercontent.com/u/172294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ttsugriy", "html_url": "https://github.com/ttsugriy", "followers_url": "https://api.github.com/users/ttsugriy/followers", "following_url": "https://api.github.com/users/ttsugriy/following{/other_user}", "gists_url": "https://api.github.com/users/ttsugriy/gists{/gist_id}", "starred_url": "https://api.github.com/users/ttsugriy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ttsugriy/subscriptions", "organizations_url": "https://api.github.com/users/ttsugriy/orgs", "repos_url": "https://api.github.com/users/ttsugriy/repos", "events_url": "https://api.github.com/users/ttsugriy/events{/privacy}", "received_events_url": "https://api.github.com/users/ttsugriy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-05-27T18:54:47
2023-05-31T15:37:11
2023-05-31T13:28:29
CONTRIBUTOR
null
Using `*` is ~2X faster according to [this benchmark](https://colab.research.google.com/gist/ttsugriy/c964a2604edf70c41911b10335729b6a/for-vs-mult.ipynb) with just 4 patterns. This doesn't matter much since the tiny difference is not going to be noticeable, but why not?
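For context, this is the general pattern being compared (an illustrative sketch, not the actual diff from the PR; the variable names are hypothetical):

```python
# Two equivalent ways to build a list of repeated values.
value = "train"
num_patterns = 4

# comprehension/loop style
splits_loop = [value for _ in range(num_patterns)]

# multiplication style, roughly 2x faster for small lists like this
splits_mult = [value] * num_patterns

assert splits_loop == splits_mult
```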
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5909/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5909", "html_url": "https://github.com/huggingface/datasets/pull/5909", "diff_url": "https://github.com/huggingface/datasets/pull/5909.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5909.patch", "merged_at": "2023-05-31T13:28:28" }
true
https://api.github.com/repos/huggingface/datasets/issues/5908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5908/comments
https://api.github.com/repos/huggingface/datasets/issues/5908/events
https://github.com/huggingface/datasets/issues/5908
1,728,653,935
I_kwDODunzps5nCSpv
5,908
Unbearably slow sorting on big mapped datasets
{ "login": "maximxlss", "id": 29152154, "node_id": "MDQ6VXNlcjI5MTUyMTU0", "avatar_url": "https://avatars.githubusercontent.com/u/29152154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maximxlss", "html_url": "https://github.com/maximxlss", "followers_url": "https://api.github.com/users/maximxlss/followers", "following_url": "https://api.github.com/users/maximxlss/following{/other_user}", "gists_url": "https://api.github.com/users/maximxlss/gists{/gist_id}", "starred_url": "https://api.github.com/users/maximxlss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maximxlss/subscriptions", "organizations_url": "https://api.github.com/users/maximxlss/orgs", "repos_url": "https://api.github.com/users/maximxlss/repos", "events_url": "https://api.github.com/users/maximxlss/events{/privacy}", "received_events_url": "https://api.github.com/users/maximxlss/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
6
2023-05-27T11:08:32
2023-06-13T17:45:10
null
CONTRIBUTOR
null
### Describe the bug

For me, with ~40k lines, sorting took 3.5 seconds on a flattened dataset (including the flatten operation) and 22.7 seconds on a mapped dataset (right after sharding), which is about a 5x slowdown. Moreover, it seems to slow down exponentially with bigger datasets (I wasn't able to sort 700k lines at all, while with flattening it takes about a minute).

### Steps to reproduce the bug

```Python
from datasets import load_dataset
import time

dataset = load_dataset("xnli", "en", split="train")
dataset = dataset.shard(10, 0)
print(len(dataset))

t = time.time()
# dataset = dataset.flatten_indices()  # uncomment this line and it's fast
dataset = dataset.sort("label", reverse=True, load_from_cache_file=False)
print(f"finished in {time.time() - t:.4f} seconds")
```

### Expected behavior

Expect sorting to take the same or less time than flattening and then sorting.

### Environment info

- `datasets` version: 2.12.1.dev0 (same with 2.12.0 too)
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5908/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5907/comments
https://api.github.com/repos/huggingface/datasets/issues/5907/events
https://github.com/huggingface/datasets/pull/5907
1,728,648,560
PR_kwDODunzps5RfqUU
5,907
Add `flatten_indices` to `DatasetDict`
{ "login": "maximxlss", "id": 29152154, "node_id": "MDQ6VXNlcjI5MTUyMTU0", "avatar_url": "https://avatars.githubusercontent.com/u/29152154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maximxlss", "html_url": "https://github.com/maximxlss", "followers_url": "https://api.github.com/users/maximxlss/followers", "following_url": "https://api.github.com/users/maximxlss/following{/other_user}", "gists_url": "https://api.github.com/users/maximxlss/gists{/gist_id}", "starred_url": "https://api.github.com/users/maximxlss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maximxlss/subscriptions", "organizations_url": "https://api.github.com/users/maximxlss/orgs", "repos_url": "https://api.github.com/users/maximxlss/repos", "events_url": "https://api.github.com/users/maximxlss/events{/privacy}", "received_events_url": "https://api.github.com/users/maximxlss/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-05-27T10:55:44
2023-06-01T11:46:35
2023-06-01T11:39:36
CONTRIBUTOR
null
Add `flatten_indices` to `DatasetDict` for convenience
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5907/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5907", "html_url": "https://github.com/huggingface/datasets/pull/5907", "diff_url": "https://github.com/huggingface/datasets/pull/5907.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5907.patch", "merged_at": "2023-06-01T11:39:35" }
true
https://api.github.com/repos/huggingface/datasets/issues/5906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5906/comments
https://api.github.com/repos/huggingface/datasets/issues/5906/events
https://github.com/huggingface/datasets/issues/5906
1,728,171,113
I_kwDODunzps5nAcxp
5,906
Could you unpin responses version?
{ "login": "kenimou", "id": 47789026, "node_id": "MDQ6VXNlcjQ3Nzg5MDI2", "avatar_url": "https://avatars.githubusercontent.com/u/47789026?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kenimou", "html_url": "https://github.com/kenimou", "followers_url": "https://api.github.com/users/kenimou/followers", "following_url": "https://api.github.com/users/kenimou/following{/other_user}", "gists_url": "https://api.github.com/users/kenimou/gists{/gist_id}", "starred_url": "https://api.github.com/users/kenimou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kenimou/subscriptions", "organizations_url": "https://api.github.com/users/kenimou/orgs", "repos_url": "https://api.github.com/users/kenimou/repos", "events_url": "https://api.github.com/users/kenimou/events{/privacy}", "received_events_url": "https://api.github.com/users/kenimou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-05-26T20:02:14
2023-05-30T17:53:31
2023-05-30T17:53:31
NONE
null
### Describe the bug

Could you unpin [this](https://github.com/huggingface/datasets/blob/main/setup.py#L139) or move it to the test requirements? `responses` is a testing library and we use it for our tests as well, so we do not want to be forced onto a very outdated version.

### Steps to reproduce the bug

Could not install this library due to a dependency conflict.

### Expected behavior

`datasets` can be installed.

### Environment info

Linux 64-bit
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5906/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5905/comments
https://api.github.com/repos/huggingface/datasets/issues/5905/events
https://github.com/huggingface/datasets/issues/5905
1,727,541,392
I_kwDODunzps5m-DCQ
5,905
Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently
{ "login": "Hubert-Bonisseur", "id": 48770768, "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hubert-Bonisseur", "html_url": "https://github.com/Hubert-Bonisseur", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
2023-05-26T12:33:02
2023-06-15T13:34:18
null
CONTRIBUTOR
null
### Feature request

I would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset.

### Motivation

I am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on a disk, and also quite computationally intensive audio processing to do. As a result I want to load data from my remote when it is needed and perform all processing on the fly.

I am currently using the iterable dataset feature of _datasets_. It does everything I need, with one exception. My issue is that when resuming training at a step n, we have to download all the data and perform the processing of steps < n, just to get the iterable to the right step. In my case it takes almost as long as training for the same number of steps, which makes resuming training from a checkpoint useless in practice.

I understand that the nature of iterators makes it probably nearly impossible to quickly resume training. I thought about a possible solution nonetheless: I could in fact index my large dataset and make it a mapped dataset. Then I could use `set_transform` to perform the processing on the fly. Finally, if I'm not mistaken, the _accelerate_ package allows [skipping steps efficiently](https://github.com/huggingface/accelerate/blob/a73898027a211c3f6dc4460351b0ec246aa824aa/src/accelerate/data_loader.py#L827) for a mapped dataset.

Is it possible to lazily load samples of a mapped dataset? I'm used to [dataset scripts](https://huggingface.co/docs/datasets/dataset_script); maybe something can be done there. If not, I could do it using a plain _PyTorch_ dataset. Then I would need to convert it to a _datasets_ dataset to get all the features of _datasets_. Is that possible?

### Your contribution

I could provide a PR to allow lazy loading of mapped datasets, or the conversion of a mapped _PyTorch_ dataset into a _datasets_ dataset, if you think it is a useful new feature.
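A minimal sketch of the idea described above: keep only lightweight references in a mapped `Dataset` and do the heavy work in `set_transform`, so samples are processed lazily when indexed. The helper functions and URLs below are hypothetical placeholders, not real APIs.

```python
# Sketch only: download_audio / extract_features stand in for the real
# remote fetch and audio preprocessing this issue describes.
from datasets import Dataset

def download_audio(url):
    # placeholder: would fetch the remote audio file
    return b"\x00" * 16000

def extract_features(raw_bytes):
    # placeholder: would run the actual feature extraction
    return [float(len(raw_bytes))]

ds = Dataset.from_dict({"audio_url": [f"https://example.com/clip_{i}.wav" for i in range(1000)]})

def lazy_process(batch):
    batch["features"] = [extract_features(download_audio(url)) for url in batch["audio_url"]]
    return batch

ds.set_transform(lazy_process)
sample = ds[42]  # only this example is fetched and processed
```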
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5905/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5904/comments
https://api.github.com/repos/huggingface/datasets/issues/5904/events
https://github.com/huggingface/datasets/pull/5904
1,727,415,626
PR_kwDODunzps5Rbfks
5,904
Validate name parameter in make_file_instructions
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-05-26T11:12:46
2023-05-31T07:43:32
2023-05-31T07:34:57
MEMBER
null
Validate `name` parameter in `make_file_instructions`. This way users get more informative error messages, instead of:

```stacktrace
.../huggingface/datasets/src/datasets/arrow_reader.py in make_file_instructions(name, split_infos, instruction, filetype_suffix, prefix_path)
    110     name2len = {info.name: info.num_examples for info in split_infos}
    111     name2shard_lengths = {info.name: info.shard_lengths for info in split_infos}
--> 112     name2filenames = {
    113         info.name: filenames_for_dataset_split(
    114             path=prefix_path,

.../huggingface/datasets/src/datasets/arrow_reader.py in <dictcomp>(.0)
    111     name2shard_lengths = {info.name: info.shard_lengths for info in split_infos}
    112     name2filenames = {
--> 113         info.name: filenames_for_dataset_split(
    114             path=prefix_path,
    115             dataset_name=name,

.../huggingface/datasets/src/datasets/naming.py in filenames_for_dataset_split(path, dataset_name, split, filetype_suffix, shard_lengths)
     68
     69 def filenames_for_dataset_split(path, dataset_name, split, filetype_suffix=None, shard_lengths=None):
---> 70     prefix = filename_prefix_for_split(dataset_name, split)
     71     prefix = os.path.join(path, prefix)
     72

.../huggingface/datasets/src/datasets/naming.py in filename_prefix_for_split(name, split)
     52
     53 def filename_prefix_for_split(name, split):
---> 54     if os.path.basename(name) != name:
     55         raise ValueError(f"Should be a dataset name, not a path: {name}")
     56     if not re.match(_split_re, split):

.../lib/python3.9/posixpath.py in basename(p)
    140 def basename(p):
    141     """Returns the final component of a pathname"""
--> 142     p = os.fspath(p)
    143     sep = _get_sep(p)
    144     i = p.rfind(sep) + 1

TypeError: expected str, bytes or os.PathLike object, not NoneType
```

Related to #5895.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5904/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5904", "html_url": "https://github.com/huggingface/datasets/pull/5904", "diff_url": "https://github.com/huggingface/datasets/pull/5904.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5904.patch", "merged_at": "2023-05-31T07:34:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/5903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5903/comments
https://api.github.com/repos/huggingface/datasets/issues/5903/events
https://github.com/huggingface/datasets/pull/5903
1,727,372,549
PR_kwDODunzps5RbV82
5,903
Relax `ci.yml` trigger for `pull_request` based on modified paths
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
2023-05-26T10:46:52
2023-05-26T10:51:37
null
CONTRIBUTOR
null
## What's in this PR?

As of a previous PR at #5902, I've seen that the CI was automatically triggered by changes to any file, in that case when modifying a Jupyter Notebook (.ipynb), which IMO could be skipped, as the modification of the Jupyter Notebook has no effect/impact on the `ci.yml` outcome. So this PR restricts the paths that trigger `ci.yml`, to avoid wasting resources when not needed.

## What's pending in this PR?

I would like to confirm whether this should affect both `push` and `pull_request`, since modifications to just those files won't change the `ci.yml` outcome, so maybe it's worth skipping it in the `push` trigger too.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5903/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5903", "html_url": "https://github.com/huggingface/datasets/pull/5903", "diff_url": "https://github.com/huggingface/datasets/pull/5903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5903.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5902/comments
https://api.github.com/repos/huggingface/datasets/issues/5902/events
https://github.com/huggingface/datasets/pull/5902
1,727,342,194
PR_kwDODunzps5RbPS9
5,902
Fix `Overview.ipynb` & detach Jupyter Notebooks from `datasets` repository
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
13
2023-05-26T10:25:01
2023-07-25T13:50:06
2023-07-25T13:38:33
CONTRIBUTOR
null
## What's in this PR?

This PR solves #5887, since there was a mismatch between the tokenizer and the model used: the tokenizer was `bert-base-cased` while the model was `distilbert-base-cased`, for both the PyTorch and TensorFlow alternatives. Since DistilBERT doesn't use/need the `token_type_ids`, the `**batch` unpacking was failing, as the batch contained `input_ids`, `attention_mask`, `token_type_ids`, `start_positions` and `end_positions`, and `token_type_ids` was not expected.

Besides that, at the end `seqeval` was being used to evaluate the model predictions while just `evaluate` was being installed, so I've also included the `seqeval` installation.

Finally, I've re-run everything in Google Colab, and every cell was successfully executed!

## What was done on top of the original PR?

Based on the comments from @mariosasko and @stevhliu, I've updated the contents of this PR to also review `quickstart.mdx` and update what was needed. Besides that, we may eventually move the `Overview.ipynb` notebook to `huggingface/notebooks`, following @stevhliu's suggestions.
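To illustrate the mismatch described above, here is a small hedged sketch (the checkpoint names follow the PR text; the exact notebook code may differ):

```python
# The BERT tokenizer emits token_type_ids, which DistilBERT's forward() does not accept,
# so passing **batch built with the BERT tokenizer to a DistilBERT model fails.
from transformers import AutoTokenizer

bert_tok = AutoTokenizer.from_pretrained("bert-base-cased")
distil_tok = AutoTokenizer.from_pretrained("distilbert-base-cased")

print("token_type_ids" in bert_tok("hello"))    # True
print("token_type_ids" in distil_tok("hello"))  # False
```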
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5902/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5902/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5902", "html_url": "https://github.com/huggingface/datasets/pull/5902", "diff_url": "https://github.com/huggingface/datasets/pull/5902.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5902.patch", "merged_at": "2023-07-25T13:38:33" }
true
https://api.github.com/repos/huggingface/datasets/issues/5901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5901/comments
https://api.github.com/repos/huggingface/datasets/issues/5901/events
https://github.com/huggingface/datasets/pull/5901
1,727,179,016
PR_kwDODunzps5Rarux
5,901
Make prepare_split more robust if errors in metadata dataset_info splits
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-05-26T08:48:22
2023-06-02T06:06:38
2023-06-01T13:39:40
MEMBER
null
This PR uses `split_generator.split_info` as the default value for `split_info` if any exception is raised while trying to get `split_generator.name` from `self.info.splits` (this may happen if there is any error in the metadata dataset_info splits).

Please note that `split_info` is only used by the logger.

Fixes #5895 when passing `verification_mode="no_checks"`:

```python
ds = load_dataset(
    "ArmelR/stack-exchange-instruction",
    data_dir="data/finetune",
    split="train",
    verification_mode="no_checks",
    revision="c609f1caade5cfbf3b9fe9cfa17d7cb000b457bd",
)
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5901/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5901", "html_url": "https://github.com/huggingface/datasets/pull/5901", "diff_url": "https://github.com/huggingface/datasets/pull/5901.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5901.patch", "merged_at": "2023-06-01T13:39:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/5900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5900/comments
https://api.github.com/repos/huggingface/datasets/issues/5900/events
https://github.com/huggingface/datasets/pull/5900
1,727,129,617
PR_kwDODunzps5RahTR
5,900
Fix minor typo in docs loading.mdx
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-05-26T08:10:54
2023-05-26T09:34:15
2023-05-26T09:25:12
MEMBER
null
Minor fix.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5900/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5900", "html_url": "https://github.com/huggingface/datasets/pull/5900", "diff_url": "https://github.com/huggingface/datasets/pull/5900.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5900.patch", "merged_at": "2023-05-26T09:25:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/5899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5899/comments
https://api.github.com/repos/huggingface/datasets/issues/5899/events
https://github.com/huggingface/datasets/pull/5899
1,726,279,011
PR_kwDODunzps5RXods
5,899
canonicalize data dir in config ID hash
{ "login": "kylrth", "id": 5044802, "node_id": "MDQ6VXNlcjUwNDQ4MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kylrth", "html_url": "https://github.com/kylrth", "followers_url": "https://api.github.com/users/kylrth/followers", "following_url": "https://api.github.com/users/kylrth/following{/other_user}", "gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}", "starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kylrth/subscriptions", "organizations_url": "https://api.github.com/users/kylrth/orgs", "repos_url": "https://api.github.com/users/kylrth/repos", "events_url": "https://api.github.com/users/kylrth/events{/privacy}", "received_events_url": "https://api.github.com/users/kylrth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-05-25T18:17:10
2023-06-02T16:02:15
2023-06-02T15:52:04
CONTRIBUTOR
null
Fixes #5871. The second commit is optional but improves readability.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5899/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5899", "html_url": "https://github.com/huggingface/datasets/pull/5899", "diff_url": "https://github.com/huggingface/datasets/pull/5899.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5899.patch", "merged_at": "2023-06-02T15:52:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/5898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5898/comments
https://api.github.com/repos/huggingface/datasets/issues/5898/events
https://github.com/huggingface/datasets/issues/5898
1,726,190,481
I_kwDODunzps5m45OR
5,898
Loading the flores dataset for a specific language
{ "login": "106AbdulBasit", "id": 36159918, "node_id": "MDQ6VXNlcjM2MTU5OTE4", "avatar_url": "https://avatars.githubusercontent.com/u/36159918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/106AbdulBasit", "html_url": "https://github.com/106AbdulBasit", "followers_url": "https://api.github.com/users/106AbdulBasit/followers", "following_url": "https://api.github.com/users/106AbdulBasit/following{/other_user}", "gists_url": "https://api.github.com/users/106AbdulBasit/gists{/gist_id}", "starred_url": "https://api.github.com/users/106AbdulBasit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/106AbdulBasit/subscriptions", "organizations_url": "https://api.github.com/users/106AbdulBasit/orgs", "repos_url": "https://api.github.com/users/106AbdulBasit/repos", "events_url": "https://api.github.com/users/106AbdulBasit/events{/privacy}", "received_events_url": "https://api.github.com/users/106AbdulBasit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-05-25T17:08:55
2023-05-25T17:21:38
2023-05-25T17:21:37
NONE
null
### Describe the bug

I am trying to load the Flores dataset. The code which is given is:

```
from datasets import load_dataset
dataset = load_dataset("facebook/flores")
```

This gives the config-name error "ValueError: Config name is missing".

Now if I add some config, it gives me the error:

"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''."

How can I load the data for a specific language? I couldn't find any tutorial; can anyone help me out?

### Steps to reproduce the bug

Step one, load the dataset:

```
from datasets import load_dataset
dataset = load_dataset("facebook/flores")
```

It gives the error about the missing config. Once a config is given, it gives the error:

"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''."

### Expected behavior

The dataset should be loaded, but I am receiving an error.

### Environment info

datasets, Python
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5898/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5897/comments
https://api.github.com/repos/huggingface/datasets/issues/5897/events
https://github.com/huggingface/datasets/pull/5897
1,726,135,494
PR_kwDODunzps5RXJaY
5,897
Fix `FixedSizeListArray` casting
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-05-25T16:26:33
2023-05-26T12:22:04
2023-05-26T11:57:16
CONTRIBUTOR
null
Fix cast on sliced `FixedSizeListArray`s. Fix #5866
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5897/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5897", "html_url": "https://github.com/huggingface/datasets/pull/5897", "diff_url": "https://github.com/huggingface/datasets/pull/5897.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5897.patch", "merged_at": "2023-05-26T11:57:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/5896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5896/comments
https://api.github.com/repos/huggingface/datasets/issues/5896/events
https://github.com/huggingface/datasets/issues/5896
1,726,022,500
I_kwDODunzps5m4QNk
5,896
HuggingFace does not cache downloaded files aggressively/early enough
{ "login": "geajack", "id": 2124157, "node_id": "MDQ6VXNlcjIxMjQxNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/geajack", "html_url": "https://github.com/geajack", "followers_url": "https://api.github.com/users/geajack/followers", "following_url": "https://api.github.com/users/geajack/following{/other_user}", "gists_url": "https://api.github.com/users/geajack/gists{/gist_id}", "starred_url": "https://api.github.com/users/geajack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/geajack/subscriptions", "organizations_url": "https://api.github.com/users/geajack/orgs", "repos_url": "https://api.github.com/users/geajack/repos", "events_url": "https://api.github.com/users/geajack/events{/privacy}", "received_events_url": "https://api.github.com/users/geajack/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
2023-05-25T15:14:36
2023-05-25T15:14:36
null
NONE
null
### Describe the bug

I wrote the following script:

```
import datasets
dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
```

I ran it and spent 90 minutes downloading a 20GB file. Then I saw:

```
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20.3G/20.3G [1:30:29<00:00, 3.73MB/s]
Traceback (most recent call last):
  File "/home/jack/Code/Projects/Transformers/Codebase/main.py", line 5, in <module>
    dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
  File "/home/jack/.local/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/jack/.local/lib/python3.10/site-packages/datasets/builder.py", line 883, in download_and_prepare
    self._save_info()
  File "/home/jack/.local/lib/python3.10/site-packages/datasets/builder.py", line 2037, in _save_info
    import apache_beam as beam
ModuleNotFoundError: No module named 'apache_beam'
```

And the 20GB of data was seemingly instantly gone forever, because when I ran the script again, it had to do the download again.

### Steps to reproduce the bug

See above.

### Expected behavior

See above.

### Environment info

datasets 2.10.1
Python 3.10
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5896/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5895/comments
https://api.github.com/repos/huggingface/datasets/issues/5895/events
https://github.com/huggingface/datasets/issues/5895
1,725,467,252
I_kwDODunzps5m2Ip0
5,895
The dir name and split strings are confused when loading ArmelR/stack-exchange-instruction dataset
{ "login": "DongHande", "id": 45357817, "node_id": "MDQ6VXNlcjQ1MzU3ODE3", "avatar_url": "https://avatars.githubusercontent.com/u/45357817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DongHande", "html_url": "https://github.com/DongHande", "followers_url": "https://api.github.com/users/DongHande/followers", "following_url": "https://api.github.com/users/DongHande/following{/other_user}", "gists_url": "https://api.github.com/users/DongHande/gists{/gist_id}", "starred_url": "https://api.github.com/users/DongHande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DongHande/subscriptions", "organizations_url": "https://api.github.com/users/DongHande/orgs", "repos_url": "https://api.github.com/users/DongHande/repos", "events_url": "https://api.github.com/users/DongHande/events{/privacy}", "received_events_url": "https://api.github.com/users/DongHande/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-05-25T09:39:06
2023-05-29T02:32:12
2023-05-29T02:32:12
NONE
null
### Describe the bug

When I load the ArmelR/stack-exchange-instruction dataset, I encounter a bug that may be caused by confusing the dir name string and the split string of the dataset.

When I use the script `datasets.load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)`, it fails. But it succeeds when I add the `streaming=True` parameter.

The website of the dataset is https://huggingface.co/datasets/ArmelR/stack-exchange-instruction/ .

The traceback logs are as below:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/load.py", line 1797, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 890, in download_and_prepare
    self._download_and_prepare(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 985, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 1706, in _prepare_split
    split_info = self.info.splits[split_generator.name]
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/splits.py", line 530, in __getitem__
    instructions = make_file_instructions(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 112, in make_file_instructions
    name2filenames = {
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 113, in <dictcomp>
    info.name: filenames_for_dataset_split(
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 70, in filenames_for_dataset_split
    prefix = filename_prefix_for_split(dataset_name, split)
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 54, in filename_prefix_for_split
    if os.path.basename(name) != name:
  File "/home/xxx/miniconda3/envs/code/lib/python3.9/posixpath.py", line 142, in basename
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```

### Steps to reproduce the bug

1. Import the datasets library function: `from datasets import load_dataset`
2. Load the dataset: `ds = load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)`

### Expected behavior

The dataset can be loaded successfully without the streaming setting.

### Environment info

Linux, python=3.9, datasets=2.12.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5895/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5894/comments
https://api.github.com/repos/huggingface/datasets/issues/5894/events
https://github.com/huggingface/datasets/pull/5894
1,724,774,910
PR_kwDODunzps5RSjot
5,894
Force overwrite existing filesystem protocol
{ "login": "baskrahmer", "id": 24520725, "node_id": "MDQ6VXNlcjI0NTIwNzI1", "avatar_url": "https://avatars.githubusercontent.com/u/24520725?v=4", "gravatar_id": "", "url": "https://api.github.com/users/baskrahmer", "html_url": "https://github.com/baskrahmer", "followers_url": "https://api.github.com/users/baskrahmer/followers", "following_url": "https://api.github.com/users/baskrahmer/following{/other_user}", "gists_url": "https://api.github.com/users/baskrahmer/gists{/gist_id}", "starred_url": "https://api.github.com/users/baskrahmer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/baskrahmer/subscriptions", "organizations_url": "https://api.github.com/users/baskrahmer/orgs", "repos_url": "https://api.github.com/users/baskrahmer/repos", "events_url": "https://api.github.com/users/baskrahmer/events{/privacy}", "received_events_url": "https://api.github.com/users/baskrahmer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-05-24T21:41:53
2023-05-25T06:52:08
2023-05-25T06:42:33
CONTRIBUTOR
null
Fix #5876
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5894/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5894", "html_url": "https://github.com/huggingface/datasets/pull/5894", "diff_url": "https://github.com/huggingface/datasets/pull/5894.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5894.patch", "merged_at": "2023-05-25T06:42:33" }
true