Dataset schema (one row per column, with the observed value range or number of classes):

| Column | Type | Range / classes |
|---|---|---|
| url | stringlengths | 61–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75–75 |
| comments_url | stringlengths | 70–70 |
| events_url | stringlengths | 68–68 |
| html_url | stringlengths | 49–51 |
| id | int64 | 1.16B–2.27B |
| node_id | stringlengths | 18–19 |
| number | int64 | 3.86k–6.85k |
| title | stringlengths | 1–290 |
| user | dict | |
| labels | listlengths | 0–4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0–3 |
| milestone | dict | |
| comments | sequencelengths | 0–30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 2–33.9k |
| reactions | dict | |
| timeline_url | stringlengths | 70–70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/6845
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6845/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6845/comments
https://api.github.com/repos/huggingface/datasets/issues/6845/events
https://github.com/huggingface/datasets/issues/6845
2,265,876,551
I_kwDODunzps6HDohH
6,845
load_dataset doesn't support list column
{ "login": "arthasking123", "id": 16257131, "node_id": "MDQ6VXNlcjE2MjU3MTMx", "avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arthasking123", "html_url": "https://github.com/arthasking123", "followers_url": "https://api.github.com/users/arthasking123/followers", "following_url": "https://api.github.com/users/arthasking123/following{/other_user}", "gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}", "starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions", "organizations_url": "https://api.github.com/users/arthasking123/orgs", "repos_url": "https://api.github.com/users/arthasking123/repos", "events_url": "https://api.github.com/users/arthasking123/events{/privacy}", "received_events_url": "https://api.github.com/users/arthasking123/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-04-26T14:11:44"
"2024-04-26T14:11:44"
null
NONE
null
### Describe the bug

dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")

got exception:

Generating train split: 1834 examples [00:00, 5227.98 examples/s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2295, in table_cast
    return cast_table_to_schema(table, schema)
  File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in cast_table_to_schema
    arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
  File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in <listcomp>
    arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
  File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in wrapper
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in <listcomp>
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2018, in cast_array_to_feature
    casted_array_values = _c(array.values, feature[0])
  File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1804, in wrapper
    return func(array, *args, **kwargs)
  File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2115, in cast_array_to_feature
    raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<m.name: string, x.name: string, p.name: string, n.name: string, h.name: string, name: string, c: int64, collect(r.name): list<item: string>, q.name: string, rel.name: string, count(p): int64, 1: int64, p.location: string, max(n.name): null, mn.name: string, p.time: int64, min(q.name): string>
to
{'q.name': Value(dtype='string', id=None), 'mn.name': Value(dtype='string', id=None), 'x.name': Value(dtype='string', id=None), 'p.name': Value(dtype='string', id=None), 'n.name': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None), 'm.name': Value(dtype='string', id=None), 'h.name': Value(dtype='string', id=None), 'count(p)': Value(dtype='int64', id=None), 'rel.name': Value(dtype='string', id=None), 'c': Value(dtype='int64', id=None), 'collect(r.name)': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '1': Value(dtype='int64', id=None), 'p.location': Value(dtype='string', id=None), 'substring(h.name,0,5)': Value(dtype='string', id=None), 'p.time': Value(dtype='int64', id=None)}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ubuntu/llm/train-2.py", line 150, in <module>
    dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
  File "/usr/local/lib/python3.11/dist-packages/datasets/load.py", line 2609, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2038, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

### Steps to reproduce the bug

dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")

### Expected behavior

no exception

### Environment info

python 3.11
datasets 2.19.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6845/timeline
null
null
null
null
false
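Editor's note on issue #6845 above: the cast failure comes from a list-of-struct column whose keys vary between records. As a hedged, generic illustration (not a verified fix for this dataset), nested list columns can be declared explicitly through the `features` argument of `load_dataset`, which controls how JSON is cast; the column names and file name below are hypothetical placeholders.

```python
# Minimal sketch: declaring a list column explicitly with `features=`.
# "question", "answers", and "data.json" are hypothetical placeholders.
from datasets import Features, Sequence, Value, load_dataset

features = Features(
    {
        "question": Value("string"),
        "answers": Sequence(Value("string")),  # a list-of-strings column
    }
)

dataset = load_dataset("json", data_files="data.json", features=features)
```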
https://api.github.com/repos/huggingface/datasets/issues/6844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6844/comments
https://api.github.com/repos/huggingface/datasets/issues/6844/events
https://github.com/huggingface/datasets/pull/6844
2,265,870,546
PR_kwDODunzps5t2PRA
6,844
Retry on HF Hub error when streaming
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6844). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@Wauplin This PR is indeed not needed as explained in https://github.com/huggingface/datasets/issues/6843#issuecomment-2079630389. \r\n\r\nSo, I'm closing it." ]
"2024-04-26T14:09:04"
"2024-04-26T15:37:42"
"2024-04-26T15:37:42"
CONTRIBUTOR
null
Retry on `huggingface_hub`'s `HfHubHTTPError` in streaming mode. Fix #6843
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6844/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6844", "html_url": "https://github.com/huggingface/datasets/pull/6844", "diff_url": "https://github.com/huggingface/datasets/pull/6844.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6844.patch", "merged_at": null }
true
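Editor's note on PR #6844 above: as a rough sketch of the kind of change the PR describes (also retrying on `huggingface_hub`'s `HfHubHTTPError`), not the actual diff, the retry helper's except clause would include the extra exception type; `max_retries` and `interval` stand in for the values `datasets` reads from its config.

```python
# Hedged sketch only; the real helper lives in datasets/utils/file_utils.py.
import time
from aiohttp import ClientError
from huggingface_hub.utils import HfHubHTTPError

def read_with_retries(read, *args, max_retries=5, interval=5, **kwargs):
    disconnect_err = None
    for retry in range(1, max_retries + 1):
        try:
            return read(*args, **kwargs)
        except (ClientError, TimeoutError, HfHubHTTPError) as err:  # HfHubHTTPError added
            disconnect_err = err
            time.sleep(interval)
    raise ConnectionError("Server Disconnected") from disconnect_err
```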
https://api.github.com/repos/huggingface/datasets/issues/6843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6843/comments
https://api.github.com/repos/huggingface/datasets/issues/6843/events
https://github.com/huggingface/datasets/issues/6843
2,265,432,897
I_kwDODunzps6HB8NB
6,843
IterableDataset raises exception instead of retrying
{ "login": "bauwenst", "id": 145220868, "node_id": "U_kgDOCKflBA", "avatar_url": "https://avatars.githubusercontent.com/u/145220868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bauwenst", "html_url": "https://github.com/bauwenst", "followers_url": "https://api.github.com/users/bauwenst/followers", "following_url": "https://api.github.com/users/bauwenst/following{/other_user}", "gists_url": "https://api.github.com/users/bauwenst/gists{/gist_id}", "starred_url": "https://api.github.com/users/bauwenst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bauwenst/subscriptions", "organizations_url": "https://api.github.com/users/bauwenst/orgs", "repos_url": "https://api.github.com/users/bauwenst/repos", "events_url": "https://api.github.com/users/bauwenst/events{/privacy}", "received_events_url": "https://api.github.com/users/bauwenst/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Thanks for reporting! I've opened a PR with a fix.", "Thanks, @mariosasko! Related question (although I guess this is a feature request): could we have some kind of exponential back-off for these retries? Here's my reasoning:\r\n- If a one-time accidental error happens, you should retry immediately and will succeed immediately.\r\n- If the Hub has a small outage on the order of minutes, you don't want to retry on the order of hours. \r\n- If the Hub has a prologned outage of several hours, we don't want to keep retrying on the order of minutes.\r\n\r\nThere actually already exists an implementation for (clipped) exponential backoff in the HuggingFace suite ([here](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/utils/_http.py#L306)), but I don't think it is used here.\r\n\r\nThe requirements are basically that you have an initial minimum waiting time and a maximum waiting time, and with each retry, the waiting time is doubled. We don't want to overload your servers with needless retries, especially when they're down :sweat_smile:", "Oh, I've just remembered that we added retries to the `HfFileSystem` in `huggingface_hub` 0.21.0 (see [this](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/hf_file_system.py#L703)), so I'll close the linked PR as we don't want to retry the retries :).\r\n\r\nI agree with the exponential backoff suggestion, so I'll open another PR.", "@mariosasko The call you linked indeed points to the implementation I linked in my previous comment, yes, but it has no configurability. Arguably, you want to have this hidden backoff under the hood that catches small network disturbances on the time scale of seconds -- perhaps even with hardcoded limits as is the case currently -- but you also still want to have a separate backoff on top of that with the configurability as suggested by @lhoestq in [the comment I linked](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229).\r\n\r\nMy particular use-case is that I'm streaming a dataset while training on a university cluster with a very long scheduling queue. This means that when the backoff runs out of retries (which happens in under 30 seconds with the call you linked), I lose my spot on the cluster and have to queue for a whole day or more. Ideally, I should be able to specify that I want to retry for 2 to 3 hours but with more and more time between requests, so that I can smooth over hours-long outages without a setback of days." ]
"2024-04-26T10:00:43"
"2024-04-26T16:57:31"
null
NONE
null
### Describe the bug

In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Since a commit by @lhoestq [last week](https://github.com/huggingface/datasets/commit/a188022dc43a76a119d90c03832d51d6e4a94d91), that code lives here:

https://github.com/huggingface/datasets/blob/fe2bea6a4b09b180bd23b88fe96dfd1a11191a4f/src/datasets/utils/file_utils.py#L1097C1-L1111C19

If GitHub code snippets still aren't working, here's a copy:

```python
def read_with_retries(*args, **kwargs):
    disconnect_err = None
    for retry in range(1, max_retries + 1):
        try:
            out = read(*args, **kwargs)
            break
        except (ClientError, TimeoutError) as err:
            disconnect_err = err
            logger.warning(
                f"Got disconnected from remote data host. Retrying in {config.STREAMING_READ_RETRY_INTERVAL}sec [{retry}/{max_retries}]"
            )
            time.sleep(config.STREAMING_READ_RETRY_INTERVAL)
    else:
        raise ConnectionError("Server Disconnected") from disconnect_err
    return out
```

With the latest outage, the end of my stack trace looked like this:

```
...
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 342, in read_with_retries
    out = read(*args, **kwargs)
  File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 301, in read
    return self._buffer.read(size)
  File "/miniconda3/envs/draft/lib/python3.11/_compression.py", line 68, in readinto
    data = self.read(len(byte_view))
  File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 505, in read
    buf = self._fp.read(io.DEFAULT_BUFFER_SIZE)
  File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 88, in read
    return self.file.read(size)
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/spec.py", line 1856, in read
    out = self.cache._fetch(self.loc, self.loc + length)
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/caching.py", line 189, in _fetch
    self.cache = self.fetcher(start, end)  # new block replaces old
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range
    hf_raise_for_status(r)
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status
    raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/allenai/c4/resolve/1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00346-of-01024.json.gz
```

Indeed, the code for retries only catches `ClientError`s and `TimeoutError`s, and all other exceptions, *including HuggingFace's own custom HTTP error class*, **are not caught. Nothing is retried,** and instead the exception is propagated upwards immediately.

### Steps to reproduce the bug

Not sure how you reproduce this. Maybe unplug your Ethernet cable while streaming a dataset; the issue is pretty clear from the stack trace.

### Expected behavior

All HTTP errors while iterating a streamable dataset should cause retries.

### Environment info

Output from `datasets-cli env`:

- `datasets` version: 2.18.0
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6843/timeline
null
null
null
null
false
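Editor's note on issue #6843 above: the comments ask for clipped exponential backoff between retries. A minimal, generic sketch of that idea (not the implementation used by `datasets` or `huggingface_hub`) might look like this; the initial and maximum wait times are arbitrary example values.

```python
# Clipped exponential backoff: double the wait after each failure, up to a cap.
import random
import time

def retry_with_backoff(fn, retryable=(ConnectionError, TimeoutError),
                       max_retries=10, base_wait=1.0, max_wait=300.0):
    wait = base_wait
    for attempt in range(1, max_retries + 1):
        try:
            return fn()
        except retryable:
            if attempt == max_retries:
                raise
            time.sleep(wait + random.uniform(0, 1))  # small jitter
            wait = min(wait * 2, max_wait)
```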
https://api.github.com/repos/huggingface/datasets/issues/6842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6842/comments
https://api.github.com/repos/huggingface/datasets/issues/6842/events
https://github.com/huggingface/datasets/issues/6842
2,264,692,159
I_kwDODunzps6G_HW_
6,842
Datasets with files with colon : in filenames cannot be used on Windows
{ "login": "jacobjennings", "id": 1038927, "node_id": "MDQ6VXNlcjEwMzg5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/1038927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jacobjennings", "html_url": "https://github.com/jacobjennings", "followers_url": "https://api.github.com/users/jacobjennings/followers", "following_url": "https://api.github.com/users/jacobjennings/following{/other_user}", "gists_url": "https://api.github.com/users/jacobjennings/gists{/gist_id}", "starred_url": "https://api.github.com/users/jacobjennings/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jacobjennings/subscriptions", "organizations_url": "https://api.github.com/users/jacobjennings/orgs", "repos_url": "https://api.github.com/users/jacobjennings/repos", "events_url": "https://api.github.com/users/jacobjennings/events{/privacy}", "received_events_url": "https://api.github.com/users/jacobjennings/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-04-26T00:14:16"
"2024-04-26T00:14:16"
null
NONE
null
### Describe the bug

Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows because Windows does not allow colons ":" in filenames. These should be converted into alternative strings.

### Steps to reproduce the bug

1. Attempt to run load_dataset on MLCommons/peoples_speech

### Expected behavior

Does not crash during extraction

### Environment info

Windows 11, NTFS filesystem, Python 3.12
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6842/timeline
null
null
null
null
false
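Editor's note on issue #6842 above: a hedged illustration of the kind of renaming the report suggests (replace characters Windows forbids in file names before writing extracted files). This is not how `datasets` handles extraction, just a sketch of the idea applied to a single file-name component.

```python
# Replace characters that are invalid in Windows file names with underscores.
import re

_WINDOWS_FORBIDDEN = r'[<>:"|?*]'

def windows_safe_name(filename: str) -> str:
    return re.sub(_WINDOWS_FORBIDDEN, "_", filename)

print(windows_safe_name("train:part-00000.flac"))  # -> train_part-00000.flac
```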
https://api.github.com/repos/huggingface/datasets/issues/6841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6841/comments
https://api.github.com/repos/huggingface/datasets/issues/6841/events
https://github.com/huggingface/datasets/issues/6841
2,264,687,683
I_kwDODunzps6G_GRD
6,841
Unable to load wiki_auto_asset_turk from GEM
{ "login": "abhinavsethy", "id": 23074600, "node_id": "MDQ6VXNlcjIzMDc0NjAw", "avatar_url": "https://avatars.githubusercontent.com/u/23074600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhinavsethy", "html_url": "https://github.com/abhinavsethy", "followers_url": "https://api.github.com/users/abhinavsethy/followers", "following_url": "https://api.github.com/users/abhinavsethy/following{/other_user}", "gists_url": "https://api.github.com/users/abhinavsethy/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhinavsethy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhinavsethy/subscriptions", "organizations_url": "https://api.github.com/users/abhinavsethy/orgs", "repos_url": "https://api.github.com/users/abhinavsethy/repos", "events_url": "https://api.github.com/users/abhinavsethy/events{/privacy}", "received_events_url": "https://api.github.com/users/abhinavsethy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! I've opened a [PR](https://huggingface.co/datasets/GEM/wiki_auto_asset_turk/discussions/5) with a fix. While waiting for it to be merged, you can load the dataset from the PR branch with `datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")`", "Thanks Mario. Still getting the same issue though with the suggested fix\r\n\r\n#cat gem_sari.py\r\nimport datasets\r\nprint (datasets.__version__)\r\ndataset =datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")\r\n\r\nEnd up with \r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py\", line 2582, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1767, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1100, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py\", line 1565, in _prepare_split\r\n split_info = self.info.splits[split_generator.name]\r\n ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py\", line 532, in __getitem__\r\n instructions = make_file_instructions(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py\", line 121, in make_file_instructions\r\n info.name: filenames_for_dataset_split(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py\", line 72, in filenames_for_dataset_split\r\n prefix = os.path.join(path, prefix)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"<frozen posixpath>\", line 76, in join\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType", "Hmm, that's weird. Maybe try deleting the cache with `!rm -rf ~/.cache/huggingface/datasets` and then re-download.", "Tried that a couple of time. It does download the data fresh but end up with same error. Is there a way to see if its using the right version ?", "You can check the version with `python -c \"import datasets; print(datasets.__version__)\"`", "the datasets version is 2.18. \r\n\r\nI wanted to see if the command datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\") is using the right revision (refs/pr/5). \r\n\r\n\r\n\r\n\r\n\r\n " ]
"2024-04-26T00:08:47"
"2024-04-26T17:22:58"
"2024-04-26T16:12:29"
NONE
null
### Describe the bug

I am unable to load the wiki_auto_asset_turk dataset. I get a fatal error while trying to access wiki_auto_asset_turk and load it with datasets.load_dataset. The error (TypeError: expected str, bytes or os.PathLike object, not NoneType) is from filenames_for_dataset_split in a os.path.join call

>>import datasets
>>print (datasets.__version__)
>>dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")

System output:

Generating train split: 100%|█| 483801/483801 [00:03<00:00, 127164.26 examples/s
Generating validation split: 100%|█| 20000/20000 [00:00<00:00, 116052.94 example
Generating test_asset split: 100%|██| 359/359 [00:00<00:00, 76155.93 examples/s]
Generating test_turk split: 100%|███| 359/359 [00:00<00:00, 87691.76 examples/s]
Traceback (most recent call last):
  File "/Users/abhinav.sethy/Code/openai_evals/evals/evals/grammarly_tasks/gem_sari.py", line 3, in <module>
    dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/load.py", line 2582, in load_dataset
    builder_instance.download_and_prepare(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1005, in download_and_prepare
    self._download_and_prepare(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1767, in _download_and_prepare
    super()._download_and_prepare(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1100, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/builder.py", line 1565, in _prepare_split
    split_info = self.info.splits[split_generator.name]
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/splits.py", line 532, in __getitem__
    instructions = make_file_instructions(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/arrow_reader.py", line 121, in make_file_instructions
    info.name: filenames_for_dataset_split(
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/datasets/naming.py", line 72, in filenames_for_dataset_split
    prefix = os.path.join(path, prefix)
  File "<frozen posixpath>", line 76, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType

### Steps to reproduce the bug

import datasets
print (datasets.__version__)
dataset = datasets.load_dataset("GEM/wiki_auto_asset_turk")

### Expected behavior

Should be able to load the dataset without any issues

### Environment info

datasets version 2.18.0 (was able to reproduce bug with older versions 2.16 and 2.14 also)
Python 3.12.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6841/timeline
null
completed
null
null
false
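Editor's note on issue #6841 above: regarding the last question in the thread (how to confirm which revision was actually used), one hedged way is to resolve the revision with `huggingface_hub` directly, independently of the `datasets` cache.

```python
# Resolve which commit "refs/pr/5" points to for the dataset repo.
from huggingface_hub import HfApi

info = HfApi().dataset_info("GEM/wiki_auto_asset_turk", revision="refs/pr/5")
print(info.sha)  # commit id the revision resolves to
```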
https://api.github.com/repos/huggingface/datasets/issues/6840
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6840/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6840/comments
https://api.github.com/repos/huggingface/datasets/issues/6840/events
https://github.com/huggingface/datasets/issues/6840
2,264,604,766
I_kwDODunzps6G-yBe
6,840
Delete uploaded files from the UI
{ "login": "saicharan2804", "id": 62512681, "node_id": "MDQ6VXNlcjYyNTEyNjgx", "avatar_url": "https://avatars.githubusercontent.com/u/62512681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saicharan2804", "html_url": "https://github.com/saicharan2804", "followers_url": "https://api.github.com/users/saicharan2804/followers", "following_url": "https://api.github.com/users/saicharan2804/following{/other_user}", "gists_url": "https://api.github.com/users/saicharan2804/gists{/gist_id}", "starred_url": "https://api.github.com/users/saicharan2804/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saicharan2804/subscriptions", "organizations_url": "https://api.github.com/users/saicharan2804/orgs", "repos_url": "https://api.github.com/users/saicharan2804/repos", "events_url": "https://api.github.com/users/saicharan2804/events{/privacy}", "received_events_url": "https://api.github.com/users/saicharan2804/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2024-04-25T22:33:57"
"2024-04-25T22:33:57"
null
NONE
null
### Feature request

Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI.

### Motivation

Would be a useful addition

### Your contribution

Would love to help out with some guidance
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6840/timeline
null
null
null
null
false
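Editor's note on issue #6840 above: the request is about the web UI; for completeness, individual files can already be deleted programmatically via `huggingface_hub`, which may serve as a stopgap. The repo id and path below are placeholders.

```python
# Delete a single file from a dataset repo without touching the rest of it.
from huggingface_hub import HfApi

api = HfApi()
api.delete_file(
    path_in_repo="data/old_shard.parquet",  # placeholder path
    repo_id="username/my-dataset",          # placeholder dataset repo
    repo_type="dataset",
    commit_message="Remove obsolete shard",
)
```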
https://api.github.com/repos/huggingface/datasets/issues/6839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6839/comments
https://api.github.com/repos/huggingface/datasets/issues/6839/events
https://github.com/huggingface/datasets/pull/6839
2,263,761,062
PR_kwDODunzps5tvC1c
6,839
Remove token arg from CLI examples
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6839). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005311 / 0.011353 (-0.006042) | 0.003691 / 0.011008 (-0.007317) | 0.063714 / 0.038508 (0.025206) | 0.030875 / 0.023109 (0.007766) | 0.251210 / 0.275898 (-0.024688) | 0.280539 / 0.323480 (-0.042941) | 0.004262 / 0.007986 (-0.003724) | 0.002723 / 0.004328 (-0.001606) | 0.049487 / 0.004250 (0.045237) | 0.045655 / 0.037052 (0.008603) | 0.264399 / 0.258489 (0.005910) | 0.306613 / 0.293841 (0.012772) | 0.028513 / 0.128546 (-0.100033) | 0.010726 / 0.075646 (-0.064921) | 0.210601 / 0.419271 (-0.208670) | 0.036918 / 0.043533 (-0.006614) | 0.257872 / 0.255139 (0.002733) | 0.278951 / 0.283200 (-0.004249) | 0.017900 / 0.141683 (-0.123783) | 1.096749 / 1.452155 (-0.355406) | 1.152603 / 1.492716 (-0.340113) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095193 / 0.018006 (0.077187) | 0.303919 / 0.000490 (0.303429) | 0.000226 / 0.000200 (0.000026) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018558 / 0.037411 (-0.018853) | 0.061106 / 0.014526 (0.046580) | 0.076233 / 0.176557 (-0.100323) | 0.122402 / 0.737135 (-0.614734) | 0.075579 / 0.296338 (-0.220760) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283586 / 0.215209 (0.068377) | 2.766179 / 2.077655 (0.688524) | 1.481069 / 1.504120 (-0.023051) | 1.355004 / 1.541195 (-0.186191) | 1.392940 / 1.468490 (-0.075550) | 0.578878 / 4.584777 (-4.005899) | 2.432890 / 3.745712 (-1.312822) | 2.837912 / 5.269862 (-2.431949) | 1.762803 / 4.565676 (-2.802873) | 0.063339 / 0.424275 (-0.360937) | 0.005392 / 0.007607 (-0.002215) | 0.340271 / 0.226044 (0.114227) | 3.388371 / 2.268929 (1.119443) | 1.862622 / 55.444624 (-53.582002) | 1.543209 / 6.876477 (-5.333268) | 1.569858 / 2.142072 (-0.572215) | 0.651487 / 4.805227 (-4.153740) | 0.119048 / 6.500664 (-6.381616) | 0.042309 / 0.075469 (-0.033160) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.991161 / 1.841788 (-0.850627) | 11.778857 / 8.074308 (3.704549) | 9.586019 / 10.191392 (-0.605373) | 0.148093 / 0.680424 (-0.532331) | 0.014301 / 0.534201 (-0.519900) | 0.287983 / 0.579283 (-0.291301) | 0.266070 / 0.434364 (-0.168293) | 0.328261 / 0.540337 (-0.212076) | 0.417908 / 1.386936 (-0.969028) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005252 / 0.011353 (-0.006100) | 0.003740 / 0.011008 (-0.007268) | 0.049622 / 0.038508 (0.011114) | 0.030040 / 0.023109 (0.006931) | 0.262224 / 0.275898 (-0.013674) | 0.312216 / 0.323480 (-0.011264) | 0.004213 / 0.007986 (-0.003773) | 0.002737 / 0.004328 (-0.001592) | 0.049159 / 0.004250 (0.044908) | 0.041060 / 0.037052 (0.004008) | 0.275826 / 0.258489 (0.017337) | 0.301879 / 0.293841 (0.008038) | 0.029364 / 0.128546 (-0.099182) | 0.010453 / 0.075646 (-0.065193) | 0.058095 / 0.419271 (-0.361176) | 0.032898 / 0.043533 (-0.010635) | 0.263876 / 0.255139 (0.008737) | 0.281686 / 0.283200 (-0.001514) | 0.018711 / 0.141683 (-0.122971) | 1.126056 / 1.452155 (-0.326098) | 1.185125 / 1.492716 (-0.307591) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094153 / 0.018006 (0.076147) | 0.300719 / 0.000490 (0.300229) | 0.000207 / 0.000200 (0.000007) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022610 / 0.037411 (-0.014801) | 0.075502 / 0.014526 (0.060977) | 0.088858 / 0.176557 (-0.087699) | 0.129421 / 0.737135 (-0.607714) | 0.089331 / 0.296338 (-0.207007) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291595 / 0.215209 (0.076386) | 2.864377 / 2.077655 (0.786722) | 1.543387 / 1.504120 (0.039267) | 1.404273 / 1.541195 (-0.136922) | 1.421964 / 1.468490 (-0.046526) | 0.579275 / 4.584777 (-4.005502) | 0.979212 / 3.745712 (-2.766500) | 2.822043 / 5.269862 (-2.447818) | 1.745015 / 4.565676 (-2.820661) | 0.064626 / 0.424275 (-0.359649) | 0.005006 / 0.007607 (-0.002601) | 0.345509 / 0.226044 (0.119464) | 3.410369 / 2.268929 (1.141440) | 1.875930 / 55.444624 (-53.568694) | 1.600841 / 6.876477 (-5.275636) | 1.611818 / 2.142072 (-0.530254) | 0.662277 / 4.805227 (-4.142950) | 0.117861 / 6.500664 (-6.382803) | 0.041061 / 0.075469 (-0.034408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007834 / 1.841788 (-0.833954) | 12.345653 / 8.074308 (4.271345) | 9.775237 / 10.191392 (-0.416155) | 0.135166 / 0.680424 (-0.545258) | 0.016799 / 0.534201 (-0.517402) | 0.289235 / 0.579283 (-0.290048) | 0.126196 / 0.434364 (-0.308168) | 0.382905 / 0.540337 (-0.157432) | 0.435248 / 1.386936 (-0.951688) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#22bf5388748611a9255d8e17218d36d2f799f182 \"CML watermark\")\n" ]
"2024-04-25T14:36:58"
"2024-04-26T17:03:51"
"2024-04-26T16:57:40"
MEMBER
null
Remove token arg from CLI examples. Fix #6838. CC: @Wauplin
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6839/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6839/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6839", "html_url": "https://github.com/huggingface/datasets/pull/6839", "diff_url": "https://github.com/huggingface/datasets/pull/6839.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6839.patch", "merged_at": "2024-04-26T16:57:40" }
true
https://api.github.com/repos/huggingface/datasets/issues/6838
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6838/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6838/comments
https://api.github.com/repos/huggingface/datasets/issues/6838/events
https://github.com/huggingface/datasets/issues/6838
2,263,674,843
I_kwDODunzps6G7O_b
6,838
Remove token arg from CLI examples
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-04-25T14:00:38"
"2024-04-26T16:57:41"
"2024-04-26T16:57:41"
MEMBER
null
As suggested by @Wauplin, see: https://github.com/huggingface/datasets/pull/6831#discussion_r1579492603

> I would not advertise the --token arg in the example as this shouldn't be the recommended way (best to login with env variable or huggingface-cli login)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6838/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6838/timeline
null
completed
null
null
false
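Editor's note on issue #6838 above: a short, hedged illustration of the alternatives the quoted comment recommends instead of passing `--token` on the command line (log in once, or provide the token via the environment; the token value below is a placeholder).

```python
# Option 1: log in once; the token is stored locally and picked up automatically.
from huggingface_hub import login

login()  # prompts for a token interactively

# Option 2: provide the token through the environment before running (placeholder value).
import os
os.environ["HF_TOKEN"] = "hf_xxx"
```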
https://api.github.com/repos/huggingface/datasets/issues/6837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6837/comments
https://api.github.com/repos/huggingface/datasets/issues/6837/events
https://github.com/huggingface/datasets/issues/6837
2,263,273,983
I_kwDODunzps6G5tH_
6,837
Cannot use cached dataset without Internet connection (or when servers are down)
{ "login": "DionisMuzenitov", "id": 112088378, "node_id": "U_kgDOBq5VOg", "avatar_url": "https://avatars.githubusercontent.com/u/112088378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DionisMuzenitov", "html_url": "https://github.com/DionisMuzenitov", "followers_url": "https://api.github.com/users/DionisMuzenitov/followers", "following_url": "https://api.github.com/users/DionisMuzenitov/following{/other_user}", "gists_url": "https://api.github.com/users/DionisMuzenitov/gists{/gist_id}", "starred_url": "https://api.github.com/users/DionisMuzenitov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DionisMuzenitov/subscriptions", "organizations_url": "https://api.github.com/users/DionisMuzenitov/orgs", "repos_url": "https://api.github.com/users/DionisMuzenitov/repos", "events_url": "https://api.github.com/users/DionisMuzenitov/events{/privacy}", "received_events_url": "https://api.github.com/users/DionisMuzenitov/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "There are 2 workarounds, tho:\r\n1. Download datasets from web and just load them locally\r\n2. Use metadata directly (temporal solution, since metadata can change)\r\n```\r\nimport datasets\r\nfrom datasets.data_files import DataFilesDict, DataFilesList\r\n\r\ndata_files_list = DataFilesList(\r\n [\r\n \"hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00000-of-01024.json.gz\"\r\n ],\r\n [(\"allenai/c4\", \"1588ec454efa1a09f29cd18ddd04fe05fc8653a2\")],\r\n)\r\ndata_files = DataFilesDict({\"train\": data_files_list})\r\nc4_dataset = datasets.load_dataset(\r\n path=\"allenai/c4\",\r\n data_files=data_files,\r\n split=\"train\",\r\n cache_dir=\"/datesets/cache\",\r\n download_mode=\"reuse_cache_if_exists\",\r\n token=False,\r\n)\r\n```\r\nSecond solution also shows where to find the bug. I suggest that the hashing functions should always use only original parameter `data_files`, and not the one they get after connecting to the server and creating `DataFilesDict`", "Hi! You need to set the `HF_DATASETS_OFFLINE` env variable to `1` to load cached datasets offline, as explained in the docs [here](https://huggingface.co/docs/datasets/v2.19.0/en/loading#offline).", "Just tested. It doesn't work, because of the exact problem I described above: hash of dataset config is different.\r\nThe only error difference is the reason why it cannot connect to HuggingFace (now it's 'offline mode is enabled')\r\n![image](https://github.com/huggingface/datasets/assets/112088378/1a7e1720-d711-46e3-9c90-53d52c441e68)\r\n" ]
"2024-04-25T10:48:20"
"2024-04-26T14:27:15"
null
NONE
null
### Describe the bug

I want to be able to use a cached dataset from HuggingFace even when I have no Internet connection (or when HuggingFace servers are down, or my company has network issues). The reason I can't: the `data_files` argument of `datasets.load_dataset()` gets updated from the server before the hash used for caching is calculated. As a result, when I run the same code with and without Internet I get a different dataset configuration directory name.

### Steps to reproduce the bug

```
import datasets

c4_dataset = datasets.load_dataset(
    path="allenai/c4",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    cache_dir="/datesets/cache",
    download_mode="reuse_cache_if_exists",
    token=False,
)
```

1. Run this code with the Internet.
2. Run the same code without the Internet.

### Expected behavior

When running without an Internet connection, the loader should be able to get the dataset from the cache.

### Environment info

- `datasets` version: 2.19.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.13
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.12.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6837/timeline
null
null
null
null
false
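Editor's note on issue #6837 above: for reference, the documented offline path mentioned in the comments looks like the sketch below (enable offline mode before importing `datasets`). The report says this still fails for `data_files`-based loads because the config hash differs, so treat it as the intended usage rather than a confirmed workaround.

```python
# Enable offline mode so load_dataset only looks at the local cache.
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

c4_dataset = datasets.load_dataset(
    path="allenai/c4",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
)
```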
https://api.github.com/repos/huggingface/datasets/issues/6836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6836/comments
https://api.github.com/repos/huggingface/datasets/issues/6836/events
https://github.com/huggingface/datasets/issues/6836
2,262,249,919
I_kwDODunzps6G1zG_
6,836
ExpectedMoreSplits error on load_dataset when upgrading to 2.19.0
{ "login": "ebsmothers", "id": 24319399, "node_id": "MDQ6VXNlcjI0MzE5Mzk5", "avatar_url": "https://avatars.githubusercontent.com/u/24319399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ebsmothers", "html_url": "https://github.com/ebsmothers", "followers_url": "https://api.github.com/users/ebsmothers/followers", "following_url": "https://api.github.com/users/ebsmothers/following{/other_user}", "gists_url": "https://api.github.com/users/ebsmothers/gists{/gist_id}", "starred_url": "https://api.github.com/users/ebsmothers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ebsmothers/subscriptions", "organizations_url": "https://api.github.com/users/ebsmothers/orgs", "repos_url": "https://api.github.com/users/ebsmothers/repos", "events_url": "https://api.github.com/users/ebsmothers/events{/privacy}", "received_events_url": "https://api.github.com/users/ebsmothers/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-04-24T21:52:35"
"2024-04-24T21:52:35"
null
NONE
null
### Describe the bug

Hi there, thanks for the great library! We have been using it a lot in torchtune and it's been a huge help for us.

Regarding the bug: the same call to `load_dataset` errors with `ExpectedMoreSplits` in 2.19.0 after working fine in 2.18.0. Full details given in the repro below.

### Steps to reproduce the bug

On 2.18.0, things work fine:

```
# First clear the locally cached dataset
rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired
pip install "datasets==2.18.0"
python3
>>> from datasets import load_dataset
>>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl')
```

On 2.19.0, they do not:

```
# First clear the locally cached dataset
rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired
pip install "datasets==2.19.0"
python3
>>> from datasets import load_dataset
>>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl')
```

The stack trace I see from the 2.19.0 version of load_dataset can be seen [here](https://gist.github.com/ebsmothers/f9b1f1949bee7030a8d7bb8a491550d2). (Maybe unsurprising but) notably if I do not delete the cache first I am able to load the dataset successfully. So based on this I suspect the cause is somewhere in the download logic.

### Expected behavior

Download the dataset successfully :)

### Environment info

- `datasets` version: 2.19.0
- Platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
- Python version: 3.11.9
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6836/timeline
null
null
null
null
false
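Editor's note on issue #6836 above: until the regression is tracked down, one hedged workaround idea (not verified against this dataset) is to relax split verification, since `ExpectedMoreSplits` is raised by the split checks; this bypasses the check rather than fixing the underlying cause.

```python
# Workaround sketch: skip split/size verification for this load.
from datasets import load_dataset

dataset = load_dataset(
    "lvwerra/stack-exchange-paired",
    split="train",
    data_dir="data/rl",
    verification_mode="no_checks",
)
```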
https://api.github.com/repos/huggingface/datasets/issues/6835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6835/comments
https://api.github.com/repos/huggingface/datasets/issues/6835/events
https://github.com/huggingface/datasets/pull/6835
2,261,079,263
PR_kwDODunzps5tl2fc
6,835
LargeListType support #6834
{ "login": "Modexus", "id": 37351874, "node_id": "MDQ6VXNlcjM3MzUxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Modexus", "html_url": "https://github.com/Modexus", "followers_url": "https://api.github.com/users/Modexus/followers", "following_url": "https://api.github.com/users/Modexus/following{/other_user}", "gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}", "starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Modexus/subscriptions", "organizations_url": "https://api.github.com/users/Modexus/orgs", "repos_url": "https://api.github.com/users/Modexus/repos", "events_url": "https://api.github.com/users/Modexus/events{/privacy}", "received_events_url": "https://api.github.com/users/Modexus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6835). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Fixed the conversion from `pyarrow` to `python` `Sequence` features. \r\n\r\nThere is still an issue that if `features` are passed the `Sequence` always forces conversion to `ListArray`.\r\nThis probably causes issues if the `LargeListArray` is actually needed.\r\n\r\nThere doesn't seem to be a great solution since this list is created solely on the `schema` for `Sequence`.\r\nOne solution would be to always use `LargeListArray` instead.\r\n" ]
"2024-04-24T11:34:24"
"2024-04-24T13:54:03"
null
CONTRIBUTOR
null
Fixes #6834
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6835/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6835", "html_url": "https://github.com/huggingface/datasets/pull/6835", "diff_url": "https://github.com/huggingface/datasets/pull/6835.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6835.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6834/comments
https://api.github.com/repos/huggingface/datasets/issues/6834/events
https://github.com/huggingface/datasets/issues/6834
2,261,078,104
I_kwDODunzps6GxVBY
6,834
largelisttype not supported (.from_polars())
{ "login": "Modexus", "id": 37351874, "node_id": "MDQ6VXNlcjM3MzUxODc0", "avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Modexus", "html_url": "https://github.com/Modexus", "followers_url": "https://api.github.com/users/Modexus/followers", "following_url": "https://api.github.com/users/Modexus/following{/other_user}", "gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}", "starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Modexus/subscriptions", "organizations_url": "https://api.github.com/users/Modexus/orgs", "repos_url": "https://api.github.com/users/Modexus/repos", "events_url": "https://api.github.com/users/Modexus/events{/privacy}", "received_events_url": "https://api.github.com/users/Modexus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-04-24T11:33:43"
"2024-04-24T12:06:37"
null
CONTRIBUTOR
null
### Describe the bug The following code fails because LargeListType is not supported. This is especially a problem for .from_polars since polars uses LargeListType. ### Steps to reproduce the bug ```python import datasets import polars as pl df = pl.DataFrame({"list": [[]]}) datasets.Dataset.from_polars(df) ``` ### Expected behavior Convert LargeListType to list. ### Environment info - `datasets` version: 2.19.1.dev0 - Platform: Linux-6.8.7-200.fc39.x86_64-x86_64-with-glibc2.38 - Python version: 3.12.2 - `huggingface_hub` version: 0.22.2 - PyArrow version: 16.0.0 - Pandas version: 2.1.4 - `fsspec` version: 2024.3.1
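A possible workaround sketch until `LargeListType` is handled natively (assumes the frame fits in memory and that a round-trip through pandas is acceptable): pandas materializes the column as plain Python lists, which `Dataset.from_pandas` can ingest without ever seeing the Arrow `large_list` type.

```python
import datasets
import polars as pl

# Workaround sketch: route through pandas so the Arrow large_list type never
# reaches datasets directly. Assumes the data fits comfortably in memory.
df = pl.DataFrame({"list": [[1, 2], [3]]})
ds = datasets.Dataset.from_pandas(df.to_pandas())
print(ds.features)
```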
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6834/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6833/comments
https://api.github.com/repos/huggingface/datasets/issues/6833/events
https://github.com/huggingface/datasets/issues/6833
2,259,731,274
I_kwDODunzps6GsMNK
6,833
Super slow iteration with trivial custom transform
{ "login": "xslittlegrass", "id": 2780075, "node_id": "MDQ6VXNlcjI3ODAwNzU=", "avatar_url": "https://avatars.githubusercontent.com/u/2780075?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xslittlegrass", "html_url": "https://github.com/xslittlegrass", "followers_url": "https://api.github.com/users/xslittlegrass/followers", "following_url": "https://api.github.com/users/xslittlegrass/following{/other_user}", "gists_url": "https://api.github.com/users/xslittlegrass/gists{/gist_id}", "starred_url": "https://api.github.com/users/xslittlegrass/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xslittlegrass/subscriptions", "organizations_url": "https://api.github.com/users/xslittlegrass/orgs", "repos_url": "https://api.github.com/users/xslittlegrass/repos", "events_url": "https://api.github.com/users/xslittlegrass/events{/privacy}", "received_events_url": "https://api.github.com/users/xslittlegrass/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-04-23T20:40:59"
"2024-04-23T20:40:59"
null
NONE
null
### Describe the bug Dataset iteration is 10x slower when applying a trivial transform: ``` import time import numpy as np from datasets import Dataset, Features, Array2D a = np.zeros((800, 800)) a = np.stack([a] * 1000) features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")}) ds1 = Dataset.from_dict({"a": a}, features=features).with_format('numpy') def transform(batch): return batch ds2 = ds1.with_transform(transform) %time sum(1 for _ in ds1) %time sum(1 for _ in ds2) ``` ``` CPU times: user 472 ms, sys: 319 ms, total: 791 ms Wall time: 794 ms CPU times: user 9.32 s, sys: 443 ms, total: 9.76 s Wall time: 9.78 s ``` In my real code I'm using set_transform to apply some post-processing on-the-fly to the 2d array, but it significantly slows down the dataset even if the transform itself is trivial. Related issue: https://github.com/huggingface/datasets/issues/5841 ### Steps to reproduce the bug Use the code in the description to reproduce. ### Expected behavior The trivial custom transform in the example should not slow down the dataset iteration. ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35 - Python version: 3.11.4 - `huggingface_hub` version: 0.20.2 - PyArrow version: 15.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.12.2
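A possible mitigation sketch for this case (it does not remove the per-example formatting overhead, only amortizes it): iterate in batches with `Dataset.iter`, so the transform and format conversion run once per batch rather than once per row. Any speedup here is an assumption, not a measurement on the reporter's machine.

```python
# Mitigation sketch: amortize transform/format overhead over batches.
# Reuses ds2 from the snippet above; batch_size 256 is an arbitrary choice.
count = 0
for batch in ds2.iter(batch_size=256):
    count += len(batch["a"])
print(count)
```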
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6833/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6832/comments
https://api.github.com/repos/huggingface/datasets/issues/6832/events
https://github.com/huggingface/datasets/pull/6832
2,258,761,447
PR_kwDODunzps5teFoJ
6,832
Support downloading specific splits in `load_dataset`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6832). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
"2024-04-23T12:32:27"
"2024-04-25T17:05:42"
null
CONTRIBUTOR
null
This PR builds on https://github.com/huggingface/datasets/pull/6639 to support downloading only the specified splits in `load_dataset`. For this to work, a builder's `_split_generators` needs to be able to accept the requested splits (as a list) via a `splits` argument to avoid processing the non-requested ones. Also, the builder has to define an `_available_splits` method that lists all the possible `splits` values. Close https://github.com/huggingface/datasets/issues/4101, close https://github.com/huggingface/datasets/issues/2538 (I'm probably missing some). Should also make it possible to address https://github.com/huggingface/datasets/issues/6793
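To make the proposed interface concrete, a hypothetical builder sketch: the `splits` argument and the `_available_splits` method follow the PR description above, while the class name, URLs, and generator body are invented purely for illustration.

```python
import datasets


class MyBuilder(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _available_splits(self):
        # Every split this builder could generate.
        return ["train", "validation", "test"]

    def _split_generators(self, dl_manager, splits):
        # `splits` lists only the requested splits, so the others are never
        # downloaded or processed.
        urls = {split: f"https://example.com/{split}.jsonl" for split in splits}
        paths = dl_manager.download(urls)
        return [
            datasets.SplitGenerator(name=split, gen_kwargs={"path": paths[split]})
            for split in splits
        ]

    def _generate_examples(self, path):
        with open(path, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```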
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6832/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6832", "html_url": "https://github.com/huggingface/datasets/pull/6832", "diff_url": "https://github.com/huggingface/datasets/pull/6832.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6832.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6831/comments
https://api.github.com/repos/huggingface/datasets/issues/6831/events
https://github.com/huggingface/datasets/pull/6831
2,258,537,405
PR_kwDODunzps5tdTy_
6,831
Add docs about the CLI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6831). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Concretely, the docs about convert_to_parquet are here: https://moon-ci-docs.huggingface.co/docs/datasets/pr_6831/en/cli#convert-to-parquet", "There is an issue with the example snippet when copy/pasting it: the leading shell dollar sign is also copied. I guess they will not like to fix it in the backend: currently they only support Python code snippets (with leading `>>>` or `...`), as they appear in the IPython interactive console.\r\n\r\nWhat do you suggest, @severo?" ]
"2024-04-23T10:41:03"
"2024-04-26T16:51:09"
"2024-04-25T10:44:10"
MEMBER
null
Add docs about the CLI. Close #6830. CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6831/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6831", "html_url": "https://github.com/huggingface/datasets/pull/6831", "diff_url": "https://github.com/huggingface/datasets/pull/6831.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6831.patch", "merged_at": "2024-04-25T10:44:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/6830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6830/comments
https://api.github.com/repos/huggingface/datasets/issues/6830/events
https://github.com/huggingface/datasets/issues/6830
2,258,433,178
I_kwDODunzps6GnPSa
6,830
Add a doc page for the convert_to_parquet CLI
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2024-04-23T09:49:04"
"2024-04-25T10:44:11"
"2024-04-25T10:44:11"
CONTRIBUTOR
null
Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6830/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6830/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6829/comments
https://api.github.com/repos/huggingface/datasets/issues/6829/events
https://github.com/huggingface/datasets/issues/6829
2,258,424,577
I_kwDODunzps6GnNMB
6,829
Load and save from/to disk no longer accept pathlib.Path
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
"2024-04-23T09:44:45"
"2024-04-23T09:44:46"
null
MEMBER
null
Reported by @vttrifonov at https://github.com/huggingface/datasets/pull/6704#issuecomment-2071168296: > This change is breaking in > https://github.com/huggingface/datasets/blob/f96e74d5c633cd5435dd526adb4a74631eb05c43/src/datasets/arrow_dataset.py#L1515 > when the input is `pathlib.Path`. The issue is that `url_to_fs` expects a `str` and cannot deal with `Path`. `get_fs_token_paths` converts to `str` so it is not a problem. This change was introduced in: - #6704
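A minimal sketch of the kind of coercion that would restore `pathlib.Path` support at the affected call site. The variable names here merely stand in for the ones used in `arrow_dataset.py`; this is an illustration, not the actual patch.

```python
from fsspec.core import url_to_fs

# Sketch: url_to_fs only accepts strings, so coerce PathLike inputs first.
# dataset_path may be a pathlib.Path; storage_options stands in for the
# surrounding call-site variable.
dataset_path = str(dataset_path)
fs, _ = url_to_fs(dataset_path, **(storage_options or {}))
```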
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6829/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6828/comments
https://api.github.com/repos/huggingface/datasets/issues/6828/events
https://github.com/huggingface/datasets/pull/6828
2,258,420,421
PR_kwDODunzps5tc55y
6,828
Support PathLike input in save_to_disk / load_from_disk
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6828). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
"2024-04-23T09:42:38"
"2024-04-23T11:05:52"
null
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6828/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6828", "html_url": "https://github.com/huggingface/datasets/pull/6828", "diff_url": "https://github.com/huggingface/datasets/pull/6828.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6828.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6827/comments
https://api.github.com/repos/huggingface/datasets/issues/6827/events
https://github.com/huggingface/datasets/issues/6827
2,254,011,833
I_kwDODunzps6GWX25
6,827
Loading a remote dataset fails in the last release (v2.19.0)
{ "login": "zrthxn", "id": 35369637, "node_id": "MDQ6VXNlcjM1MzY5NjM3", "avatar_url": "https://avatars.githubusercontent.com/u/35369637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zrthxn", "html_url": "https://github.com/zrthxn", "followers_url": "https://api.github.com/users/zrthxn/followers", "following_url": "https://api.github.com/users/zrthxn/following{/other_user}", "gists_url": "https://api.github.com/users/zrthxn/gists{/gist_id}", "starred_url": "https://api.github.com/users/zrthxn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zrthxn/subscriptions", "organizations_url": "https://api.github.com/users/zrthxn/orgs", "repos_url": "https://api.github.com/users/zrthxn/repos", "events_url": "https://api.github.com/users/zrthxn/events{/privacy}", "received_events_url": "https://api.github.com/users/zrthxn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-04-19T21:11:58"
"2024-04-19T21:13:42"
null
NONE
null
While loading a dataset with multiple splits I get an error saying `Couldn't find file at <URL>`. I am loading the dataset like so, nothing out of the ordinary. This dataset needs a token to access it. ``` token="hf_myhftoken-sdhbdsjgkhbd" load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token=token) ``` I get the following error: ![Screenshot 2024-04-19 at 11 03 07 PM](https://github.com/huggingface/datasets/assets/35369637/8dce757f-08ff-45dd-85b5-890fced7c5bc) You can see that the URL it is trying to reach has the JSON object of the dataset split appended to the base URL. I think this may be due to a newly introduced issue. I did not have this issue with the previous version of datasets. Everything was fine for me yesterday, and after the release 12 hours ago this seems to have broken. Also, the dataset in question runs custom code; I checked and there have been no commits to the dataset on Hugging Face in 6 months. ### Steps to reproduce the bug Since this happened with one particular dataset for me, I am listing steps to use that dataset. 1. Open https://huggingface.co/datasets/speechcolab/gigaspeech and fill in the form to get access. 2. Create a token on your Hugging Face account with read access. 3. Run the following line, substituting `<your_token_here>` with your token. ``` load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token="<your_token_here>") ``` ### Expected behavior Be able to load the dataset in question. ### Environment info datasets == 2.19.0 python == 3.10 kernel == Linux 6.1.58+
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6827/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6826/comments
https://api.github.com/repos/huggingface/datasets/issues/6826/events
https://github.com/huggingface/datasets/pull/6826
2,252,445,242
PR_kwDODunzps5tJMZh
6,826
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6826). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004893 / 0.011353 (-0.006460) | 0.003238 / 0.011008 (-0.007771) | 0.063143 / 0.038508 (0.024635) | 0.029770 / 0.023109 (0.006661) | 0.229052 / 0.275898 (-0.046846) | 0.254534 / 0.323480 (-0.068945) | 0.003083 / 0.007986 (-0.004903) | 0.002615 / 0.004328 (-0.001714) | 0.049684 / 0.004250 (0.045434) | 0.043745 / 0.037052 (0.006693) | 0.248985 / 0.258489 (-0.009504) | 0.275957 / 0.293841 (-0.017884) | 0.027323 / 0.128546 (-0.101223) | 0.010372 / 0.075646 (-0.065275) | 0.206494 / 0.419271 (-0.212778) | 0.035230 / 0.043533 (-0.008303) | 0.234235 / 0.255139 (-0.020904) | 0.252395 / 0.283200 (-0.030805) | 0.019442 / 0.141683 (-0.122240) | 1.130677 / 1.452155 (-0.321478) | 1.161721 / 1.492716 (-0.330996) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091659 / 0.018006 (0.073653) | 0.301323 / 0.000490 (0.300833) | 0.000212 / 0.000200 (0.000012) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018360 / 0.037411 (-0.019051) | 0.061101 / 0.014526 (0.046575) | 0.072383 / 0.176557 (-0.104174) | 0.117656 / 0.737135 (-0.619479) | 0.073903 / 0.296338 (-0.222436) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.272768 / 0.215209 (0.057558) | 2.655714 / 2.077655 (0.578059) | 1.446254 / 1.504120 (-0.057866) | 1.330543 / 1.541195 (-0.210652) | 1.352527 / 1.468490 (-0.115964) | 0.561428 / 4.584777 (-4.023349) | 2.368182 / 3.745712 (-1.377530) | 2.746508 / 5.269862 (-2.523353) | 1.713972 / 4.565676 (-2.851705) | 0.062046 / 0.424275 (-0.362229) | 0.005427 / 0.007607 (-0.002180) | 0.321652 / 0.226044 (0.095607) | 3.181812 / 2.268929 (0.912883) | 1.766778 / 55.444624 (-53.677846) | 1.492502 / 6.876477 (-5.383975) | 1.534658 / 2.142072 (-0.607415) | 0.640372 / 4.805227 (-4.164856) | 0.118180 / 6.500664 (-6.382484) | 0.042698 / 0.075469 (-0.032771) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993262 / 1.841788 (-0.848525) | 11.512827 / 8.074308 (3.438518) | 9.602140 / 10.191392 (-0.589252) | 0.144723 / 0.680424 (-0.535701) | 0.014122 / 0.534201 (-0.520079) | 0.302211 / 0.579283 (-0.277072) | 0.268026 / 0.434364 (-0.166338) | 0.326524 / 0.540337 (-0.213813) | 0.423781 / 1.386936 (-0.963155) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005388 / 0.011353 (-0.005965) | 0.003535 / 0.011008 (-0.007473) | 0.050139 / 0.038508 (0.011631) | 0.031813 / 0.023109 (0.008704) | 0.269501 / 0.275898 (-0.006397) | 0.294355 / 0.323480 (-0.029125) | 0.004128 / 0.007986 (-0.003858) | 0.002684 / 0.004328 (-0.001644) | 0.049295 / 0.004250 (0.045045) | 0.040129 / 0.037052 (0.003077) | 0.282406 / 0.258489 (0.023917) | 0.309822 / 0.293841 (0.015981) | 0.028506 / 0.128546 (-0.100040) | 0.010434 / 0.075646 (-0.065213) | 0.057890 / 0.419271 (-0.361382) | 0.032487 / 0.043533 (-0.011046) | 0.270631 / 0.255139 (0.015492) | 0.288734 / 0.283200 (0.005534) | 0.018710 / 0.141683 (-0.122973) | 1.151571 / 1.452155 (-0.300583) | 1.195222 / 1.492716 (-0.297494) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.090939 / 0.018006 (0.072932) | 0.300278 / 0.000490 (0.299788) | 0.000202 / 0.000200 (0.000002) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022036 / 0.037411 (-0.015376) | 0.075131 / 0.014526 (0.060605) | 0.087775 / 0.176557 (-0.088782) | 0.125719 / 0.737135 (-0.611416) | 0.088491 / 0.296338 (-0.207848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300363 / 0.215209 (0.085154) | 2.931852 / 2.077655 (0.854197) | 1.633688 / 1.504120 (0.129568) | 1.512641 / 1.541195 (-0.028554) | 1.527703 / 1.468490 (0.059213) | 0.572781 / 4.584777 (-4.011996) | 2.445950 / 3.745712 (-1.299762) | 2.883667 / 5.269862 (-2.386195) | 1.761396 / 4.565676 (-2.804280) | 0.064422 / 0.424275 (-0.359853) | 0.005332 / 0.007607 (-0.002275) | 0.346730 / 0.226044 (0.120686) | 3.443815 / 2.268929 (1.174886) | 1.988677 / 55.444624 (-53.455948) | 1.707688 / 6.876477 (-5.168789) | 1.694216 / 2.142072 (-0.447856) | 0.634834 / 4.805227 (-4.170393) | 0.115044 / 6.500664 (-6.385620) | 0.040853 / 0.075469 (-0.034616) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009382 / 1.841788 (-0.832405) | 12.327511 / 8.074308 (4.253203) | 10.123296 / 10.191392 (-0.068097) | 0.130770 / 0.680424 (-0.549654) | 0.015548 / 0.534201 (-0.518653) | 0.286650 / 0.579283 (-0.292633) | 0.270267 / 0.434364 (-0.164097) | 0.333485 / 0.540337 (-0.206852) | 0.428288 / 1.386936 (-0.958648) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f96e74d5c633cd5435dd526adb4a74631eb05c43 \"CML watermark\")\n" ]
"2024-04-19T08:51:42"
"2024-04-19T09:05:25"
"2024-04-19T08:52:14"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6826/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6826/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6826", "html_url": "https://github.com/huggingface/datasets/pull/6826", "diff_url": "https://github.com/huggingface/datasets/pull/6826.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6826.patch", "merged_at": "2024-04-19T08:52:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/6825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6825/comments
https://api.github.com/repos/huggingface/datasets/issues/6825/events
https://github.com/huggingface/datasets/pull/6825
2,252,404,599
PR_kwDODunzps5tJEMw
6,825
Release: 2.19.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6825). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004945 / 0.011353 (-0.006407) | 0.003290 / 0.011008 (-0.007718) | 0.062404 / 0.038508 (0.023896) | 0.040056 / 0.023109 (0.016946) | 0.246574 / 0.275898 (-0.029324) | 0.275074 / 0.323480 (-0.048406) | 0.004118 / 0.007986 (-0.003867) | 0.002604 / 0.004328 (-0.001724) | 0.048618 / 0.004250 (0.044367) | 0.044088 / 0.037052 (0.007035) | 0.263059 / 0.258489 (0.004570) | 0.294602 / 0.293841 (0.000761) | 0.027425 / 0.128546 (-0.101121) | 0.010263 / 0.075646 (-0.065383) | 0.205925 / 0.419271 (-0.213346) | 0.048917 / 0.043533 (0.005384) | 0.264227 / 0.255139 (0.009088) | 0.273339 / 0.283200 (-0.009860) | 0.017783 / 0.141683 (-0.123900) | 1.137526 / 1.452155 (-0.314629) | 1.179551 / 1.492716 (-0.313165) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096809 / 0.018006 (0.078802) | 0.303854 / 0.000490 (0.303364) | 0.000207 / 0.000200 (0.000007) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017756 / 0.037411 (-0.019655) | 0.061005 / 0.014526 (0.046479) | 0.072986 / 0.176557 (-0.103571) | 0.119851 / 0.737135 (-0.617284) | 0.074733 / 0.296338 (-0.221605) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278270 / 0.215209 (0.063061) | 2.737874 / 2.077655 (0.660219) | 1.460658 / 1.504120 (-0.043462) | 1.337695 / 1.541195 (-0.203499) | 1.364376 / 1.468490 (-0.104114) | 0.565622 / 4.584777 (-4.019155) | 2.365167 / 3.745712 (-1.380546) | 2.694544 / 5.269862 (-2.575317) | 1.699689 / 4.565676 (-2.865987) | 0.062564 / 0.424275 (-0.361712) | 0.005296 / 0.007607 (-0.002311) | 0.340122 / 0.226044 (0.114077) | 3.382133 / 2.268929 (1.113204) | 1.816907 / 55.444624 (-53.627718) | 1.530825 / 6.876477 (-5.345652) | 1.533266 / 2.142072 (-0.608807) | 0.638215 / 4.805227 (-4.167012) | 0.116227 / 6.500664 (-6.384437) | 0.041548 / 0.075469 (-0.033921) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971031 / 1.841788 (-0.870757) | 11.117905 / 8.074308 (3.043597) | 9.358159 / 10.191392 (-0.833233) | 0.127954 / 0.680424 (-0.552470) | 0.013634 / 0.534201 (-0.520567) | 0.285399 / 0.579283 (-0.293885) | 0.267980 / 0.434364 (-0.166383) | 0.320219 / 0.540337 (-0.220119) | 0.416035 / 1.386936 (-0.970901) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005177 / 0.011353 (-0.006176) | 0.003078 / 0.011008 (-0.007930) | 0.049650 / 0.038508 (0.011142) | 0.030897 / 0.023109 (0.007787) | 0.271186 / 0.275898 (-0.004712) | 0.296050 / 0.323480 (-0.027430) | 0.004204 / 0.007986 (-0.003781) | 0.002755 / 0.004328 (-0.001574) | 0.049550 / 0.004250 (0.045300) | 0.039801 / 0.037052 (0.002749) | 0.283243 / 0.258489 (0.024753) | 0.310932 / 0.293841 (0.017091) | 0.029136 / 0.128546 (-0.099410) | 0.010278 / 0.075646 (-0.065368) | 0.059300 / 0.419271 (-0.359971) | 0.032965 / 0.043533 (-0.010568) | 0.272646 / 0.255139 (0.017507) | 0.293697 / 0.283200 (0.010497) | 0.018330 / 0.141683 (-0.123353) | 1.144251 / 1.452155 (-0.307904) | 1.209660 / 1.492716 (-0.283056) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091020 / 0.018006 (0.073014) | 0.298294 / 0.000490 (0.297804) | 0.000214 / 0.000200 (0.000014) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021879 / 0.037411 (-0.015532) | 0.074728 / 0.014526 (0.060202) | 0.085499 / 0.176557 (-0.091057) | 0.125743 / 0.737135 (-0.611392) | 0.086130 / 0.296338 (-0.210208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292311 / 0.215209 (0.077102) | 2.861240 / 2.077655 (0.783585) | 1.590426 / 1.504120 (0.086306) | 1.472288 / 1.541195 (-0.068907) | 1.472901 / 1.468490 (0.004411) | 0.574924 / 4.584777 (-4.009853) | 2.450817 / 3.745712 (-1.294895) | 2.781903 / 5.269862 (-2.487959) | 1.747110 / 4.565676 (-2.818566) | 0.064680 / 0.424275 (-0.359595) | 0.005376 / 0.007607 (-0.002231) | 0.356846 / 0.226044 (0.130802) | 3.457851 / 2.268929 (1.188922) | 1.952678 / 55.444624 (-53.491946) | 1.670824 / 6.876477 (-5.205653) | 1.655872 / 2.142072 (-0.486200) | 0.655874 / 4.805227 (-4.149353) | 0.117098 / 6.500664 (-6.383566) | 0.040230 / 0.075469 (-0.035239) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007423 / 1.841788 (-0.834365) | 11.818228 / 8.074308 (3.743920) | 10.153699 / 10.191392 (-0.037693) | 0.132073 / 0.680424 (-0.548351) | 0.015101 / 0.534201 (-0.519100) | 0.286555 / 0.579283 (-0.292728) | 0.281953 / 0.434364 (-0.152411) | 0.323647 / 0.540337 (-0.216691) | 0.418698 / 1.386936 (-0.968238) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0d3c7462bc67407c42d3ad102b7f9d5914219d9d \"CML watermark\")\n" ]
"2024-04-19T08:29:02"
"2024-04-19T08:50:57"
"2024-04-19T08:44:57"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6825/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6825", "html_url": "https://github.com/huggingface/datasets/pull/6825", "diff_url": "https://github.com/huggingface/datasets/pull/6825.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6825.patch", "merged_at": "2024-04-19T08:44:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/6824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6824/comments
https://api.github.com/repos/huggingface/datasets/issues/6824/events
https://github.com/huggingface/datasets/issues/6824
2,251,076,197
I_kwDODunzps6GLLJl
6,824
Winogrande does not seem to be compatible with datasets version of 1.18.0
{ "login": "spliew", "id": 7878204, "node_id": "MDQ6VXNlcjc4NzgyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7878204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spliew", "html_url": "https://github.com/spliew", "followers_url": "https://api.github.com/users/spliew/followers", "following_url": "https://api.github.com/users/spliew/following{/other_user}", "gists_url": "https://api.github.com/users/spliew/gists{/gist_id}", "starred_url": "https://api.github.com/users/spliew/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spliew/subscriptions", "organizations_url": "https://api.github.com/users/spliew/orgs", "repos_url": "https://api.github.com/users/spliew/repos", "events_url": "https://api.github.com/users/spliew/events{/privacy}", "received_events_url": "https://api.github.com/users/spliew/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Do you mean 2.18 ? Can you try to update `fsspec` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U fsspec huggingface_hub\r\n```", "Yes I meant 2.18, and it works after updating `fsspec` and `huggingface_hub`. Thanks!" ]
"2024-04-18T16:11:04"
"2024-04-19T09:53:15"
"2024-04-19T09:52:33"
NONE
null
### Describe the bug I get the following error when simply running `load_dataset('winogrande','winogrande_xl')`. I do not have such an issue in the 1.17.0 version. ```Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2556, in load_dataset builder_instance = load_dataset_builder( File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2265, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 371, in __init__ self.config, self.config_id = self._create_builder_config( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 620, in _create_builder_config builder_config._resolve_data_files( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 211, in _resolve_data_files self.data_files = self.data_files.resolve(base_path, download_config) File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 799, in resolve out[key] = data_files_patterns_list.resolve(base_path, download_config) File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 752, in resolve resolve_pattern( File "/usr/local/lib/python3.10/dist-packages/datasets/data_files.py", line 393, in resolve_pattern raise FileNotFoundError(error_msg) FileNotFoundError: Unable to find 'hf://datasets/winogrande@ebf71e3c7b5880d019ecf6099c0b09311b1084f5/winogrande_xl/train/0000.parquet' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']``` ### Steps to reproduce the bug from datasets import load_dataset datasets = load_dataset('winogrande','winogrande_xl') ### Expected behavior ```Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.06M/2.06M [00:00<00:00, 5.16MB/s] Downloading data: 
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118k/118k [00:00<00:00, 360kB/s] Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 85.9k/85.9k [00:00<00:00, 242kB/s] Generating train split: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 40398/40398 [00:00<00:00, 845491.12 examples/s] Generating test split: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 1767/1767 [00:00<00:00, 362501.11 examples/s] Generating validation split: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 1267/1267 [00:00<00:00, 318768.11 examples/s]``` ### Environment info datasets version: 1.18.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6824/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6823
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6823/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6823/comments
https://api.github.com/repos/huggingface/datasets/issues/6823/events
https://github.com/huggingface/datasets/issues/6823
2,250,775,569
I_kwDODunzps6GKBwR
6,823
Loading problems of Datasets with a single shard
{ "login": "andjoer", "id": 60151338, "node_id": "MDQ6VXNlcjYwMTUxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/60151338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andjoer", "html_url": "https://github.com/andjoer", "followers_url": "https://api.github.com/users/andjoer/followers", "following_url": "https://api.github.com/users/andjoer/following{/other_user}", "gists_url": "https://api.github.com/users/andjoer/gists{/gist_id}", "starred_url": "https://api.github.com/users/andjoer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andjoer/subscriptions", "organizations_url": "https://api.github.com/users/andjoer/orgs", "repos_url": "https://api.github.com/users/andjoer/repos", "events_url": "https://api.github.com/users/andjoer/events{/privacy}", "received_events_url": "https://api.github.com/users/andjoer/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2024-04-18T13:59:00"
"2024-04-18T17:51:08"
null
NONE
null
### Describe the bug When a dataset is saved to disk with a single shard, it is not loaded back the same way as when it is saved in multiple shards. I installed the latest version of datasets via pip. ### Steps to reproduce the bug The code below reproduces the behavior. All works well when the range of the loop is 10000, but it fails when it is 1000. ``` from PIL import Image import numpy as np from datasets import Dataset, DatasetDict, load_dataset def load_image(): # Generate random noise image noise = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8) return Image.fromarray(noise) def create_dataset(): input_images = [] output_images = [] text_prompts = [] for _ in range(10000): # this is the problematic parameter input_images.append(load_image()) output_images.append(load_image()) text_prompts.append('test prompt') data = {'input_image': input_images, 'output_image': output_images, 'text_prompt': text_prompts} dataset = Dataset.from_dict(data) return DatasetDict({'train': dataset}) dataset = create_dataset() print('dataset before saving') print(dataset) print(dataset['train'].column_names) dataset.save_to_disk('test_ds') print('dataset after loading') dataset_loaded = load_dataset('test_ds') print(dataset_loaded) print(dataset_loaded['train'].column_names) ``` The output for 1000 iterations is: ``` dataset before saving DatasetDict({ train: Dataset({ features: ['input_image', 'output_image', 'text_prompt'], num_rows: 1000 }) }) ['input_image', 'output_image', 'text_prompt'] Saving the dataset (1/1 shards): 100%|█| 1000/1000 [00:00<00:00, 5156.00 example dataset after loading Generating train split: 1 examples [00:00, 230.52 examples/s] DatasetDict({ train: Dataset({ features: ['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split'], num_rows: 1 }) }) ['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split'] ``` For 10000 iterations (8 shards) it is correct: ``` dataset before saving DatasetDict({ train: Dataset({ features: ['input_image', 'output_image', 'text_prompt'], num_rows: 10000 }) }) ['input_image', 'output_image', 'text_prompt'] Saving the dataset (8/8 shards): 100%|█| 10000/10000 [00:01<00:00, 6237.68 examp dataset after loading Generating train split: 10000 examples [00:00, 10773.16 examples/s] DatasetDict({ train: Dataset({ features: ['input_image', 'output_image', 'text_prompt'], num_rows: 10000 }) }) ['input_image', 'output_image', 'text_prompt'] ``` ### Expected behavior The procedure should work for a dataset with one shard the same as for one with multiple shards. ### Environment info - `datasets` version: 2.18.0 - Platform: macOS-14.1-arm64-arm-64bit - Python version: 3.11.8 - `huggingface_hub` version: 0.22.2 - PyArrow version: 15.0.2 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0 Edit: I looked in the source code of load.py in datasets. I should have used "load_from_disk" and it indeed works that way. But ideally load_dataset would have raised an error, the same way as when I pass such a path: ``` if Path(path, config.DATASET_STATE_JSON_FILENAME).exists(): raise ValueError( "You are trying to load a dataset that was saved using `save_to_disk`. " "Please use `load_from_disk` instead." ) ``` Nevertheless, I find it interesting that it works just fine and without a warning if there are multiple shards.
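For completeness, the matching reload call for the snippet above; `load_from_disk` is the documented counterpart of `save_to_disk` and works regardless of the number of shards written.

```python
from datasets import load_from_disk

# Datasets written with save_to_disk should be read back with load_from_disk,
# not load_dataset.
dataset_loaded = load_from_disk("test_ds")
print(dataset_loaded["train"].column_names)
```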
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6823/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6822
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6822/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6822/comments
https://api.github.com/repos/huggingface/datasets/issues/6822/events
https://github.com/huggingface/datasets/pull/6822
2,250,316,258
PR_kwDODunzps5tB8aD
6,822
Fix parquet export infos
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6822). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005084 / 0.011353 (-0.006269) | 0.003658 / 0.011008 (-0.007351) | 0.063369 / 0.038508 (0.024860) | 0.030739 / 0.023109 (0.007630) | 0.244335 / 0.275898 (-0.031564) | 0.271731 / 0.323480 (-0.051749) | 0.004133 / 0.007986 (-0.003853) | 0.002798 / 0.004328 (-0.001530) | 0.048790 / 0.004250 (0.044540) | 0.044054 / 0.037052 (0.007002) | 0.261514 / 0.258489 (0.003025) | 0.292155 / 0.293841 (-0.001686) | 0.027971 / 0.128546 (-0.100575) | 0.010723 / 0.075646 (-0.064923) | 0.207328 / 0.419271 (-0.211944) | 0.035928 / 0.043533 (-0.007605) | 0.245320 / 0.255139 (-0.009819) | 0.268774 / 0.283200 (-0.014426) | 0.017119 / 0.141683 (-0.124564) | 1.107052 / 1.452155 (-0.345103) | 1.151752 / 1.492716 (-0.340965) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089941 / 0.018006 (0.071935) | 0.299788 / 0.000490 (0.299298) | 0.000211 / 0.000200 (0.000012) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018159 / 0.037411 (-0.019252) | 0.061876 / 0.014526 (0.047350) | 0.074733 / 0.176557 (-0.101824) | 0.122070 / 0.737135 (-0.615065) | 0.076100 / 0.296338 (-0.220238) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282209 / 0.215209 (0.067000) | 2.758098 / 2.077655 (0.680444) | 1.482454 / 1.504120 (-0.021666) | 1.372649 / 1.541195 (-0.168546) | 1.373171 / 1.468490 (-0.095319) | 0.563606 / 4.584777 (-4.021171) | 2.406760 / 3.745712 (-1.338952) | 2.796322 / 5.269862 (-2.473540) | 1.732327 / 4.565676 (-2.833350) | 0.063623 / 0.424275 (-0.360652) | 0.005338 / 0.007607 (-0.002269) | 0.337562 / 0.226044 (0.111518) | 3.345225 / 2.268929 (1.076296) | 1.844353 / 55.444624 (-53.600271) | 1.551003 / 6.876477 (-5.325474) | 1.570623 / 2.142072 (-0.571449) | 0.644843 / 4.805227 (-4.160385) | 0.118811 / 6.500664 (-6.381853) | 0.041731 / 0.075469 (-0.033738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970469 / 1.841788 (-0.871319) | 11.775531 / 8.074308 (3.701222) | 9.757852 / 10.191392 (-0.433540) | 0.130187 / 0.680424 (-0.550237) | 0.013654 / 0.534201 (-0.520547) | 0.328387 / 0.579283 (-0.250896) | 0.268181 / 0.434364 (-0.166183) | 0.325230 / 0.540337 (-0.215107) | 0.421055 / 1.386936 (-0.965881) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005846 / 0.011353 (-0.005507) | 0.003606 / 0.011008 (-0.007402) | 0.050787 / 0.038508 (0.012279) | 0.031635 / 0.023109 (0.008526) | 0.277040 / 0.275898 (0.001142) | 0.300544 / 0.323480 (-0.022936) | 0.004200 / 0.007986 (-0.003786) | 0.002749 / 0.004328 (-0.001580) | 0.049449 / 0.004250 (0.045198) | 0.041616 / 0.037052 (0.004564) | 0.289570 / 0.258489 (0.031081) | 0.316138 / 0.293841 (0.022297) | 0.029578 / 0.128546 (-0.098969) | 0.010582 / 0.075646 (-0.065064) | 0.058284 / 0.419271 (-0.360988) | 0.033078 / 0.043533 (-0.010455) | 0.277964 / 0.255139 (0.022825) | 0.295008 / 0.283200 (0.011808) | 0.017753 / 0.141683 (-0.123930) | 1.128635 / 1.452155 (-0.323519) | 1.190142 / 1.492716 (-0.302575) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091504 / 0.018006 (0.073498) | 0.303875 / 0.000490 (0.303385) | 0.000221 / 0.000200 (0.000021) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021413 / 0.037411 (-0.015998) | 0.074825 / 0.014526 (0.060299) | 0.086329 / 0.176557 (-0.090228) | 0.125632 / 0.737135 (-0.611503) | 0.087918 / 0.296338 (-0.208420) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297914 / 0.215209 (0.082705) | 2.922885 / 2.077655 (0.845230) | 1.625758 / 1.504120 (0.121638) | 1.500174 / 1.541195 (-0.041021) | 1.517162 / 1.468490 (0.048672) | 0.576885 / 4.584777 (-4.007892) | 2.458723 / 3.745712 (-1.286989) | 2.798471 / 5.269862 (-2.471391) | 1.762499 / 4.565676 (-2.803178) | 0.064736 / 0.424275 (-0.359539) | 0.005325 / 0.007607 (-0.002282) | 0.351697 / 0.226044 (0.125652) | 3.496223 / 2.268929 (1.227294) | 1.977535 / 55.444624 (-53.467090) | 1.695223 / 6.876477 (-5.181254) | 1.689692 / 2.142072 (-0.452381) | 0.656404 / 4.805227 (-4.148823) | 0.123106 / 6.500664 (-6.377558) | 0.040980 / 0.075469 (-0.034489) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.036972 / 1.841788 (-0.804816) | 12.163931 / 8.074308 (4.089623) | 10.297927 / 10.191392 (0.106535) | 0.144087 / 0.680424 (-0.536337) | 0.015553 / 0.534201 (-0.518648) | 0.286225 / 0.579283 (-0.293058) | 0.275567 / 0.434364 (-0.158797) | 0.332717 / 0.540337 (-0.207620) | 0.423804 / 1.386936 (-0.963132) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0bc709af303c8dc64c973a17016bd5aa5db2f3d5 \"CML watermark\")\n" ]
"2024-04-18T10:21:41"
"2024-04-18T11:15:41"
"2024-04-18T11:09:13"
MEMBER
null
Don't use the parquet export infos when `USE_PARQUET_EXPORT` is False. Otherwise the `datasets-server` might reuse erroneous data when re-running a job. This follows https://github.com/huggingface/datasets/pull/6714
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6822/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6822", "html_url": "https://github.com/huggingface/datasets/pull/6822", "diff_url": "https://github.com/huggingface/datasets/pull/6822.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6822.patch", "merged_at": "2024-04-18T11:09:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/6820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6820/comments
https://api.github.com/repos/huggingface/datasets/issues/6820/events
https://github.com/huggingface/datasets/pull/6820
2,248,471,673
PR_kwDODunzps5s7sgy
6,820
Allow deleting a subset/config from a no-script dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6820). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "This is ready for review, @huggingface/datasets.", "I am adding a test...", "@lhoestq I am getting an error in the test and I think it happens because the CI endpoint does not have the /preupload functionality:\r\n```\r\nhuggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-662a4de9-7134df595e29e4c073ac1298;332ff6e3-597a-4dfc-89df-4e9ac64215ad)\r\n\r\nRepository Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-6c54e2-17140484441915/preupload/main?create_pr=1.\r\nPlease make sure you specified the correct `repo_id` and `repo_type`.\r\nIf you are trying to access a private or gated repo, make sure you are authenticated.\r\nInvalid username or password.\r\nNote: Creating a commit assumes that the repo already exists on the Huggingface Hub. Please use `create_repo` if it's not the case.\r\n```" ]
"2024-04-17T14:41:12"
"2024-04-25T14:41:13"
null
MEMBER
null
TODO:
- [x] Add docs
- [ ] Delete token arg from CLI example - See: #6839

Close #6810.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6820/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6820", "html_url": "https://github.com/huggingface/datasets/pull/6820", "diff_url": "https://github.com/huggingface/datasets/pull/6820.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6820.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6819/comments
https://api.github.com/repos/huggingface/datasets/issues/6819/events
https://github.com/huggingface/datasets/issues/6819
2,248,043,797
I_kwDODunzps6F_m0V
6,819
Give more details in `DataFilesNotFoundError` when getting the config names
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2024-04-17T11:19:47"
"2024-04-17T11:19:47"
null
CONTRIBUTOR
null
### Feature request
After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:
```
{
  "error": "Cannot get the config names for the dataset.",
  "cause_exception": "DataFilesNotFoundError",
  "cause_message": "No (supported) data files found in cis-lmu/Glot500",
  "cause_traceback": [
    "Traceback (most recent call last):\n",
    " File \"/src/services/worker/src/worker/job_runners/dataset/config_names.py\", line 73, in compute_config_names_response\n config_names = get_dataset_config_names(\n",
    " File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 347, in get_dataset_config_names\n dataset_module = dataset_module_factory(\n",
    " File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1873, in dataset_module_factory\n raise e1 from None\n",
    " File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1854, in dataset_module_factory\n return HubDatasetModuleFactoryWithoutScript(\n",
    " File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1245, in get_module\n module_name, default_builder_kwargs = infer_module_for_data_files(\n",
    " File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 595, in infer_module_for_data_files\n raise DataFilesNotFoundError(\"No (supported) data files found\" + (f\" in {path}\" if path else \"\"))\n",
    "datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in cis-lmu/Glot500\n"
  ]
}
```
because the deleted files were still listed in the README, see https://huggingface.co/datasets/cis-lmu/Glot500/discussions/4

Ideally, the error message would include the name of the first configuration with missing files, to help the user understand how to fix it. Here, it would say that the configuration `aze_Ethi` has no supported data files, instead of saying that the `cis-lmu/Glot500` *dataset* has no supported data files (which is not true).

### Motivation
Giving more detail in the error would help Datasets Hub users debug why the dataset viewer does not work.

### Your contribution
Not sure how best to fix this, as there are a lot of loops over the dataset configs in the traceback methods. "Maybe" it would be easier to handle if the code completely isolated each config.
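As a rough illustration of the requested behavior (the helper name and signature below are hypothetical, not the actual `datasets` internals), the per-config resolution step could attach the failing config name to the error:

```python
from datasets.exceptions import DataFilesNotFoundError

def infer_module_for_config(config_name: str, data_files: list, path: str):
    # Illustrative only: raise a config-specific error instead of a dataset-wide one.
    if not data_files:
        raise DataFilesNotFoundError(
            f"No (supported) data files found for config '{config_name}'"
            + (f" in {path}" if path else "")
        )
```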
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6819/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6817/comments
https://api.github.com/repos/huggingface/datasets/issues/6817/events
https://github.com/huggingface/datasets/pull/6817
2,246,578,480
PR_kwDODunzps5s1RAN
6,817
Support indexable objects in `Dataset.__getitem__`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6817). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005464 / 0.011353 (-0.005889) | 0.004174 / 0.011008 (-0.006834) | 0.064252 / 0.038508 (0.025744) | 0.033305 / 0.023109 (0.010196) | 0.245831 / 0.275898 (-0.030067) | 0.275575 / 0.323480 (-0.047905) | 0.003359 / 0.007986 (-0.004626) | 0.004196 / 0.004328 (-0.000132) | 0.049961 / 0.004250 (0.045710) | 0.048940 / 0.037052 (0.011888) | 0.261037 / 0.258489 (0.002548) | 0.295329 / 0.293841 (0.001488) | 0.028570 / 0.128546 (-0.099976) | 0.010747 / 0.075646 (-0.064900) | 0.216021 / 0.419271 (-0.203251) | 0.036885 / 0.043533 (-0.006648) | 0.251169 / 0.255139 (-0.003970) | 0.286233 / 0.283200 (0.003034) | 0.021253 / 0.141683 (-0.120429) | 1.150669 / 1.452155 (-0.301485) | 1.187577 / 1.492716 (-0.305140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094443 / 0.018006 (0.076436) | 0.304410 / 0.000490 (0.303920) | 0.000213 / 0.000200 (0.000013) | 0.000041 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019568 / 0.037411 (-0.017844) | 0.065734 / 0.014526 (0.051208) | 0.076042 / 0.176557 (-0.100515) | 0.123624 / 0.737135 (-0.613511) | 0.078047 / 0.296338 (-0.218291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295725 / 0.215209 (0.080515) | 2.752501 / 2.077655 (0.674846) | 1.461856 / 1.504120 (-0.042264) | 1.353692 / 1.541195 (-0.187503) | 1.391777 / 1.468490 (-0.076713) | 0.563423 / 4.584777 (-4.021354) | 2.384620 / 3.745712 (-1.361092) | 2.876092 / 5.269862 (-2.393769) | 1.803913 / 4.565676 (-2.761763) | 0.062678 / 0.424275 (-0.361597) | 0.005428 / 0.007607 (-0.002179) | 0.333797 / 0.226044 (0.107753) | 3.304458 / 2.268929 (1.035530) | 1.801768 / 55.444624 (-53.642856) | 1.569406 / 6.876477 (-5.307070) | 1.614535 / 2.142072 (-0.527538) | 0.650178 / 4.805227 (-4.155049) | 0.119693 / 6.500664 (-6.380971) | 0.042832 / 0.075469 (-0.032637) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982035 / 1.841788 (-0.859753) | 12.390006 / 8.074308 (4.315698) | 10.127018 / 10.191392 (-0.064374) | 0.131963 / 0.680424 (-0.548461) | 0.013926 / 0.534201 (-0.520275) | 0.289587 / 0.579283 (-0.289696) | 0.270302 / 0.434364 (-0.164062) | 0.327231 / 0.540337 (-0.213107) | 0.422522 / 1.386936 (-0.964414) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005666 / 0.011353 (-0.005687) | 0.003914 / 0.011008 (-0.007094) | 0.050315 / 0.038508 (0.011807) | 0.032367 / 0.023109 (0.009257) | 0.271732 / 0.275898 (-0.004166) | 0.297248 / 0.323480 (-0.026231) | 0.005101 / 0.007986 (-0.002884) | 0.002882 / 0.004328 (-0.001447) | 0.049651 / 0.004250 (0.045401) | 0.043773 / 0.037052 (0.006721) | 0.288011 / 0.258489 (0.029522) | 0.311863 / 0.293841 (0.018023) | 0.029147 / 0.128546 (-0.099399) | 0.010722 / 0.075646 (-0.064925) | 0.058832 / 0.419271 (-0.360440) | 0.033092 / 0.043533 (-0.010441) | 0.274686 / 0.255139 (0.019547) | 0.294174 / 0.283200 (0.010975) | 0.019196 / 0.141683 (-0.122486) | 1.126615 / 1.452155 (-0.325540) | 1.193107 / 1.492716 (-0.299609) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.097547 / 0.018006 (0.079541) | 0.316018 / 0.000490 (0.315529) | 0.000330 / 0.000200 (0.000130) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022336 / 0.037411 (-0.015076) | 0.077092 / 0.014526 (0.062566) | 0.088873 / 0.176557 (-0.087684) | 0.128517 / 0.737135 (-0.608619) | 0.094061 / 0.296338 (-0.202278) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300100 / 0.215209 (0.084891) | 2.893114 / 2.077655 (0.815460) | 1.570541 / 1.504120 (0.066421) | 1.453538 / 1.541195 (-0.087657) | 1.505325 / 1.468490 (0.036835) | 0.567955 / 4.584777 (-4.016822) | 2.458547 / 3.745712 (-1.287166) | 2.969181 / 5.269862 (-2.300680) | 1.850082 / 4.565676 (-2.715594) | 0.063811 / 0.424275 (-0.360464) | 0.005378 / 0.007607 (-0.002229) | 0.348219 / 0.226044 (0.122175) | 3.443986 / 2.268929 (1.175057) | 1.943005 / 55.444624 (-53.501620) | 1.686541 / 6.876477 (-5.189935) | 1.715552 / 2.142072 (-0.426520) | 0.641361 / 4.805227 (-4.163866) | 0.116652 / 6.500664 (-6.384012) | 0.042216 / 0.075469 (-0.033253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.020102 / 1.841788 (-0.821686) | 12.966127 / 8.074308 (4.891819) | 10.748397 / 10.191392 (0.557005) | 0.132601 / 0.680424 (-0.547823) | 0.016643 / 0.534201 (-0.517558) | 0.289422 / 0.579283 (-0.289861) | 0.275524 / 0.434364 (-0.158840) | 0.332835 / 0.540337 (-0.207503) | 0.427867 / 1.386936 (-0.959069) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5eb93f61f9f6e7fefba5d800defe21e50ddf8c58 \"CML watermark\")\n" ]
"2024-04-16T17:41:27"
"2024-04-16T18:27:44"
"2024-04-16T18:17:29"
CONTRIBUTOR
null
As discussed in https://github.com/huggingface/datasets/pull/6816, this is needed to support objects that implement `__index__` such as `np.int64` in `Dataset.__getitem__`.
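An illustrative example of the indexing this enables (a sketch; `ds` stands in for any loaded `Dataset`):

```python
import numpy as np

# Keys that implement __index__, such as NumPy integer scalars, now work directly.
idx = np.int64(0)
item = ds[idx]  # previously this required ds[int(idx)]
```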
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6817/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6817", "html_url": "https://github.com/huggingface/datasets/pull/6817", "diff_url": "https://github.com/huggingface/datasets/pull/6817.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6817.patch", "merged_at": "2024-04-16T18:17:29" }
true
https://api.github.com/repos/huggingface/datasets/issues/6816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6816/comments
https://api.github.com/repos/huggingface/datasets/issues/6816/events
https://github.com/huggingface/datasets/pull/6816
2,246,264,911
PR_kwDODunzps5s0MYO
6,816
Improve typing of Dataset.search, matching definition
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6816). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi! This is a breaking change. A better solution is to check for \"indexable\" types in `__getitem__` to support keys such as `np.int64`:\r\n```python\r\nimport operator\r\n\r\ndef _query_table_with_indices_mapping(...): # or _query_table\r\n ...\r\n try:\r\n operator.index(key)\r\n except TypeError:\r\n pass\r\n \r\n _raise_bad_key_type(key)\r\n```", "Sounds good! We should still update type annotations for SearchResult in my opinion." ]
"2024-04-16T14:53:39"
"2024-04-16T15:54:10"
"2024-04-16T15:54:10"
CONTRIBUTOR
null
Previously, the output of `score, indices = Dataset.search(...)` would be numpy arrays. The definition in `SearchResult` is a `List[int]`, so this PR now matches the expected type. The previous behavior is a bit annoying, as `Dataset.__getitem__` doesn't support `numpy.int64`, which forced me to convert `indices` to `int`, e.g.:
```python
score, indices = ds.search(...)
item = ds[int(indices[0])]
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6816/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6816", "html_url": "https://github.com/huggingface/datasets/pull/6816", "diff_url": "https://github.com/huggingface/datasets/pull/6816.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6816.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6815/comments
https://api.github.com/repos/huggingface/datasets/issues/6815/events
https://github.com/huggingface/datasets/pull/6815
2,246,197,070
PR_kwDODunzps5sz9eC
6,815
Remove `os.path.relpath` in `resolve_patterns`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6815). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005101 / 0.011353 (-0.006252) | 0.003478 / 0.011008 (-0.007531) | 0.063634 / 0.038508 (0.025126) | 0.030670 / 0.023109 (0.007561) | 0.240057 / 0.275898 (-0.035841) | 0.258726 / 0.323480 (-0.064754) | 0.004136 / 0.007986 (-0.003849) | 0.002667 / 0.004328 (-0.001662) | 0.048968 / 0.004250 (0.044718) | 0.043125 / 0.037052 (0.006073) | 0.249033 / 0.258489 (-0.009456) | 0.282630 / 0.293841 (-0.011211) | 0.027528 / 0.128546 (-0.101018) | 0.009987 / 0.075646 (-0.065660) | 0.210614 / 0.419271 (-0.208657) | 0.034965 / 0.043533 (-0.008567) | 0.239199 / 0.255139 (-0.015940) | 0.276891 / 0.283200 (-0.006309) | 0.017781 / 0.141683 (-0.123902) | 1.142795 / 1.452155 (-0.309360) | 1.184171 / 1.492716 (-0.308545) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092075 / 0.018006 (0.074068) | 0.300709 / 0.000490 (0.300220) | 0.000217 / 0.000200 (0.000017) | 0.000046 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017887 / 0.037411 (-0.019525) | 0.061134 / 0.014526 (0.046608) | 0.077075 / 0.176557 (-0.099482) | 0.118808 / 0.737135 (-0.618327) | 0.074961 / 0.296338 (-0.221377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280404 / 0.215209 (0.065194) | 2.759453 / 2.077655 (0.681798) | 1.437552 / 1.504120 (-0.066568) | 1.318703 / 1.541195 (-0.222492) | 1.313075 / 1.468490 (-0.155416) | 0.564876 / 4.584777 (-4.019901) | 2.381595 / 3.745712 (-1.364118) | 2.759171 / 5.269862 (-2.510691) | 1.725878 / 4.565676 (-2.839799) | 0.062627 / 0.424275 (-0.361648) | 0.005295 / 0.007607 (-0.002312) | 0.335245 / 0.226044 (0.109201) | 3.276266 / 2.268929 (1.007337) | 1.843272 / 55.444624 (-53.601353) | 1.519948 / 6.876477 (-5.356529) | 1.519626 / 2.142072 (-0.622447) | 0.637891 / 4.805227 (-4.167336) | 0.116260 / 6.500664 (-6.384404) | 0.041768 / 0.075469 (-0.033701) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981739 / 1.841788 (-0.860049) | 11.354768 / 8.074308 (3.280460) | 9.900585 / 10.191392 (-0.290807) | 0.130683 / 0.680424 (-0.549741) | 0.014122 / 0.534201 (-0.520079) | 0.297451 / 0.579283 (-0.281832) | 0.264786 / 0.434364 (-0.169577) | 0.337559 / 0.540337 (-0.202778) | 0.425131 / 1.386936 (-0.961805) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005182 / 0.011353 (-0.006171) | 0.003355 / 0.011008 (-0.007653) | 0.049842 / 0.038508 (0.011334) | 0.031094 / 0.023109 (0.007985) | 0.270080 / 0.275898 (-0.005818) | 0.291602 / 0.323480 (-0.031878) | 0.004210 / 0.007986 (-0.003776) | 0.002720 / 0.004328 (-0.001608) | 0.048986 / 0.004250 (0.044736) | 0.055187 / 0.037052 (0.018135) | 0.280085 / 0.258489 (0.021595) | 0.308148 / 0.293841 (0.014308) | 0.029300 / 0.128546 (-0.099246) | 0.009976 / 0.075646 (-0.065670) | 0.057930 / 0.419271 (-0.361341) | 0.032543 / 0.043533 (-0.010990) | 0.277485 / 0.255139 (0.022346) | 0.289345 / 0.283200 (0.006145) | 0.018070 / 0.141683 (-0.123613) | 1.140977 / 1.452155 (-0.311178) | 1.190543 / 1.492716 (-0.302173) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093416 / 0.018006 (0.075410) | 0.298732 / 0.000490 (0.298242) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022167 / 0.037411 (-0.015244) | 0.074970 / 0.014526 (0.060444) | 0.086047 / 0.176557 (-0.090509) | 0.125228 / 0.737135 (-0.611907) | 0.088330 / 0.296338 (-0.208008) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292016 / 0.215209 (0.076807) | 2.845712 / 2.077655 (0.768057) | 1.576951 / 1.504120 (0.072831) | 1.452298 / 1.541195 (-0.088897) | 1.456918 / 1.468490 (-0.011572) | 0.560529 / 4.584777 (-4.024248) | 2.425333 / 3.745712 (-1.320379) | 2.739416 / 5.269862 (-2.530445) | 1.715779 / 4.565676 (-2.849898) | 0.062568 / 0.424275 (-0.361707) | 0.005327 / 0.007607 (-0.002280) | 0.351376 / 0.226044 (0.125332) | 3.401855 / 2.268929 (1.132927) | 1.921844 / 55.444624 (-53.522780) | 1.648423 / 6.876477 (-5.228054) | 1.642003 / 2.142072 (-0.500069) | 0.640789 / 4.805227 (-4.164438) | 0.114699 / 6.500664 (-6.385965) | 0.040451 / 0.075469 (-0.035018) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004186 / 1.841788 (-0.837602) | 11.879918 / 8.074308 (3.805609) | 9.981852 / 10.191392 (-0.209540) | 0.141298 / 0.680424 (-0.539126) | 0.015005 / 0.534201 (-0.519196) | 0.291537 / 0.579283 (-0.287746) | 0.272093 / 0.434364 (-0.162271) | 0.331361 / 0.540337 (-0.208977) | 0.422940 / 1.386936 (-0.963996) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ed8860faef3e751f3b77c08e09ce723a74d2c2e5 \"CML watermark\")\n" ]
"2024-04-16T14:23:13"
"2024-04-16T16:06:48"
"2024-04-16T15:58:22"
CONTRIBUTOR
null
... to save a few seconds when resolving repos with many data files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6815/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6815", "html_url": "https://github.com/huggingface/datasets/pull/6815", "diff_url": "https://github.com/huggingface/datasets/pull/6815.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6815.patch", "merged_at": "2024-04-16T15:58:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/6814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6814/comments
https://api.github.com/repos/huggingface/datasets/issues/6814/events
https://github.com/huggingface/datasets/issues/6814
2,245,857,902
I_kwDODunzps6F3RJu
6,814
`map` with `num_proc` > 1 leads to OOM
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! You can try to reduce `writer_batch_size`. It corresponds to the number of samples that stay in RAM before being flushed to disk" ]
"2024-04-16T11:56:03"
"2024-04-19T11:53:41"
null
CONTRIBUTOR
null
### Describe the bug
When running `map` on a parquet dataset loaded from the local machine, the RAM usage increases linearly, eventually leading to OOM. I was wondering whether I should save the `cache_file` after every n steps in order to prevent this?

### Steps to reproduce the bug
```
ds = load_dataset("parquet", data_files=dataset_path, split="train")
ds = ds.shard(num_shards=4, index=0)
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
ds = ds.map(prepare_dataset, num_proc=32, writer_batch_size=1000, keep_in_memory=False, desc="preprocess dataset")
```
```
def prepare_dataset(batch):
    # load audio
    sample = batch["audio"]
    inputs = feature_extractor(sample["array"], sampling_rate=16000)
    batch["input_values"] = inputs.input_values[0]
    batch["input_length"] = len(sample["array"].squeeze())
    return batch
```

### Expected behavior
It shouldn't run into the OOM problem.

### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.17
- Python version: 3.8.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0
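As suggested in the comments, one mitigation is to lower `writer_batch_size` so fewer processed samples are kept in RAM before being flushed to the on-disk cache. A sketch of the adjusted call, continuing the snippet above (the value 100 is only illustrative, not a recommendation from the thread):

```python
ds = ds.map(
    prepare_dataset,
    num_proc=32,
    writer_batch_size=100,   # fewer samples held in RAM before flushing to the cache file
    keep_in_memory=False,
    desc="preprocess dataset",
)
```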
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6814/timeline
null
null
null
null
false