| Column | Type | Range / classes |
|---|---|---|
| url | stringlengths | 61 to 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75 to 75 |
| comments_url | stringlengths | 70 to 70 |
| events_url | stringlengths | 68 to 68 |
| html_url | stringlengths | 49 to 51 |
| id | int64 | 758M to 1.95B |
| node_id | stringlengths | 18 to 32 |
| number | int64 | 1.2k to 6.31k |
| title | stringlengths | 1 to 290 |
| user | dict | |
| labels | listlengths | 0 to 3 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0 to 4 |
| milestone | dict | |
| comments | sequencelengths | 0 to 30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | float64 | |
| draft | float64 | 0 to 1 |
| pull_request | dict | |
| body | stringlengths | 0 to 36.2k |
| reactions | dict | |
| timeline_url | stringlengths | 70 to 70 |
| performed_via_github_app | float64 | |
| state_reason | stringclasses | 3 values |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/3799
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3799/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3799/comments
https://api.github.com/repos/huggingface/datasets/issues/3799/events
https://github.com/huggingface/datasets/pull/3799
1,155,356,102
PR_kwDODunzps4zus9R
3,799
Xtreme-S Metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[ "@lhoestq - if you could take a final review here this would be great (if you have 5min :-) ) ", "Don't think the failures are related but not 100% sure", "Yes the CI fail is unrelated - you can ignore it" ]
"2022-03-01T13:42:28Z"
"2022-03-16T14:40:29Z"
"2022-03-16T14:40:26Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3799.diff", "html_url": "https://github.com/huggingface/datasets/pull/3799", "merged_at": "2022-03-16T14:40:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/3799.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3799" }
**Added datasets (TODO)**:
- [x] MLS
- [x] Covost2
- [x] Minds-14
- [x] Voxpopuli
- [x] FLoRes (need data)

**Metrics**: Done
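A minimal usage sketch for the metric this PR adds, assuming the (since-deprecated) `load_metric` API and the `mls` config name from the XTREME-S tasks; the predictions and references are placeholders:

```python
from datasets import load_metric

# Load the XTREME-S metric with one of its per-task configs (assumed: "mls",
# a speech-recognition task scored with WER/CER-style metrics).
xtreme_s_metric = load_metric("xtreme_s", "mls")

predictions = ["the cat sat on the mat"]  # placeholder model transcripts
references = ["the cat sat on the mat"]   # placeholder ground-truth transcripts
print(xtreme_s_metric.compute(predictions=predictions, references=references))
```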
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3799/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3799/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3343/comments
https://api.github.com/repos/huggingface/datasets/issues/3343/events
https://github.com/huggingface/datasets/pull/3343
1,067,505,507
PR_kwDODunzps4vM8yB
3,343
Better error message when download fails
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-11-30T17:38:50Z"
"2021-12-01T11:27:59Z"
"2021-12-01T11:27:58Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3343.diff", "html_url": "https://github.com/huggingface/datasets/pull/3343", "merged_at": "2021-12-01T11:27:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/3343.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3343" }
From our discussions in https://github.com/huggingface/datasets/issues/3269 and https://github.com/huggingface/datasets/issues/3282, it would be nice to have better messages if a download fails. In particular, the error now shows:
- the error from the HEAD request, if there is one
- otherwise, the response code of the HEAD request

I also added an error telling users to pass `use_auth_token` when the Hugging Face Hub returns 401 (Unauthorized). While playing around with this, I also fixed a minor issue with the `force_download` parameter, which was not always taken into account.
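A minimal sketch of the authenticated call the new 401 error points users toward; the dataset name is a placeholder for any private or gated repo:

```python
from datasets import load_dataset

# Hypothetical private/gated dataset: without a token the Hub answers
# 401 (Unauthorized), and the improved error suggests this parameter.
ds = load_dataset("my-org/my-private-dataset", use_auth_token=True)
```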
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3343/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3343/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4483/comments
https://api.github.com/repos/huggingface/datasets/issues/4483/events
https://github.com/huggingface/datasets/issues/4483
1,269,253,840
I_kwDODunzps5Lp0bQ
4,483
Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from list of empty lists
{ "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanderland", "id": 48946947, "login": "sanderland", "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "organizations_url": "https://api.github.com/users/sanderland/orgs", "received_events_url": "https://api.github.com/users/sanderland/received_events", "repos_url": "https://api.github.com/users/sanderland/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "type": "User", "url": "https://api.github.com/users/sanderland" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Hi @sanderland ! Thanks for reporting :) This is a bug, I opened a PR to fix it. We'll do a new release soon\r\n\r\nIn the meantime you can fix it by specifying in advance that the \"label\" are integers:\r\n```python\r\nimport numpy as np\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"text\": [\"the lazy dog jumps over the quick fox\", \"another sentence\"],\r\n \"label\": [[], []],\r\n }\r\n)\r\n# explicitly say that the \"label\" type is int64, even though it contains only null values\r\nds = ds.cast_column(\"label\", Sequence(Value(\"int64\")))\r\n\r\ndef mapper(features):\r\n features['label'] = [\r\n [0,0,0] for l in features['label']\r\n ]\r\n return features\r\n\r\nds_mapped = ds.map(mapper,batched=True)\r\n```" ]
"2022-06-13T10:47:52Z"
"2022-06-14T13:34:14Z"
"2022-06-14T13:34:14Z"
CONTRIBUTOR
null
null
null
## Describe the bug

`Dataset.map` throws `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null` when converting from a type of "empty lists" to "lists with some type". This appears to be due to the interaction of Arrow internals and some assumptions made by datasets. The bug appeared when binarizing some labels and then adding a dataset which had all these labels absent (to force the model to not label such empty strings with anything). The fact that this only happens in batched mode is particularly strange.

## Steps to reproduce the bug

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict(
    {
        "text": ["the lazy dog jumps over the quick fox", "another sentence"],
        "label": [[], []],
    }
)

def mapper(features):
    features["label"] = [
        [0, 0, 0] for l in features["label"]
    ]
    return features

ds_mapped = ds.map(mapper, batched=True)
```

## Expected results

Not crashing.

## Actual results

```
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2346: in map
    return self._map_single(
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:532: in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:499: in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/fingerprint.py:458: in wrapper
    out = func(self, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2751: in _map_single
    writer.write_batch(batch)
../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:503: in write_batch
    arrays.append(pa.array(typed_sequence))
pyarrow/array.pxi:230: in pyarrow.lib.array
    ???
pyarrow/array.pxi:110: in pyarrow.lib._handle_arrow_array_protocol
    ???
../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:198: in __arrow_array__
    out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper
    return func(array, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/table.py:1812: in cast_array_to_feature
    casted_values = _c(array.values, feature.feature)
../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper
    return func(array, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/table.py:1843: in cast_array_to_feature
    return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper
    return func(array, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/table.py:1752: in array_cast
    return array.cast(pa_type)
pyarrow/array.pxi:915: in pyarrow.lib.Array.cast
    ???
../.venv/lib/python3.8/site-packages/pyarrow/compute.py:376: in cast
    return call_function("cast", [arr], options)
pyarrow/_compute.pyx:542: in pyarrow._compute.call_function
    ???
pyarrow/_compute.pyx:341: in pyarrow._compute.Function.call
    ???
pyarrow/error.pxi:144: in pyarrow.lib.pyarrow_internal_check_status
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>   ???
E   pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null

pyarrow/error.pxi:121: ArrowNotImplementedError
```

## Workarounds

* Not using `batched=True`
* Using an `np.array([], dtype=float)` or similar instead of `[]` in the input
* Naming the output column differently from the input column

## Environment info

- `datasets` version: 2.2.2
- Platform: Ubuntu
- Python version: 3.8
- PyArrow version: 8.0.0
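A minimal sketch of the second workaround listed above: seeding the empty column with typed empty arrays so Arrow infers `list<double>` instead of `list<null>` (the `cast_column` fix quoted in the comment above achieves the same thing more explicitly):

```python
import numpy as np
from datasets import Dataset

# Typed empty arrays keep Arrow from inferring a null value type.
ds = Dataset.from_dict(
    {
        "text": ["the lazy dog jumps over the quick fox", "another sentence"],
        "label": [np.array([], dtype=float), np.array([], dtype=float)],
    }
)

def mapper(features):
    features["label"] = [[0, 0, 0] for _ in features["label"]]
    return features

ds_mapped = ds.map(mapper, batched=True)  # no ArrowNotImplementedError
```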
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4483/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4483/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3005/comments
https://api.github.com/repos/huggingface/datasets/issues/3005/events
https://github.com/huggingface/datasets/issues/3005
1,014,615,420
I_kwDODunzps48ec18
3,005
DatasetDict.filter and Dataset.filter crashes with any "fn_kwargs" argument
{ "avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4", "events_url": "https://api.github.com/users/DrMatters/events{/privacy}", "followers_url": "https://api.github.com/users/DrMatters/followers", "following_url": "https://api.github.com/users/DrMatters/following{/other_user}", "gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DrMatters", "id": 22641583, "login": "DrMatters", "node_id": "MDQ6VXNlcjIyNjQxNTgz", "organizations_url": "https://api.github.com/users/DrMatters/orgs", "received_events_url": "https://api.github.com/users/DrMatters/received_events", "repos_url": "https://api.github.com/users/DrMatters/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions", "type": "User", "url": "https://api.github.com/users/DrMatters" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @DrMatters, thanks for reporting.\r\n\r\nThis issue was fixed 14 days ago: #2950.\r\n\r\nCurrently, the fix is only in the master branch and will be made available in our next library release.\r\n\r\nIn the meantime, you can incorporate the fix by installing datasets from the master branch:\r\n```shell\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```shell\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```", "Thanks, sorry for bothering" ]
"2021-10-04T00:49:29Z"
"2021-10-11T10:18:01Z"
"2021-10-04T08:46:13Z"
NONE
null
null
null
## Describe the bug The ".filter" method of DatasetDict or Dataset objects fails when passing any "fn_kwargs" argument ## Steps to reproduce the bug ```python import datasets example_dataset = datasets.Dataset.from_dict({"a": {1, 2, 3, 4}}) def filter_value(example, value): return example['a'] == value filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3}) ``` ## Expected results `filtered` is a dataset containing {"a": {3}} ## Actual results > Traceback (most recent call last): > File "C:\Users\qsemi\Documents\git\nlp_experiments\gpt_celebrity\src\test_faulty_filter.py", line 8, in <module> > filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3}) > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper > out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper > out = func(self, *args, **kwargs) > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2169, in filter > indices = self.map( > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1686, in map > return self._map_single( > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper > out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper > out = func(self, *args, **kwargs) > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2048, in _map_single > batch = apply_function_on_filtered_inputs( > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1939, in apply_function_on_filtered_inputs > function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) > TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'value' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.9.7 - PyArrow version: 5.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3005/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3005/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4473/comments
https://api.github.com/repos/huggingface/datasets/issues/4473/events
https://github.com/huggingface/datasets/pull/4473
1,267,555,994
PR_kwDODunzps45d5-R
4,473
Add SST-2 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "on the hub this dataset is referenced as `sst-2` not `sst2` – is there a canonical orthography? If not, could we name it `sst-2`?", "@julien-c, we normally do not use hyphens for dataset names: whenever the original dataset name contains a hyphen, we usually:\r\n- either suppress it: CoNLL-2000 (`conll2000`), CORD-19 (`cord19`)\r\n- or replace it with underscore: CC-News (`cc_news`), SQuAD-es (`squad_es`)\r\n\r\nThere are some exceptions though... (I wonder why)\r\n\r\nI think, the reason is there was a 1-to-1 relation with the corresponding Python module name.\r\n\r\nI personally find confusing not having a rule and using both hyphens and underscores indistinctly: you never know which is the right orthography.\r\n\r\nWhichever the decision we make, I would prefer to be applied consistently.\r\n\r\nAlso note that we already implemented this dataset as part of GLUE: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py#L163\r\n- dataset name: `glue`\r\n- config name: `sst2`\r\n\r\nOn the other hand, let's see how other libraries name it:\r\n- torchtext: `SST2` https://pytorch.org/text/stable/datasets.html#sst2\r\n- OpenAI CLIP: `rendered-sst2` https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md\r\n- Kaggle: `SST2` https://www.kaggle.com/datasets/atulanandjha/stanford-sentiment-treebank-v2-sst2/version/22\r\n- TensorFlow Datasets: `glue/sst2` https://www.tensorflow.org/datasets/catalog/glue#gluesst2", "Ok, another option is to open PRs against the models in https://huggingface.co/models?datasets=sst-2 to change their dataset reference to `sst2`\r\n\r\n(BTW some models refer to `sst2` already – but they're less popular: https://huggingface.co/models?datasets=sst2)", "OK, I'm taking care of the subsequent PRs on models to align with this dataset name." ]
"2022-06-10T13:37:26Z"
"2022-06-13T14:11:34Z"
"2022-06-13T14:01:09Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4473.diff", "html_url": "https://github.com/huggingface/datasets/pull/4473", "merged_at": "2022-06-13T14:01:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/4473.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4473" }
Add the SST-2 dataset. Currently it is part of the GLUE benchmark; this PR adds it as a standalone dataset. CC: @julien-c
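A short sketch of the two ways to load this data after the PR, standalone versus through the GLUE config discussed in the naming thread above:

```python
from datasets import load_dataset

# Standalone dataset added by this PR.
sst2 = load_dataset("sst2")

# The same data has long been available as a GLUE config.
glue_sst2 = load_dataset("glue", "sst2")
```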
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4473/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4473/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3726/comments
https://api.github.com/repos/huggingface/datasets/issues/3726/events
https://github.com/huggingface/datasets/pull/3726
1,138,870,362
PR_kwDODunzps4y3iSv
3,726
Use config pandas version in CSV dataset builder
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2022-02-15T15:47:49Z"
"2022-02-15T16:55:45Z"
"2022-02-15T16:55:44Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3726.diff", "html_url": "https://github.com/huggingface/datasets/pull/3726", "merged_at": "2022-02-15T16:55:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/3726.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3726" }
Fix #3724.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3726/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3726/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6214/comments
https://api.github.com/repos/huggingface/datasets/issues/6214/events
https://github.com/huggingface/datasets/issues/6214
1,881,736,469
I_kwDODunzps5wKQUV
6,214
Unpin fsspec < 2023.9.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[]
"2023-09-05T11:02:58Z"
"2023-09-26T15:32:52Z"
"2023-09-26T15:32:52Z"
MEMBER
null
null
null
Once the root issue is fixed, remove the temporary pin of fsspec < 2023.9.0 introduced by:
- #6210

Related to issue:
- #6209

After investigation, I think the root issue is related to the new glob behavior with the double asterisk `**` introduced in:
- https://github.com/fsspec/filesystem_spec/pull/1329
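A sketch of what such a temporary pin looks like in a `setup.py` dependency list; the package extra and the absence of a lower bound here are illustrative, not the literal line from #6210:

```python
# setup.py (excerpt) -- illustrative only; see #6210 for the actual change.
install_requires = [
    # Temporary upper bound until the fsspec "**" glob regression is resolved;
    # this line is what the issue asks to remove once the root cause is fixed.
    "fsspec[http]<2023.9.0",
    # ...
]
```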
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6214/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6214/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1288/comments
https://api.github.com/repos/huggingface/datasets/issues/1288/events
https://github.com/huggingface/datasets/pull/1288
759,309,457
MDExOlB1bGxSZXF1ZXN0NTM0MzM2Mzgz
1,288
Add CodeSearchNet corpus dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
[]
closed
false
null
[]
null
[ "@lhoestq ready for a second review" ]
"2020-12-08T10:07:50Z"
"2020-12-09T17:05:28Z"
"2020-12-09T17:05:28Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1288.diff", "html_url": "https://github.com/huggingface/datasets/pull/1288", "merged_at": "2020-12-09T17:05:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/1288.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1288" }
This PR adds the CodeSearchNet corpus proxy dataset for semantic code search: https://github.com/github/CodeSearchNet

I have had a few issues, mentioned below. I would appreciate some help on how to solve them.

## Issues generating dataset card

Is there something wrong with my declaration of the dataset features?

```
features=datasets.Features(
    {
        "repository_name": datasets.Value("string"),
        "func_path_in_repository": datasets.Value("string"),
        "func_name": datasets.Value("string"),
        "whole_func_string": datasets.Value("string"),
        "language": datasets.Value("string"),
        "func_code_string": datasets.Value("string"),
        "func_code_tokens": datasets.Sequence(datasets.Value("string")),
        "func_documentation_string": datasets.Value("string"),
        "func_documentation_tokens": datasets.Sequence(datasets.Value("string")),
        "split_name": datasets.Value("string"),
        "func_code_url": datasets.Value("string"),
        # TODO - add licensing info in the examples
    }
),
```

When running the streamlit app for tagging the dataset on my machine, I get the following error:

![image](https://user-images.githubusercontent.com/33657802/101469132-9ed12c80-3944-11eb-94ff-2d9c1d0ea080.png)

## Issues with dummy data

Due to the unusual structure of the data, I have been unable to generate dummy data automatically. I tried to generate it manually, but pytest fails when using the manually generated dummy data. It works fine when using the real data.

```
============================== test session starts ==============================
platform linux -- Python 3.7.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
plugins: xdist-2.1.0, forked-1.3.0
collected 1 item

tests/test_dataset_common.py F                                            [100%]

=================================== FAILURES ====================================
________ LocalDatasetTest.test_load_dataset_all_configs_code_search_net ________

self = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_code_search_net>, dataset_name = 'code_search_net'

    @slow
    def test_load_dataset_all_configs(self, dataset_name):
        configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)
>       self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)

tests/test_dataset_common.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

tests/test_dataset_common.py:198: in check_load_dataset
    self.parent.assertTrue(len(dataset[split]) > 0)
E   AssertionError: False is not true
---------------------------- Captured stdout call ----------------------------
Downloading and preparing dataset code_search_net/all (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to /tmp/tmppx78sj24/code_search_net/all/1.0.0...
Dataset code_search_net downloaded and prepared to /tmp/tmppx78sj24/code_search_net/all/1.0.0. Subsequent calls will reuse this data.
---------------------------- Captured stderr call ----------------------------
... (irrelevant info - Deprecation warnings)
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_code_search_net - AssertionError: False is not true
======================== 1 failed, 4 warnings in 3.00s =========================
```

## Note: Data structure in S3

The data is stored on S3 and organized by programming language, in the following repository structure:

```
.
├── <language_name>          # e.g. python
│   └── final
│       └── jsonl
│           ├── test
│           │   └── <language_name>_test_0.jsonl.gz
│           ├── train
│           │   ├── <language_name>_train_0.jsonl.gz
│           │   ├── <language_name>_train_1.jsonl.gz
│           │   ├── ...
│           │   └── <language_name>_train_n.jsonl.gz
│           └── valid
│               └── <language_name>_valid_0.jsonl.gz
├── <language_name>_dedupe_definitions_v2.pkl
└── <language_name>_licenses.pkl
```
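A short usage sketch for the dataset this PR adds; the config names are assumed to be the per-language subsets reflected in the S3 layout above, plus `all`:

```python
from datasets import load_dataset

# Load one language subset; "all" combines every language.
csn_python = load_dataset("code_search_net", "python")
print(csn_python["train"][0]["func_name"])
```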
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1288/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1288/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6257/comments
https://api.github.com/repos/huggingface/datasets/issues/6257/events
https://github.com/huggingface/datasets/issues/6257
1,910,741,044
I_kwDODunzps5x45g0
6,257
HfHubHTTPError - exceeded our hourly quotas for action: commit
{ "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuvalkirstain", "id": 57996478, "login": "yuvalkirstain", "node_id": "MDQ6VXNlcjU3OTk2NDc4", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "type": "User", "url": "https://api.github.com/users/yuvalkirstain" }
[]
closed
false
null
[]
null
[ "how is your dataset structured? (file types, how many commits and files are you trying to push, etc)", "I succeeded in uploading it after several attempts with an hour gap between each attempt (inconvenient but worked). The final dataset is [here](https://huggingface.co/datasets/yuvalkirstain/pickapic_v2), code and context to the dataset can be found [here](https://github.com/yuvalkirstain/PickScore/).\r\nI can close the issue if this behavior is intended, as most users probably do not need to upload large-scale datasets.", "We could fix this by creating a single commit for all the (Parquet) shards in `push_to_hub` instead of one commit per shard, as we currently do. \r\n\r\n@Wauplin Any updates on the 2-step commit process suggested by you that we need to implement this?", "> Any updates on the 2-step commit process suggested by you that we need to implement this?\r\n\r\nRe-prioritizing this, sorry. Will let you know but probably can be done this week." ]
"2023-09-25T06:11:43Z"
"2023-10-16T13:30:49Z"
"2023-10-16T13:30:48Z"
NONE
null
null
null
### Describe the bug

I try to upload a very large dataset of images, and get the following error:

```
File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/hf_api.py:2712, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads, parent_commit, run_as_future)
   2710 try:
   2711     commit_resp = get_session().post(url=commit_url, headers=headers, data=data, params=params)
-> 2712     hf_raise_for_status(commit_resp, endpoint_name="commit")
   2713 except RepositoryNotFoundError as e:
   2714     e.append_to_message(_CREATE_COMMIT_NO_REPO_ERROR_MESSAGE)

File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py:301, in hf_raise_for_status(response, endpoint_name)
    297     raise BadRequestError(message, response=response) from e
    299 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
    300 # as well (request id and/or server error message)
--> 301 raise HfHubHTTPError(str(e), response=response) from e

HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/yuvalkirstain/pickapic_v2/commit/main (Request ID: Root=1-65112399-12d63f7d7f28bfa40a36a0fd)

You have exceeded our hourly quotas for action: commit. We invite you to retry later.
```

This makes it much less convenient to host large datasets on the HF hub.

### Steps to reproduce the bug

Upload a very large dataset of images.

### Expected behavior

The upload works well.

### Environment info

- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 1.5.3
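A minimal retry sketch automating the hour-gap workaround the reporter describes in the comments above; the backoff policy and attempt count are assumptions, not a library feature:

```python
import time

from datasets import Dataset
from huggingface_hub.utils import HfHubHTTPError


def push_with_retry(ds: Dataset, repo_id: str, max_attempts: int = 5) -> None:
    """Push a dataset, sleeping through 429 'hourly quota' errors."""
    for attempt in range(max_attempts):
        try:
            ds.push_to_hub(repo_id)
            return
        except HfHubHTTPError as err:
            if err.response is not None and err.response.status_code == 429:
                # Commit quota exceeded: wait roughly an hour and retry,
                # mirroring the manual workaround described above.
                time.sleep(60 * 60)
            else:
                raise
    raise RuntimeError(f"Gave up pushing to {repo_id} after {max_attempts} attempts")
```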
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6257/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6257/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5523
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5523/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5523/comments
https://api.github.com/repos/huggingface/datasets/issues/5523/events
https://github.com/huggingface/datasets/issues/5523
1,580,193,015
I_kwDODunzps5eL9T3
5,523
Checking that split name is correct happens only after the data is downloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" } ]
null
[]
"2023-02-10T19:13:03Z"
"2023-02-10T19:14:50Z"
null
CONTRIBUTOR
null
null
null
### Describe the bug

Verification of split names (= indexing data by split) happens after downloading the data. So when the split name is incorrect, users learn about that only after the data is fully downloaded; for large datasets this might take a lot of time.

### Steps to reproduce the bug

Load any dataset with a random split name, for example:

```python
from datasets import load_dataset

load_dataset("mozilla-foundation/common_voice_11_0", "en", split="blabla")
```

and the download will start smoothly, even though there is no split named "blabla".

### Expected behavior

Raise an error when the split name is incorrect.

### Environment info

`datasets==2.9.1.dev0`
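A sketch of a client-side guard that fails fast under the current behavior, checking the requested split against the dataset's metadata via `get_dataset_split_names` before any data is downloaded:

```python
from datasets import get_dataset_split_names, load_dataset

requested_split = "blabla"
splits = get_dataset_split_names("mozilla-foundation/common_voice_11_0", "en")

# Fail before the (potentially huge) download starts, instead of after it.
if requested_split not in splits:
    raise ValueError(f"Unknown split {requested_split!r}; available: {splits}")

ds = load_dataset("mozilla-foundation/common_voice_11_0", "en", split=requested_split)
```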
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5523/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5523/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5512
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5512/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5512/comments
https://api.github.com/repos/huggingface/datasets/issues/5512/events
https://github.com/huggingface/datasets/pull/5512
1,576,142,432
PR_kwDODunzps5JhtQy
5,512
Speed up batched PyTorch DataLoader
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008882 / 0.011353 (-0.002471) | 0.004562 / 0.011008 (-0.006446) | 0.100035 / 0.038508 (0.061527) | 0.030654 / 0.023109 (0.007545) | 0.298745 / 0.275898 (0.022847) | 0.356869 / 0.323480 (0.033389) | 0.007170 / 0.007986 (-0.000815) | 0.003471 / 0.004328 (-0.000858) | 0.077975 / 0.004250 (0.073725) | 0.037861 / 0.037052 (0.000809) | 0.311643 / 0.258489 (0.053154) | 0.343504 / 0.293841 (0.049663) | 0.033768 / 0.128546 (-0.094778) | 0.011342 / 0.075646 (-0.064304) | 0.323953 / 0.419271 (-0.095319) | 0.040818 / 0.043533 (-0.002715) | 0.298492 / 0.255139 (0.043353) | 0.327292 / 0.283200 (0.044092) | 0.088423 / 0.141683 (-0.053260) | 1.489520 / 1.452155 (0.037366) | 1.532962 / 1.492716 (0.040245) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223654 / 0.018006 (0.205647) | 0.415134 / 0.000490 (0.414644) | 0.007394 / 0.000200 (0.007194) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023616 / 0.037411 (-0.013795) | 0.096652 / 0.014526 (0.082126) | 0.105239 / 0.176557 (-0.071318) | 0.148637 / 0.737135 (-0.588498) | 0.107937 / 0.296338 (-0.188402) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426816 / 0.215209 (0.211607) | 4.241533 / 2.077655 (2.163878) | 
1.946493 / 1.504120 (0.442373) | 1.735765 / 1.541195 (0.194570) | 1.781424 / 1.468490 (0.312934) | 0.688082 / 4.584777 (-3.896694) | 3.396444 / 3.745712 (-0.349268) | 1.920333 / 5.269862 (-3.349528) | 1.293833 / 4.565676 (-3.271843) | 0.081967 / 0.424275 (-0.342308) | 0.012911 / 0.007607 (0.005304) | 0.536928 / 0.226044 (0.310884) | 5.452327 / 2.268929 (3.183399) | 2.505785 / 55.444624 (-52.938840) | 2.173627 / 6.876477 (-4.702850) | 2.119978 / 2.142072 (-0.022095) | 0.809012 / 4.805227 (-3.996215) | 0.149124 / 6.500664 (-6.351540) | 0.066008 / 0.075469 (-0.009461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215702 / 1.841788 (-0.626085) | 13.757525 / 8.074308 (5.683217) | 13.999208 / 10.191392 (3.807816) | 0.164875 / 0.680424 (-0.515549) | 0.028517 / 0.534201 (-0.505684) | 0.394829 / 0.579283 (-0.184454) | 0.404962 / 0.434364 (-0.029401) | 0.484455 / 0.540337 (-0.055882) | 0.575008 / 1.386936 (-0.811928) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006754 / 0.011353 (-0.004598) | 0.004579 / 0.011008 (-0.006430) | 0.076617 / 0.038508 (0.038109) | 0.027902 / 0.023109 (0.004793) | 0.346278 / 0.275898 (0.070380) | 0.398060 / 0.323480 (0.074580) | 0.004938 / 0.007986 (-0.003047) | 0.004681 / 0.004328 (0.000353) | 0.076336 / 0.004250 (0.072086) | 0.038018 / 0.037052 (0.000966) | 0.358701 / 0.258489 (0.100212) | 0.408413 / 0.293841 (0.114572) | 0.031772 / 0.128546 (-0.096774) | 0.011604 / 0.075646 (-0.064042) | 0.085964 / 0.419271 (-0.333308) | 0.042030 / 0.043533 (-0.001502) | 0.343568 / 0.255139 (0.088429) | 0.381805 / 0.283200 (0.098605) | 0.090759 / 0.141683 (-0.050924) | 1.504553 / 1.452155 (0.052398) | 1.594006 / 1.492716 (0.101289) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227395 / 0.018006 (0.209389) | 0.403097 / 0.000490 (0.402608) | 0.000413 / 0.000200 (0.000213) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024693 / 0.037411 (-0.012718) | 0.100470 / 0.014526 (0.085944) | 0.108481 / 0.176557 (-0.068076) | 0.142791 / 0.737135 (-0.594345) | 0.109949 / 0.296338 (-0.186389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443674 / 0.215209 (0.228465) | 4.412207 / 2.077655 (2.334553) | 2.073752 / 1.504120 (0.569632) | 1.863153 / 1.541195 (0.321958) | 1.940063 / 1.468490 (0.471573) | 0.696456 / 4.584777 (-3.888321) | 3.422120 / 3.745712 (-0.323592) | 1.902579 / 5.269862 (-3.367282) | 1.184948 / 4.565676 (-3.380729) | 0.083079 / 0.424275 (-0.341196) | 0.012649 / 0.007607 (0.005042) | 0.542035 / 0.226044 (0.315991) | 5.421826 / 2.268929 (3.152897) | 2.525092 / 55.444624 (-52.919532) | 2.177144 / 6.876477 (-4.699332) | 2.225224 / 2.142072 (0.083151) | 0.804739 / 4.805227 (-4.000488) | 0.151000 / 6.500664 (-6.349664) | 0.066987 / 0.075469 (-0.008482) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277199 / 1.841788 (-0.564589) | 14.184146 / 8.074308 (6.109838) | 13.413348 / 10.191392 (3.221956) | 0.128551 / 0.680424 (-0.551872) | 0.016461 / 0.534201 (-0.517740) | 0.379963 / 0.579283 (-0.199320) | 0.381350 / 0.434364 (-0.053014) | 0.439044 / 0.540337 (-0.101293) | 0.521559 / 1.386936 (-0.865377) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4f3c152c1c35df250d2fbeb25d5823a65714f2d8 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008876 / 0.011353 (-0.002477) | 0.004629 / 0.011008 (-0.006379) | 0.101697 / 0.038508 (0.063189) | 0.030373 / 0.023109 (0.007264) | 0.302206 / 0.275898 (0.026308) | 0.365835 / 0.323480 (0.042355) | 0.007877 / 0.007986 (-0.000109) | 0.004473 / 0.004328 (0.000144) | 0.077334 / 0.004250 (0.073084) | 0.038066 / 0.037052 (0.001014) | 0.308064 / 0.258489 (0.049575) | 0.347329 / 0.293841 (0.053488) | 0.034478 / 0.128546 (-0.094068) | 0.011651 / 0.075646 (-0.063995) | 0.323481 / 0.419271 (-0.095791) | 0.043515 / 0.043533 (-0.000018) | 0.299885 / 0.255139 (0.044746) | 0.328959 / 0.283200 (0.045760) | 0.095308 / 0.141683 (-0.046375) | 1.474058 / 1.452155 (0.021903) | 1.535335 / 1.492716 (0.042619) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197416 / 0.018006 (0.179410) | 0.421935 / 0.000490 (0.421446) | 0.003490 / 0.000200 (0.003290) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024519 / 0.037411 (-0.012892) | 0.100710 / 0.014526 (0.086185) | 0.104520 / 0.176557 (-0.072036) | 0.142048 / 0.737135 (-0.595087) | 0.109274 / 0.296338 (-0.187064) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408766 / 0.215209 (0.193557) | 4.101720 / 2.077655 (2.024065) | 1.812375 / 1.504120 (0.308256) | 1.605819 / 1.541195 (0.064624) | 1.688923 / 1.468490 (0.220433) | 0.691198 / 4.584777 (-3.893579) | 3.422137 / 3.745712 (-0.323575) | 1.921318 / 5.269862 (-3.348544) | 1.168770 / 4.565676 (-3.396906) | 0.082840 / 0.424275 (-0.341435) | 0.012740 / 0.007607 (0.005133) | 0.524333 / 0.226044 (0.298289) | 5.258077 / 2.268929 (2.989149) | 2.273177 / 55.444624 (-53.171447) | 1.931919 / 6.876477 (-4.944558) | 1.988415 / 2.142072 (-0.153658) | 0.812227 / 4.805227 (-3.993000) | 0.150043 / 6.500664 (-6.350622) | 0.066422 / 0.075469 (-0.009047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188069 / 1.841788 (-0.653718) | 13.942681 / 8.074308 (5.868373) | 14.104658 / 10.191392 (3.913266) | 0.151966 / 0.680424 (-0.528458) | 0.028833 / 0.534201 (-0.505368) | 0.395125 / 0.579283 (-0.184158) | 0.408512 / 0.434364 (-0.025852) | 0.487587 / 0.540337 
(-0.052751) | 0.570023 / 1.386936 (-0.816913) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006860 / 0.011353 (-0.004493) | 0.004582 / 0.011008 (-0.006426) | 0.079902 / 0.038508 (0.041394) | 0.027565 / 0.023109 (0.004456) | 0.341393 / 0.275898 (0.065495) | 0.378911 / 0.323480 (0.055431) | 0.005847 / 0.007986 (-0.002138) | 0.004681 / 0.004328 (0.000353) | 0.079422 / 0.004250 (0.075171) | 0.039135 / 0.037052 (0.002083) | 0.342026 / 0.258489 (0.083537) | 0.387510 / 0.293841 (0.093669) | 0.031999 / 0.128546 (-0.096547) | 0.011782 / 0.075646 (-0.063865) | 0.088563 / 0.419271 (-0.330709) | 0.042435 / 0.043533 (-0.001098) | 0.343055 / 0.255139 (0.087916) | 0.367437 / 0.283200 (0.084237) | 0.091578 / 0.141683 (-0.050104) | 1.506828 / 1.452155 (0.054673) | 1.599590 / 1.492716 (0.106874) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217939 / 0.018006 (0.199932) | 0.408352 / 0.000490 (0.407863) | 0.000394 / 0.000200 (0.000194) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026344 / 0.037411 (-0.011067) | 0.102968 / 0.014526 (0.088442) | 0.110340 / 0.176557 (-0.066217) | 0.145696 / 0.737135 (-0.591439) | 0.111632 / 0.296338 (-0.184707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440764 / 0.215209 (0.225555) | 4.423179 / 2.077655 (2.345524) | 2.057016 / 1.504120 (0.552896) | 1.848741 / 1.541195 (0.307546) | 1.939827 
/ 1.468490 (0.471337) | 0.699370 / 4.584777 (-3.885407) | 3.472521 / 3.745712 (-0.273191) | 3.232557 / 5.269862 (-2.037305) | 1.755534 / 4.565676 (-2.810143) | 0.083469 / 0.424275 (-0.340807) | 0.012980 / 0.007607 (0.005373) | 0.557662 / 0.226044 (0.331618) | 5.435657 / 2.268929 (3.166729) | 2.545106 / 55.444624 (-52.899519) | 2.168047 / 6.876477 (-4.708430) | 2.234070 / 2.142072 (0.091997) | 0.804662 / 4.805227 (-4.000565) | 0.152832 / 6.500664 (-6.347833) | 0.069372 / 0.075469 (-0.006097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.299189 / 1.841788 (-0.542598) | 14.752880 / 8.074308 (6.678572) | 13.607676 / 10.191392 (3.416284) | 0.150773 / 0.680424 (-0.529650) | 0.016701 / 0.534201 (-0.517500) | 0.379507 / 0.579283 (-0.199776) | 0.389401 / 0.434364 (-0.044963) | 0.444199 / 0.540337 (-0.096139) | 0.524264 / 1.386936 (-0.862672) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#12be850b36c0b9d4841af86c75e08c0a726ffb5c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008694 / 0.011353 (-0.002659) | 0.004549 / 0.011008 (-0.006459) | 0.101164 / 0.038508 (0.062656) | 0.029644 / 0.023109 (0.006535) | 0.294849 / 0.275898 (0.018950) | 0.366755 / 0.323480 (0.043275) | 0.007205 / 0.007986 (-0.000780) | 0.004255 / 0.004328 (-0.000074) | 0.077433 / 0.004250 (0.073183) | 0.038024 / 0.037052 (0.000972) | 0.310380 / 0.258489 (0.051891) | 0.347093 / 0.293841 (0.053252) | 0.033232 / 0.128546 (-0.095314) | 0.011404 / 0.075646 (-0.064242) | 0.323341 / 0.419271 (-0.095930) | 0.040586 / 0.043533 (-0.002946) | 0.296083 / 0.255139 (0.040944) | 0.321870 / 0.283200 (0.038671) | 0.087377 / 0.141683 (-0.054306) | 1.466869 / 1.452155 (0.014715) | 1.514763 / 1.492716 (0.022046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010272 / 0.018006 (-0.007734) | 0.414645 / 0.000490 (0.414155) | 0.003730 / 
0.000200 (0.003530) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024093 / 0.037411 (-0.013318) | 0.098718 / 0.014526 (0.084192) | 0.105526 / 0.176557 (-0.071030) | 0.141578 / 0.737135 (-0.595557) | 0.109679 / 0.296338 (-0.186660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412907 / 0.215209 (0.197698) | 4.134934 / 2.077655 (2.057280) | 1.881180 / 1.504120 (0.377060) | 1.693207 / 1.541195 (0.152012) | 1.753725 / 1.468490 (0.285235) | 0.693077 / 4.584777 (-3.891700) | 3.367409 / 3.745712 (-0.378303) | 2.749035 / 5.269862 (-2.520827) | 1.565015 / 4.565676 (-3.000662) | 0.082609 / 0.424275 (-0.341666) | 0.012500 / 0.007607 (0.004892) | 0.523619 / 0.226044 (0.297575) | 5.250188 / 2.268929 (2.981259) | 2.314255 / 55.444624 (-53.130369) | 1.962357 / 6.876477 (-4.914120) | 2.020632 / 2.142072 (-0.121441) | 0.812504 / 4.805227 (-3.992724) | 0.149921 / 6.500664 (-6.350743) | 0.065816 / 0.075469 (-0.009653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.230811 / 1.841788 (-0.610977) | 14.008566 / 8.074308 (5.934258) | 14.371285 / 10.191392 (4.179893) | 0.166323 / 0.680424 (-0.514101) | 0.029702 / 0.534201 (-0.504499) | 0.408629 / 0.579283 (-0.170654) | 0.410529 / 0.434364 (-0.023835) | 0.484482 / 0.540337 (-0.055855) | 0.572360 / 1.386936 (-0.814576) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006873 / 0.011353 (-0.004480) | 0.004609 / 0.011008 (-0.006400) | 0.075492 / 0.038508 (0.036984) | 0.028560 / 0.023109 (0.005450) | 0.340321 / 0.275898 (0.064423) | 0.376758 / 0.323480 (0.053278) | 0.005271 / 0.007986 (-0.002715) | 0.004786 / 0.004328 (0.000457) | 0.074843 / 0.004250 (0.070592) | 0.041072 / 0.037052 (0.004019) | 0.339952 / 0.258489 (0.081463) | 0.384375 / 0.293841 (0.090534) | 0.031771 / 0.128546 (-0.096775) | 0.011607 / 0.075646 (-0.064039) | 0.084338 / 0.419271 (-0.334933) | 0.042251 / 0.043533 (-0.001282) | 0.338904 / 0.255139 (0.083765) | 0.365360 / 0.283200 (0.082160) | 0.093151 / 0.141683 (-0.048532) | 1.449833 / 1.452155 (-0.002322) | 1.601946 / 1.492716 (0.109229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225149 / 0.018006 (0.207142) | 0.409855 / 0.000490 (0.409365) | 0.000384 / 0.000200 (0.000184) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025914 / 0.037411 (-0.011497) | 0.100443 / 0.014526 (0.085917) | 0.108557 / 0.176557 (-0.067999) | 0.150338 / 0.737135 (-0.586798) | 0.111472 / 0.296338 (-0.184866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440221 / 0.215209 (0.225012) | 4.409268 / 2.077655 (2.331613) | 2.096008 / 1.504120 (0.591888) | 1.849443 / 1.541195 (0.308248) | 1.934901 / 1.468490 (0.466410) | 0.704072 / 4.584777 (-3.880705) | 3.371370 / 3.745712 (-0.374343) | 3.185478 / 5.269862 (-2.084384) | 1.514541 / 4.565676 (-3.051135) | 0.083724 / 0.424275 (-0.340551) | 0.012674 / 0.007607 (0.005067) | 0.542155 / 0.226044 (0.316111) | 5.413456 / 2.268929 (3.144528) | 2.508567 / 55.444624 (-52.936057) | 2.163235 / 6.876477 (-4.713242) | 2.193914 / 2.142072 (0.051842) | 0.810955 / 4.805227 (-3.994272) | 0.152769 / 6.500664 (-6.347895) | 0.068009 / 0.075469 (-0.007460) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272511 / 1.841788 (-0.569276) | 14.334861 / 8.074308 (6.260553) | 13.555445 / 10.191392 (3.364053) | 0.160520 / 0.680424 (-0.519904) | 0.018363 / 0.534201 (-0.515838) | 0.384937 / 0.579283 (-0.194346) | 0.409138 / 0.434364 (-0.025225) | 0.484037 / 0.540337 (-0.056300) | 0.565595 / 1.386936 (-0.821341) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#23f076ef0187a4009d3c62b14a02e146baf0e35f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010077 / 0.011353 (-0.001276) | 0.005650 / 0.011008 (-0.005359) | 0.101285 / 0.038508 (0.062777) | 0.039571 / 0.023109 (0.016462) | 0.291855 / 0.275898 (0.015957) | 0.363582 / 0.323480 (0.040102) | 0.008513 / 0.007986 (0.000527) | 0.004472 / 0.004328 (0.000144) | 0.077314 / 0.004250 (0.073064) | 0.050707 / 0.037052 (0.013654) | 0.317282 / 0.258489 (0.058792) | 0.342348 / 0.293841 (0.048507) | 0.042951 / 0.128546 (-0.085595) | 0.012295 / 0.075646 (-0.063351) | 0.337269 / 0.419271 (-0.082003) | 0.048953 / 0.043533 (0.005420) | 0.292547 / 0.255139 (0.037408) | 0.325436 / 0.283200 (0.042236) | 0.111859 / 0.141683 (-0.029824) | 1.501958 / 1.452155 (0.049804) | 1.522281 / 1.492716 (0.029565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011775 / 0.018006 (-0.006231) | 0.513283 / 0.000490 (0.512793) | 0.002941 / 0.000200 (0.002741) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028702 / 0.037411 (-0.008710) | 0.108465 / 0.014526 (0.093940) | 0.121806 / 0.176557 (-0.054750) | 0.158424 / 0.737135 (-0.578712) | 0.128077 / 0.296338 (-0.168262) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395392 / 0.215209 
(0.180183) | 3.944138 / 2.077655 (1.866483) | 1.773698 / 1.504120 (0.269578) | 1.588907 / 1.541195 (0.047712) | 1.697794 / 1.468490 (0.229304) | 0.690281 / 4.584777 (-3.894496) | 3.819661 / 3.745712 (0.073948) | 3.228006 / 5.269862 (-2.041856) | 1.755625 / 4.565676 (-2.810052) | 0.083169 / 0.424275 (-0.341106) | 0.012337 / 0.007607 (0.004730) | 0.504730 / 0.226044 (0.278686) | 5.016916 / 2.268929 (2.747988) | 2.245484 / 55.444624 (-53.199141) | 1.911682 / 6.876477 (-4.964795) | 1.957659 / 2.142072 (-0.184413) | 0.818361 / 4.805227 (-3.986866) | 0.162386 / 6.500664 (-6.338279) | 0.062461 / 0.075469 (-0.013008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197654 / 1.841788 (-0.644134) | 15.465611 / 8.074308 (7.391303) | 14.409126 / 10.191392 (4.217734) | 0.171776 / 0.680424 (-0.508647) | 0.028749 / 0.534201 (-0.505452) | 0.439666 / 0.579283 (-0.139618) | 0.445159 / 0.434364 (0.010795) | 0.543992 / 0.540337 (0.003655) | 0.643911 / 1.386936 (-0.743025) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007036 / 0.011353 (-0.004317) | 0.005273 / 0.011008 (-0.005735) | 0.075314 / 0.038508 (0.036806) | 0.033075 / 0.023109 (0.009966) | 0.350133 / 0.275898 (0.074235) | 0.399366 / 0.323480 (0.075886) | 0.005945 / 0.007986 (-0.002041) | 0.004276 / 0.004328 (-0.000052) | 0.074975 / 0.004250 (0.070725) | 0.051758 / 0.037052 (0.014706) | 0.355077 / 0.258489 (0.096588) | 0.430296 / 0.293841 (0.136455) | 0.036257 / 0.128546 (-0.092290) | 0.012376 / 0.075646 (-0.063270) | 0.087441 / 0.419271 (-0.331830) | 0.049066 / 0.043533 (0.005534) | 0.339867 / 0.255139 (0.084728) | 0.384379 / 0.283200 (0.101179) | 0.104843 / 0.141683 (-0.036840) | 1.498897 / 1.452155 (0.046742) | 1.551400 / 1.492716 (0.058684) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.334504 / 0.018006 (0.316498) | 0.516551 / 0.000490 (0.516061) | 0.000450 / 0.000200 (0.000250) | 0.000057 / 0.000054 
(0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029313 / 0.037411 (-0.008099) | 0.110667 / 0.014526 (0.096141) | 0.124001 / 0.176557 (-0.052556) | 0.159154 / 0.737135 (-0.577981) | 0.129503 / 0.296338 (-0.166836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416749 / 0.215209 (0.201540) | 4.171163 / 2.077655 (2.093508) | 1.981071 / 1.504120 (0.476951) | 1.788303 / 1.541195 (0.247108) | 1.912118 / 1.468490 (0.443628) | 0.708764 / 4.584777 (-3.876013) | 3.815222 / 3.745712 (0.069510) | 2.121633 / 5.269862 (-3.148229) | 1.347866 / 4.565676 (-3.217811) | 0.086340 / 0.424275 (-0.337935) | 0.012646 / 0.007607 (0.005039) | 0.525286 / 0.226044 (0.299241) | 5.254922 / 2.268929 (2.985994) | 2.488743 / 55.444624 (-52.955881) | 2.128069 / 6.876477 (-4.748408) | 2.180358 / 2.142072 (0.038286) | 0.841011 / 4.805227 (-3.964216) | 0.168732 / 6.500664 (-6.331932) | 0.065559 / 0.075469 (-0.009910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270518 / 1.841788 (-0.571270) | 15.557563 / 8.074308 (7.483255) | 13.660757 / 10.191392 (3.469365) | 0.185636 / 0.680424 (-0.494788) | 0.018152 / 0.534201 (-0.516049) | 0.423553 / 0.579283 (-0.155730) | 0.412718 / 0.434364 (-0.021646) | 0.528455 / 0.540337 (-0.011882) | 0.635274 / 1.386936 (-0.751662) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d40f05ef827c52344a2c6e83f7c8d13bb6b660d3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011194 / 0.011353 (-0.000159) | 0.006344 / 0.011008 (-0.004664) | 0.122013 / 0.038508 (0.083505) | 0.044323 / 0.023109 (0.021214) | 0.356665 / 0.275898 (0.080767) | 0.439871 / 0.323480 (0.116391) | 0.010694 / 0.007986 (0.002709) | 0.004648 / 0.004328 (0.000320) | 0.091140 / 0.004250 (0.086890) | 0.052457 / 0.037052 (0.015404) | 0.369282 / 0.258489 (0.110793) | 0.403279 / 0.293841 (0.109438) | 0.054075 / 0.128546 (-0.074472) | 0.014484 / 0.075646 (-0.061162) | 0.407932 / 0.419271 (-0.011340) | 0.060681 / 0.043533 (0.017148) | 0.350889 / 0.255139 (0.095750) | 0.392041 / 0.283200 (0.108841) | 0.121252 / 0.141683 (-0.020431) | 1.809527 / 1.452155 (0.357373) | 1.835141 / 1.492716 (0.342425) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227372 / 0.018006 (0.209366) | 0.481908 / 0.000490 (0.481418) | 0.007262 / 0.000200 (0.007062) | 0.000148 / 0.000054 (0.000093) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031039 / 0.037411 (-0.006372) | 0.133947 / 0.014526 (0.119421) | 0.141935 / 0.176557 (-0.034622) | 0.197854 / 0.737135 (-0.539281) | 0.152393 / 0.296338 (-0.143945) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.517400 / 0.215209 (0.302191) | 4.899972 / 2.077655 (2.822317) | 2.171023 / 1.504120 (0.666903) | 2.008706 / 1.541195 (0.467511) | 1.988777 / 1.468490 (0.520287) | 0.859872 / 4.584777 (-3.724905) | 4.673923 / 3.745712 (0.928211) | 2.703189 / 5.269862 (-2.566672) | 1.891680 / 4.565676 (-2.673997) | 0.109601 / 0.424275 (-0.314674) | 0.014622 / 0.007607 (0.007015) | 0.618990 / 0.226044 (0.392946) | 6.255608 / 2.268929 (3.986679) | 2.822199 / 55.444624 (-52.622425) | 2.457684 / 6.876477 (-4.418793) | 2.500041 / 2.142072 (0.357968) | 1.054529 / 4.805227 (-3.750698) | 0.209501 / 6.500664 (-6.291163) | 0.074929 / 0.075469 (-0.000540) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.532780 / 1.841788 (-0.309008) | 19.159455 / 8.074308 (11.085147) | 17.817063 / 10.191392 (7.625671) | 0.194078 / 0.680424 (-0.486346) | 0.038211 / 0.534201 (-0.495990) | 0.537366 / 0.579283 (-0.041917) | 0.538995 / 0.434364 (0.104631) | 
0.679431 / 0.540337 (0.139094) | 0.801960 / 1.386936 (-0.584976) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008729 / 0.011353 (-0.002624) | 0.005711 / 0.011008 (-0.005297) | 0.091570 / 0.038508 (0.053062) | 0.039805 / 0.023109 (0.016696) | 0.413507 / 0.275898 (0.137609) | 0.456342 / 0.323480 (0.132862) | 0.006201 / 0.007986 (-0.001785) | 0.009700 / 0.004328 (0.005372) | 0.089146 / 0.004250 (0.084896) | 0.057543 / 0.037052 (0.020490) | 0.420806 / 0.258489 (0.162317) | 0.471962 / 0.293841 (0.178121) | 0.043940 / 0.128546 (-0.084606) | 0.014457 / 0.075646 (-0.061190) | 0.106674 / 0.419271 (-0.312598) | 0.058930 / 0.043533 (0.015397) | 0.419111 / 0.255139 (0.163972) | 0.452974 / 0.283200 (0.169774) | 0.124573 / 0.141683 (-0.017110) | 1.864753 / 1.452155 (0.412599) | 1.935387 / 1.492716 (0.442670) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275657 / 0.018006 (0.257651) | 0.498096 / 0.000490 (0.497606) | 0.000480 / 0.000200 (0.000280) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034377 / 0.037411 (-0.003035) | 0.138050 / 0.014526 (0.123524) | 0.153718 / 0.176557 (-0.022838) | 0.201445 / 0.737135 (-0.535690) | 0.160346 / 0.296338 (-0.135992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.540670 / 0.215209 (0.325461) | 5.376291 / 2.077655 (3.298636) | 2.581799 / 1.504120 (1.077679) | 2.328858 / 1.541195 
(0.787663) | 2.446458 / 1.468490 (0.977968) | 0.923005 / 4.584777 (-3.661772) | 4.815977 / 3.745712 (1.070265) | 4.205725 / 5.269862 (-1.064137) | 2.400466 / 4.565676 (-2.165211) | 0.107207 / 0.424275 (-0.317068) | 0.015427 / 0.007607 (0.007819) | 0.657267 / 0.226044 (0.431222) | 6.491256 / 2.268929 (4.222327) | 3.179099 / 55.444624 (-52.265525) | 2.722434 / 6.876477 (-4.154042) | 2.788202 / 2.142072 (0.646129) | 1.060016 / 4.805227 (-3.745211) | 0.206899 / 6.500664 (-6.293766) | 0.077868 / 0.075469 (0.002399) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567894 / 1.841788 (-0.273893) | 19.314330 / 8.074308 (11.240022) | 17.597614 / 10.191392 (7.406222) | 0.195777 / 0.680424 (-0.484647) | 0.022160 / 0.534201 (-0.512041) | 0.530592 / 0.579283 (-0.048691) | 0.508591 / 0.434364 (0.074227) | 0.619794 / 0.540337 (0.079457) | 0.749773 / 1.386936 (-0.637163) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8637141a67639c510294620306c9bb25d31d34ef \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012431 / 0.011353 (0.001078) | 0.006526 / 0.011008 (-0.004482) | 0.132266 / 0.038508 (0.093757) | 0.043199 / 0.023109 (0.020089) | 0.405230 / 0.275898 (0.129332) | 0.494643 / 0.323480 (0.171163) | 0.009927 / 0.007986 (0.001941) | 0.005227 / 0.004328 (0.000899) | 0.110914 / 0.004250 (0.106664) | 0.047815 / 0.037052 (0.010763) | 0.419099 / 0.258489 (0.160610) | 0.463405 / 0.293841 (0.169564) | 0.057858 / 0.128546 (-0.070688) | 0.018918 / 0.075646 (-0.056728) | 0.450584 / 0.419271 (0.031313) | 0.060457 / 0.043533 (0.016924) | 0.408234 / 0.255139 (0.153095) | 0.433722 / 0.283200 (0.150523) | 0.119403 / 0.141683 (-0.022280) | 1.966742 / 1.452155 (0.514587) | 1.980685 / 1.492716 (0.487969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292853 / 0.018006 (0.274847) | 0.619697 / 0.000490 (0.619207) | 
0.002135 / 0.000200 (0.001935) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031283 / 0.037411 (-0.006129) | 0.128649 / 0.014526 (0.114123) | 0.150116 / 0.176557 (-0.026441) | 0.187605 / 0.737135 (-0.549530) | 0.153334 / 0.296338 (-0.143005) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659660 / 0.215209 (0.444451) | 6.459749 / 2.077655 (4.382094) | 2.764566 / 1.504120 (1.260446) | 2.362630 / 1.541195 (0.821435) | 2.426421 / 1.468490 (0.957931) | 1.282407 / 4.584777 (-3.302370) | 5.668865 / 3.745712 (1.923153) | 3.236255 / 5.269862 (-2.033606) | 2.248836 / 4.565676 (-2.316841) | 0.145861 / 0.424275 (-0.278414) | 0.015707 / 0.007607 (0.008100) | 0.805218 / 0.226044 (0.579174) | 8.146831 / 2.268929 (5.877903) | 3.506283 / 55.444624 (-51.938341) | 2.736682 / 6.876477 (-4.139795) | 2.959039 / 2.142072 (0.816967) | 1.528428 / 4.805227 (-3.276799) | 0.270980 / 6.500664 (-6.229684) | 0.086824 / 0.075469 (0.011355) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.682506 / 1.841788 (-0.159282) | 18.844103 / 8.074308 (10.769795) | 21.008471 / 10.191392 (10.817079) | 0.258372 / 0.680424 (-0.422052) | 0.046505 / 0.534201 (-0.487696) | 0.574760 / 0.579283 (-0.004523) | 0.663745 / 0.434364 (0.229381) | 0.702411 / 0.540337 (0.162074) | 0.824024 / 1.386936 (-0.562912) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010016 / 0.011353 (-0.001337) | 0.007459 / 0.011008 (-0.003549) | 0.103954 / 0.038508 (0.065446) | 0.036363 / 0.023109 (0.013254) | 0.464079 / 0.275898 (0.188181) | 0.504730 / 0.323480 (0.181250) | 0.007865 / 0.007986 (-0.000121) | 0.005210 / 0.004328 (0.000882) | 0.105018 / 0.004250 (0.100767) | 0.062191 / 0.037052 (0.025139) | 0.483304 / 0.258489 (0.224815) | 0.547030 / 0.293841 (0.253189) | 0.055436 / 0.128546 (-0.073110) | 0.021073 / 0.075646 (-0.054573) | 0.120952 / 0.419271 (-0.298319) | 0.075593 / 0.043533 (0.032060) | 0.459930 / 0.255139 (0.204791) | 0.486924 / 0.283200 (0.203724) | 0.129465 / 0.141683 (-0.012218) | 1.902322 / 1.452155 (0.450167) | 1.980809 / 1.492716 (0.488092) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259263 / 0.018006 (0.241257) | 0.596703 / 0.000490 (0.596213) | 0.004520 / 0.000200 (0.004320) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032802 / 0.037411 (-0.004609) | 0.138751 / 0.014526 (0.124225) | 0.147106 / 0.176557 (-0.029451) | 0.194791 / 0.737135 (-0.542345) | 0.152643 / 0.296338 (-0.143696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678455 / 0.215209 (0.463246) | 6.673643 / 2.077655 (4.595989) | 2.943368 / 1.504120 (1.439248) | 2.591223 / 1.541195 (1.050029) | 2.741097 / 1.468490 (1.272607) | 1.261178 / 4.584777 (-3.323599) | 5.773853 / 3.745712 (2.028141) | 3.171559 / 5.269862 (-2.098303) | 2.124898 / 4.565676 (-2.440779) | 0.161849 / 0.424275 (-0.262426) | 0.015498 / 0.007607 (0.007891) | 0.857984 / 0.226044 (0.631940) | 8.456946 / 2.268929 (6.188018) | 3.818787 / 55.444624 (-51.625837) | 3.009953 / 6.876477 (-3.866523) | 3.113006 / 2.142072 (0.970934) | 1.477299 / 4.805227 (-3.327929) | 0.267207 / 6.500664 (-6.233457) | 0.087590 / 0.075469 (0.012121) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.757389 / 1.841788 (-0.084398) | 19.287690 / 8.074308 (11.213381) | 21.601991 / 10.191392 (11.410599) | 0.260464 / 0.680424 (-0.419960) | 0.028552 / 0.534201 (-0.505649) | 0.558934 / 0.579283 (-0.020349) | 0.673651 / 0.434364 (0.239287) | 0.714448 / 0.540337 (0.174111) | 0.857608 / 1.386936 (-0.529328) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d3bd0134de444ffd10c4a39873dbf9aa3732c08 \"CML watermark\")\n", "Ready for review @mariosasko, LMKWYT :)\r\n\r\nSorry it tooks me a few tries to fix the CI - I ended up not trying to use the latest `torch` version in the CI.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009474 / 0.011353 (-0.001878) | 0.005507 / 0.011008 (-0.005501) | 0.101219 / 0.038508 (0.062711) | 0.035591 / 0.023109 (0.012481) | 0.305841 / 0.275898 (0.029943) | 0.339135 / 0.323480 (0.015656) | 0.007920 / 0.007986 (-0.000066) | 0.004252 / 0.004328 (-0.000077) | 0.076912 / 0.004250 (0.072662) | 0.041923 / 0.037052 (0.004871) | 0.301405 / 0.258489 (0.042916) | 0.356488 / 0.293841 (0.062647) | 0.039342 / 0.128546 (-0.089204) | 0.012711 / 0.075646 (-0.062935) | 0.334193 / 0.419271 (-0.085079) | 0.049112 / 0.043533 (0.005579) | 0.301484 / 0.255139 (0.046345) | 0.315306 / 0.283200 (0.032106) | 0.102959 / 0.141683 (-0.038724) | 1.420677 / 1.452155 (-0.031478) | 1.549493 / 1.492716 (0.056777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284639 / 0.018006 (0.266633) | 0.501226 / 0.000490 (0.500736) | 0.004328 / 0.000200 (0.004128) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027034 / 0.037411 (-0.010377) | 0.108066 / 0.014526 (0.093540) | 0.122106 / 0.176557 (-0.054451) | 0.162908 / 0.737135 (-0.574227) | 0.127233 / 0.296338 (-0.169105) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled 
read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394023 / 0.215209 (0.178813) | 3.932729 / 2.077655 (1.855075) | 1.771195 / 1.504120 (0.267075) | 1.582788 / 1.541195 (0.041594) | 1.703219 / 1.468490 (0.234728) | 0.702629 / 4.584777 (-3.882148) | 3.780187 / 3.745712 (0.034475) | 2.180433 / 5.269862 (-3.089428) | 1.504806 / 4.565676 (-3.060871) | 0.085289 / 0.424275 (-0.338986) | 0.012580 / 0.007607 (0.004973) | 0.515408 / 0.226044 (0.289363) | 5.010613 / 2.268929 (2.741685) | 2.256648 / 55.444624 (-53.187976) | 1.914971 / 6.876477 (-4.961505) | 2.038436 / 2.142072 (-0.103636) | 0.846240 / 4.805227 (-3.958987) | 0.164920 / 6.500664 (-6.335744) | 0.063899 / 0.075469 (-0.011570) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224160 / 1.841788 (-0.617627) | 15.089995 / 8.074308 (7.015687) | 14.777003 / 10.191392 (4.585611) | 0.169873 / 0.680424 (-0.510551) | 0.029233 / 0.534201 (-0.504968) | 0.445424 / 0.579283 (-0.133859) | 0.439194 / 0.434364 (0.004830) | 0.536370 / 0.540337 (-0.003968) | 0.636694 / 1.386936 (-0.750242) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008230 / 0.011353 (-0.003122) | 0.005499 / 0.011008 (-0.005509) | 0.076108 / 0.038508 (0.037600) | 0.037444 / 0.023109 (0.014335) | 0.364420 / 0.275898 (0.088522) | 0.412308 / 0.323480 (0.088828) | 0.006704 / 0.007986 (-0.001282) | 0.004359 / 0.004328 (0.000031) | 0.075080 / 0.004250 (0.070830) | 0.057698 / 0.037052 (0.020646) | 0.366088 / 0.258489 (0.107599) | 0.409583 / 0.293841 (0.115742) | 0.037882 / 0.128546 (-0.090664) | 0.012421 / 0.075646 (-0.063225) | 0.087701 / 0.419271 (-0.331571) | 0.050669 / 0.043533 (0.007136) | 0.351139 / 0.255139 (0.096000) | 0.384340 / 0.283200 (0.101140) | 0.108097 / 0.141683 (-0.033586) | 1.445010 / 1.452155 (-0.007145) | 1.559570 / 1.492716 (0.066853) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.324114 / 0.018006 (0.306108) | 0.549134 / 0.000490 (0.548644) | 0.003544 / 0.000200 (0.003344) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030646 / 0.037411 (-0.006765) | 0.108573 / 0.014526 (0.094047) | 0.125291 / 0.176557 (-0.051266) | 0.174798 / 0.737135 (-0.562338) | 0.128000 / 0.296338 (-0.168338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428881 / 0.215209 (0.213672) | 4.282320 / 2.077655 (2.204665) | 2.061462 / 1.504120 (0.557342) | 1.858477 / 1.541195 (0.317283) | 1.971646 / 1.468490 (0.503156) | 0.723631 / 4.584777 (-3.861146) | 3.822376 / 3.745712 (0.076664) | 2.174427 / 5.269862 (-3.095434) | 1.386066 / 4.565676 (-3.179611) | 0.088391 / 0.424275 (-0.335884) | 0.012948 / 0.007607 (0.005341) | 0.524423 / 0.226044 (0.298378) | 5.249389 / 2.268929 (2.980460) | 2.528662 / 55.444624 (-52.915962) | 2.245329 / 6.876477 (-4.631147) | 2.402733 / 2.142072 (0.260660) | 0.868864 / 4.805227 (-3.936364) | 0.174066 / 6.500664 (-6.326598) | 0.066165 / 0.075469 (-0.009304) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296922 / 1.841788 (-0.544865) | 15.814109 / 8.074308 (7.739801) | 14.086059 / 10.191392 (3.894667) | 0.190952 / 0.680424 (-0.489472) | 0.017679 / 0.534201 (-0.516522) | 0.428872 / 0.579283 (-0.150411) | 0.435399 / 0.434364 (0.001035) | 0.540856 / 0.540337 (0.000519) | 0.648904 / 1.386936 (-0.738032) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f401758c5019ede4404994d5d59220125984874d \"CML watermark\")\n" ]
"2023-02-08T13:38:59Z"
"2023-02-19T18:35:09Z"
"2023-02-19T18:27:29Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5512.diff", "html_url": "https://github.com/huggingface/datasets/pull/5512", "merged_at": "2023-02-19T18:27:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/5512.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5512" }
I implemented `__getitems__` to speed up batched data loading in PyTorch. Closes https://github.com/huggingface/datasets/issues/5505
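For readers unfamiliar with the hook, here is a minimal sketch — not the actual `datasets` implementation — of how a map-style dataset can expose `__getitems__` so that a PyTorch `DataLoader` fetches a whole batch in one call; recent PyTorch fetchers use this method when it is present and fall back to per-index `__getitem__` otherwise. `BatchFriendlyDataset` is a hypothetical stand-in class.

```python
# Sketch: a map-style dataset exposing __getitems__ for batched fetching.
from torch.utils.data import DataLoader, Dataset as TorchDataset


class BatchFriendlyDataset(TorchDataset):
    def __init__(self, rows):
        self.rows = rows  # e.g. a list of dicts backing the dataset

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        return self.rows[idx]  # per-sample fallback for older fetchers

    def __getitems__(self, indices):
        # One call per batch: recent PyTorch fetchers use this hook when
        # present, avoiding the per-index call overhead.
        return [self.rows[i] for i in indices]


loader = DataLoader(BatchFriendlyDataset([{"x": i} for i in range(8)]), batch_size=4)
for batch in loader:
    print(batch)  # e.g. {'x': tensor([0, 1, 2, 3])}
```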
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5512/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5512/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2861/comments
https://api.github.com/repos/huggingface/datasets/issues/2861/events
https://github.com/huggingface/datasets/pull/2861
985,081,871
MDExOlB1bGxSZXF1ZXN0NzI0NDM2OTcw
2,861
fix: 🐛 be more specific when catching exceptions
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
[ "To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?\r\n\r\n", "Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, which will raise a stopit.TimeoutException exception. But currently, if this exception is raised, it's caught and considered as a \"FileNotFoundError\" while it should not be caught. ", "And what about passing the `timeout` parameter instead?", "It might be a good idea, but I would have to add a timeout argument to several methods, I'm not sure we want that (I want to ensure all my queries in https://github.com/huggingface/datasets-preview-backend/tree/master/src/datasets_preview_backend/queries resolve in a given time, be it with an error in case of timeout, or with the successful response). The methods are `prepare_module`, `import_main_class`, *builder_cls.*`get_all_exported_dataset_infos`, `load_dataset_builder`, and `load_dataset`", "I understand, you are trying to find a fix for your use case. OK.\r\n\r\nJust note that it is also an issue for `datasets` users. Once #2859 fixed in `datasets`, you will no longer have this issue...", "Closing, since 1. my problem is more #2859, and I was asking for that change in order to make a hack work on my side, 2. if we want to change how exceptions are handled, we surely want to do it on all the codebase, not only in this particular case." ]
"2021-09-01T12:18:12Z"
"2021-09-02T09:53:36Z"
"2021-09-02T09:52:03Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2861.diff", "html_url": "https://github.com/huggingface/datasets/pull/2861", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2861.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2861" }
The same specific exception is caught in other parts of the same function.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2861/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2861/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6062
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6062/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6062/comments
https://api.github.com/repos/huggingface/datasets/issues/6062/events
https://github.com/huggingface/datasets/pull/6062
1,818,341,584
PR_kwDODunzps5WOj62
6,062
Improve `Dataset.from_list` docstring
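For context, a quick usage sketch of the method whose docstring this PR improves: `Dataset.from_list` builds a dataset from a list of row dicts, one dict per example.

```python
# Sketch: building a Dataset from a list of row dicts.
from datasets import Dataset

ds = Dataset.from_list([{"text": "hello", "label": 0}, {"text": "world", "label": 1}])
print(ds)     # Dataset({features: ['text', 'label'], num_rows: 2})
print(ds[0])  # {'text': 'hello', 'label': 0}
```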
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008340 / 0.011353 (-0.003013) | 0.005053 / 0.011008 (-0.005955) | 0.103294 / 0.038508 (0.064786) | 0.069417 / 0.023109 (0.046308) | 0.436922 / 0.275898 (0.161024) | 0.461348 / 0.323480 (0.137868) | 0.006030 / 0.007986 (-0.001955) | 0.003727 / 0.004328 (-0.000601) | 0.076384 / 0.004250 (0.072134) | 0.056742 / 0.037052 (0.019689) | 0.439996 / 0.258489 (0.181507) | 0.469417 / 0.293841 (0.175577) | 0.044343 / 0.128546 (-0.084203) | 0.012634 / 0.075646 (-0.063013) | 0.359746 / 0.419271 (-0.059525) | 0.064842 / 0.043533 (0.021309) | 0.425960 / 0.255139 (0.170821) | 0.458568 / 0.283200 (0.175368) | 0.039802 / 0.141683 (-0.101881) | 1.687320 / 1.452155 (0.235165) | 1.806212 / 1.492716 (0.313496) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255484 / 0.018006 (0.237478) | 0.563039 / 0.000490 (0.562549) | 0.000445 / 0.000200 (0.000245) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027511 / 0.037411 (-0.009900) | 0.089185 / 0.014526 (0.074659) | 0.098397 / 0.176557 (-0.078160) | 0.163897 / 0.737135 (-0.573238) | 0.099905 / 0.296338 (-0.196434) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.612737 / 0.215209 (0.397528) | 6.209948 / 2.077655 (4.132294) | 
2.756060 / 1.504120 (1.251940) | 2.402115 / 1.541195 (0.860920) | 2.422665 / 1.468490 (0.954175) | 0.834799 / 4.584777 (-3.749977) | 5.251699 / 3.745712 (1.505986) | 5.554141 / 5.269862 (0.284280) | 3.254699 / 4.565676 (-1.310977) | 0.095697 / 0.424275 (-0.328578) | 0.009406 / 0.007607 (0.001799) | 0.729025 / 0.226044 (0.502980) | 7.195521 / 2.268929 (4.926593) | 3.360264 / 55.444624 (-52.084361) | 2.696764 / 6.876477 (-4.179713) | 2.702796 / 2.142072 (0.560724) | 0.974420 / 4.805227 (-3.830808) | 0.195215 / 6.500664 (-6.305450) | 0.069754 / 0.075469 (-0.005715) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.553458 / 1.841788 (-0.288330) | 21.972436 / 8.074308 (13.898128) | 20.027392 / 10.191392 (9.836000) | 0.216950 / 0.680424 (-0.463474) | 0.032196 / 0.534201 (-0.502005) | 0.449884 / 0.579283 (-0.129399) | 0.586213 / 0.434364 (0.151849) | 0.537227 / 0.540337 (-0.003111) | 0.751022 / 1.386936 (-0.635914) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007859 / 0.011353 (-0.003493) | 0.004762 / 0.011008 (-0.006246) | 0.086023 / 0.038508 (0.047515) | 0.069218 / 0.023109 (0.046109) | 0.449312 / 0.275898 (0.173414) | 0.481687 / 0.323480 (0.158207) | 0.006318 / 0.007986 (-0.001668) | 0.004063 / 0.004328 (-0.000266) | 0.076917 / 0.004250 (0.072667) | 0.058034 / 0.037052 (0.020981) | 0.474265 / 0.258489 (0.215775) | 0.497736 / 0.293841 (0.203895) | 0.044587 / 0.128546 (-0.083959) | 0.013880 / 0.075646 (-0.061766) | 0.089233 / 0.419271 (-0.330038) | 0.058760 / 0.043533 (0.015227) | 0.439515 / 0.255139 (0.184376) | 0.473246 / 0.283200 (0.190047) | 0.042968 / 0.141683 (-0.098715) | 1.802647 / 1.452155 (0.350493) | 1.778563 / 1.492716 (0.285847) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.343741 / 0.018006 (0.325735) | 0.567409 / 0.000490 (0.566919) | 0.029727 / 0.000200 (0.029527) | 0.000147 / 0.000054 (0.000092) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031021 / 0.037411 (-0.006390) | 0.096659 / 0.014526 (0.082133) | 0.103341 / 0.176557 (-0.073215) | 0.169893 / 0.737135 (-0.567242) | 0.103280 / 0.296338 (-0.193058) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584724 / 0.215209 (0.369515) | 5.792596 / 2.077655 (3.714941) | 2.683133 / 1.504120 (1.179013) | 2.367837 / 1.541195 (0.826643) | 2.378567 / 1.468490 (0.910076) | 0.803427 / 4.584777 (-3.781350) | 5.179017 / 3.745712 (1.433305) | 4.446323 / 5.269862 (-0.823538) | 2.771731 / 4.565676 (-1.793945) | 0.100943 / 0.424275 (-0.323332) | 0.009875 / 0.007607 (0.002268) | 0.725260 / 0.226044 (0.499216) | 7.149728 / 2.268929 (4.880800) | 3.646438 / 55.444624 (-51.798187) | 2.793858 / 6.876477 (-4.082618) | 2.971966 / 2.142072 (0.829894) | 0.998147 / 4.805227 (-3.807080) | 0.198004 / 6.500664 (-6.302660) | 0.072581 / 0.075469 (-0.002888) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.696737 / 1.841788 (-0.145051) | 22.615193 / 8.074308 (14.540884) | 20.272421 / 10.191392 (10.081029) | 0.237459 / 0.680424 (-0.442965) | 0.034774 / 0.534201 (-0.499427) | 0.484649 / 0.579283 (-0.094634) | 0.590263 / 0.434364 (0.155899) | 0.547833 / 0.540337 (0.007495) | 0.762109 / 1.386936 (-0.624827) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4bc3628b5a8f71ad7cfc014d8ba5e798f26becb7 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011183 / 0.011353 (-0.000170) | 0.005267 / 0.011008 (-0.005741) | 0.108506 / 0.038508 (0.069997) | 0.083541 / 0.023109 (0.060431) | 0.452189 / 0.275898 (0.176291) | 0.496229 / 0.323480 (0.172749) | 0.004951 / 0.007986 (-0.003035) | 0.004452 / 0.004328 (0.000124) | 0.085133 / 0.004250 (0.080883) | 0.061291 / 0.037052 (0.024239) | 0.450453 / 0.258489 (0.191964) | 0.506456 / 0.293841 (0.212616) | 0.049784 / 0.128546 (-0.078762) | 0.014738 / 0.075646 (-0.060908) | 0.372603 / 0.419271 (-0.046669) | 0.065223 / 0.043533 (0.021690) | 0.467872 / 0.255139 (0.212733) | 0.500062 / 0.283200 (0.216862) | 0.040911 / 0.141683 (-0.100772) | 1.852970 / 1.452155 (0.400816) | 2.016996 / 1.492716 (0.524280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262620 / 0.018006 (0.244614) | 0.593925 / 0.000490 (0.593435) | 0.000413 / 0.000200 (0.000213) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035713 / 0.037411 (-0.001698) | 0.111403 / 0.014526 (0.096878) | 0.117259 / 0.176557 (-0.059298) | 0.201545 / 0.737135 (-0.535590) | 0.133111 / 0.296338 (-0.163228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.597318 / 0.215209 (0.382109) | 5.882691 / 2.077655 (3.805036) | 2.572203 / 1.504120 (1.068083) | 2.248016 / 1.541195 (0.706821) | 2.359103 / 1.468490 (0.890613) | 0.852023 / 4.584777 (-3.732754) | 5.270831 / 3.745712 (1.525119) | 4.712915 / 5.269862 (-0.556947) | 3.124295 / 4.565676 (-1.441381) | 0.092045 / 0.424275 (-0.332230) | 0.007834 / 0.007607 (0.000227) | 0.695711 / 0.226044 (0.469666) | 7.011760 / 2.268929 (4.742831) | 3.333300 / 55.444624 (-52.111325) | 2.745889 / 6.876477 (-4.130587) | 3.153458 / 2.142072 (1.011385) | 1.011089 / 4.805227 (-3.794139) | 0.207467 / 6.500664 (-6.293197) | 0.079802 / 0.075469 (0.004333) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.703784 / 1.841788 (-0.138003) | 24.414340 / 8.074308 (16.340032) | 22.534528 / 10.191392 (12.343136) | 0.276129 / 0.680424 (-0.404295) | 0.027954 / 0.534201 (-0.506247) | 0.484261 / 0.579283 (-0.095022) | 0.605316 / 0.434364 (0.170952) | 0.557219 / 0.540337 
(0.016882) | 0.802209 / 1.386936 (-0.584727) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009109 / 0.011353 (-0.002244) | 0.005376 / 0.011008 (-0.005632) | 0.085141 / 0.038508 (0.046633) | 0.100560 / 0.023109 (0.077450) | 0.482673 / 0.275898 (0.206775) | 0.551582 / 0.323480 (0.228103) | 0.006756 / 0.007986 (-0.001229) | 0.004171 / 0.004328 (-0.000158) | 0.084184 / 0.004250 (0.079933) | 0.069283 / 0.037052 (0.032230) | 0.517722 / 0.258489 (0.259233) | 0.542641 / 0.293841 (0.248801) | 0.047790 / 0.128546 (-0.080756) | 0.014063 / 0.075646 (-0.061583) | 0.110591 / 0.419271 (-0.308680) | 0.064373 / 0.043533 (0.020840) | 0.496636 / 0.255139 (0.241497) | 0.551906 / 0.283200 (0.268707) | 0.046187 / 0.141683 (-0.095496) | 1.864836 / 1.452155 (0.412681) | 1.923765 / 1.492716 (0.431049) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286558 / 0.018006 (0.268552) | 0.610353 / 0.000490 (0.609863) | 0.012647 / 0.000200 (0.012447) | 0.000162 / 0.000054 (0.000107) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037099 / 0.037411 (-0.000313) | 0.108608 / 0.014526 (0.094082) | 0.120386 / 0.176557 (-0.056170) | 0.183450 / 0.737135 (-0.553686) | 0.124860 / 0.296338 (-0.171479) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629006 / 0.215209 (0.413797) | 6.309206 / 2.077655 (4.231551) | 2.878558 / 1.504120 (1.374438) | 2.616093 / 1.541195 (1.074898) | 2.668096 / 
1.468490 (1.199606) | 0.865732 / 4.584777 (-3.719045) | 5.312433 / 3.745712 (1.566721) | 4.799352 / 5.269862 (-0.470509) | 3.142207 / 4.565676 (-1.423469) | 0.099591 / 0.424275 (-0.324684) | 0.009159 / 0.007607 (0.001552) | 0.730999 / 0.226044 (0.504954) | 7.486442 / 2.268929 (5.217513) | 3.657699 / 55.444624 (-51.786925) | 3.080094 / 6.876477 (-3.796383) | 3.320976 / 2.142072 (1.178904) | 1.089324 / 4.805227 (-3.715904) | 0.222831 / 6.500664 (-6.277833) | 0.083976 / 0.075469 (0.008507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.793181 / 1.841788 (-0.048607) | 25.307444 / 8.074308 (17.233136) | 21.321713 / 10.191392 (11.130321) | 0.216326 / 0.680424 (-0.464098) | 0.034298 / 0.534201 (-0.499903) | 0.497173 / 0.579283 (-0.082110) | 0.643550 / 0.434364 (0.209186) | 0.581213 / 0.540337 (0.040876) | 0.830973 / 1.386936 (-0.555963) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#24875bb8494c3a7803182b08c70747b1b1a6bf4d \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006886 / 0.011353 (-0.004467) | 0.004267 / 0.011008 (-0.006741) | 0.086182 / 0.038508 (0.047674) | 0.083405 / 0.023109 (0.060296) | 0.313717 / 0.275898 (0.037819) | 0.351476 / 0.323480 (0.027996) | 0.005702 / 0.007986 (-0.002284) | 0.003802 / 0.004328 (-0.000526) | 0.065759 / 0.004250 (0.061508) | 0.060056 / 0.037052 (0.023003) | 0.315871 / 0.258489 (0.057382) | 0.364520 / 0.293841 (0.070679) | 0.032067 / 0.128546 (-0.096479) | 0.008679 / 0.075646 (-0.066967) | 0.294968 / 0.419271 (-0.124303) | 0.054684 / 0.043533 (0.011152) | 0.314124 / 0.255139 (0.058985) | 0.337312 / 0.283200 (0.054113) | 0.025051 / 0.141683 (-0.116632) | 1.505242 / 1.452155 (0.053087) | 1.608263 / 1.492716 (0.115547) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266562 / 0.018006 (0.248556) | 0.579887 / 0.000490 (0.579397) | 0.004161 / 0.000200 
(0.003961) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031153 / 0.037411 (-0.006258) | 0.087703 / 0.014526 (0.073177) | 0.103864 / 0.176557 (-0.072693) | 0.159032 / 0.737135 (-0.578104) | 0.102482 / 0.296338 (-0.193857) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405805 / 0.215209 (0.190596) | 4.050669 / 2.077655 (1.973014) | 2.064384 / 1.504120 (0.560264) | 1.892825 / 1.541195 (0.351630) | 2.001083 / 1.468490 (0.532593) | 0.478174 / 4.584777 (-4.106603) | 3.542580 / 3.745712 (-0.203132) | 3.319205 / 5.269862 (-1.950656) | 2.075868 / 4.565676 (-2.489808) | 0.057345 / 0.424275 (-0.366930) | 0.007459 / 0.007607 (-0.000148) | 0.483564 / 0.226044 (0.257520) | 4.827746 / 2.268929 (2.558818) | 2.579541 / 55.444624 (-52.865083) | 2.205125 / 6.876477 (-4.671352) | 2.489206 / 2.142072 (0.347133) | 0.575843 / 4.805227 (-4.229384) | 0.133010 / 6.500664 (-6.367654) | 0.061082 / 0.075469 (-0.014387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286059 / 1.841788 (-0.555729) | 20.575173 / 8.074308 (12.500865) | 14.351692 / 10.191392 (4.160300) | 0.150401 / 0.680424 (-0.530022) | 0.018678 / 0.534201 (-0.515523) | 0.397860 / 0.579283 (-0.181423) | 0.419474 / 0.434364 (-0.014890) | 0.474492 / 0.540337 (-0.065846) | 0.659510 / 1.386936 (-0.727426) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006948 / 0.011353 (-0.004405) | 0.004305 / 0.011008 (-0.006703) | 0.064220 / 0.038508 (0.025712) | 0.083251 / 0.023109 (0.060142) | 0.388148 / 0.275898 (0.112250) | 0.417834 / 0.323480 (0.094354) | 0.005762 / 0.007986 (-0.002224) | 0.003803 / 0.004328 (-0.000525) | 0.066365 / 0.004250 (0.062114) | 0.061808 / 0.037052 (0.024756) | 0.390889 / 0.258489 (0.132400) | 0.430619 / 0.293841 (0.136778) | 0.031777 / 0.128546 (-0.096770) | 0.008781 / 0.075646 (-0.066865) | 0.070844 / 0.419271 (-0.348427) | 0.050552 / 0.043533 (0.007019) | 0.378420 / 0.255139 (0.123281) | 0.403273 / 0.283200 (0.120074) | 0.024578 / 0.141683 (-0.117105) | 1.494790 / 1.452155 (0.042636) | 1.549408 / 1.492716 (0.056692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.302668 / 0.018006 (0.284662) | 0.542235 / 0.000490 (0.541746) | 0.001847 / 0.000200 (0.001647) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031947 / 0.037411 (-0.005465) | 0.092220 / 0.014526 (0.077694) | 0.104525 / 0.176557 (-0.072031) | 0.162000 / 0.737135 (-0.575135) | 0.106795 / 0.296338 (-0.189543) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412035 / 0.215209 (0.196826) | 4.106527 / 2.077655 (2.028872) | 2.111529 / 1.504120 (0.607409) | 1.953201 / 1.541195 (0.412006) | 2.079258 / 1.468490 (0.610768) | 0.479562 / 4.584777 (-4.105215) | 3.606256 / 3.745712 (-0.139456) | 5.175250 / 5.269862 (-0.094612) | 3.292465 / 4.565676 (-1.273212) | 0.057726 / 0.424275 (-0.366549) | 0.008247 / 0.007607 (0.000640) | 0.486143 / 0.226044 (0.260098) | 4.859051 / 2.268929 (2.590123) | 2.675629 / 55.444624 (-52.768995) | 2.267448 / 6.876477 (-4.609029) | 2.567639 / 2.142072 (0.425567) | 0.580822 / 4.805227 (-4.224406) | 0.134942 / 6.500664 (-6.365722) | 0.063825 / 0.075469 (-0.011644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334421 / 1.841788 (-0.507367) | 20.481428 / 8.074308 (12.407120) | 14.227943 / 10.191392 (4.036551) | 0.170711 / 0.680424 (-0.509713) | 0.018212 / 0.534201 (-0.515989) | 0.397212 / 0.579283 (-0.182071) | 0.411934 / 0.434364 (-0.022430) | 0.478019 / 0.540337 (-0.062319) | 0.645434 / 1.386936 (-0.741502) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef3d3f10886e23a65cce3bfd939b8ec0d5a5c2c1 \"CML watermark\")\n" ]
"2023-07-24T12:36:38Z"
"2023-07-24T14:43:48Z"
"2023-07-24T14:34:43Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6062.diff", "html_url": "https://github.com/huggingface/datasets/pull/6062", "merged_at": "2023-07-24T14:34:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/6062.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6062" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6062/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6062/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2764/comments
https://api.github.com/repos/huggingface/datasets/issues/2764/events
https://github.com/huggingface/datasets/pull/2764
962,554,799
MDExOlB1bGxSZXF1ZXN0NzA1MzI3MDQ5
2,764
Add DER metric for SUPERB speaker diarization task
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "E3165C", "default": false, "description": "", "id": 4190228726, "name": "transfer-to-evaluate", "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate" } ]
closed
false
null
[]
null
[ "Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate" ]
"2021-08-06T09:12:36Z"
"2023-07-11T09:35:23Z"
"2023-07-11T09:35:23Z"
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/2764.diff", "html_url": "https://github.com/huggingface/datasets/pull/2764", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2764.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2764" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2764/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2764/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1203/comments
https://api.github.com/repos/huggingface/datasets/issues/1203/events
https://github.com/huggingface/datasets/pull/1203
757,935,170
MDExOlB1bGxSZXF1ZXN0NTMzMjAzMTc0
1,203
Add Neural Code Search Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/34424769?v=4", "events_url": "https://api.github.com/users/vinaykudari/events{/privacy}", "followers_url": "https://api.github.com/users/vinaykudari/followers", "following_url": "https://api.github.com/users/vinaykudari/following{/other_user}", "gists_url": "https://api.github.com/users/vinaykudari/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vinaykudari", "id": 34424769, "login": "vinaykudari", "node_id": "MDQ6VXNlcjM0NDI0NzY5", "organizations_url": "https://api.github.com/users/vinaykudari/orgs", "received_events_url": "https://api.github.com/users/vinaykudari/received_events", "repos_url": "https://api.github.com/users/vinaykudari/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vinaykudari/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vinaykudari/subscriptions", "type": "User", "url": "https://api.github.com/users/vinaykudari" }
[]
closed
false
null
[]
null
[ "> Really good thanks !\r\n> \r\n> I left a few comments\r\n\r\nThanks, resolved them :) ", "looks like this PR includes changes about many other files than the ones for Code Search\r\n\r\ncan you create another branch and another PR please ?", "> looks like this PR includes changes about many other files than the ones for Code Search\r\n> \r\n> can you create another branch and another PR please ?\r\n\r\nOkay sure" ]
"2020-12-06T14:12:39Z"
"2020-12-09T16:40:15Z"
"2020-12-09T16:40:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1203.diff", "html_url": "https://github.com/huggingface/datasets/pull/1203", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1203.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1203" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1203/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1203/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1899/comments
https://api.github.com/repos/huggingface/datasets/issues/1899/events
https://github.com/huggingface/datasets/pull/1899
810,308,332
MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4
1,899
Fix: ALT - fix duplicated examples in alt-parallel
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-02-17T15:53:56Z"
"2021-02-17T17:20:49Z"
"2021-02-17T17:20:49Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1899.diff", "html_url": "https://github.com/huggingface/datasets/pull/1899", "merged_at": "2021-02-17T17:20:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/1899.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1899" }
As noticed in #1898 by @10-zin, the examples of the `alt-parallel` configurations all have the same values for the `translation` field. This was due to a bad copy of a Python dict. This PR fixes that.
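The bug class described here — reusing one Python dict across yielded examples so every row aliases the same `translation` mapping — can be illustrated with a minimal sketch (not the actual loader code):

```python
# Sketch: dict aliasing across rows vs. taking a fresh copy per row.
translation = {}
buggy_rows = []
for lang, text in [("en", "hello"), ("fr", "bonjour")]:
    translation[lang] = text
    buggy_rows.append({"translation": translation})  # same object every time!

print(buggy_rows[0]["translation"] is buggy_rows[1]["translation"])  # True

translation = {}
fixed_rows = []
for lang, text in [("en", "hello"), ("fr", "bonjour")]:
    translation[lang] = text
    fixed_rows.append({"translation": dict(translation)})  # snapshot per row

print(fixed_rows[0]["translation"] is fixed_rows[1]["translation"])  # False
```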
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1899/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1899/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6243/comments
https://api.github.com/repos/huggingface/datasets/issues/6243/events
https://github.com/huggingface/datasets/pull/6243
1,898,532,784
PR_kwDODunzps5aclIy
6,243
Fix cast from fixed size list to variable size list
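For context, a hedged sketch of the cast in question using PyArrow directly: a fixed-size-list array converted to a plain variable-size list type. Whether `Array.cast` supports this natively depends on the PyArrow version, which is presumably why `datasets` needs its own handling for it.

```python
# Sketch: fixed-size list -> variable-size list cast in PyArrow.
import pyarrow as pa

fixed = pa.array([[1, 2], [3, 4]], type=pa.list_(pa.int64(), 2))
print(fixed.type)  # fixed_size_list<item: int64>[2]

# Native cast; may raise on older PyArrow versions that lack this path.
variable = fixed.cast(pa.list_(pa.int64()))
print(variable.type)  # list<item: int64>
```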
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006784 / 0.011353 (-0.004569) | 0.004051 / 0.011008 (-0.006957) | 0.083790 / 0.038508 (0.045282) | 0.081219 / 0.023109 (0.058110) | 0.313195 / 0.275898 (0.037297) | 0.336954 / 0.323480 (0.013475) | 0.004324 / 0.007986 (-0.003662) | 0.004516 / 0.004328 (0.000188) | 0.065051 / 0.004250 (0.060801) | 0.057647 / 0.037052 (0.020595) | 0.316675 / 0.258489 (0.058186) | 0.357936 / 0.293841 (0.064095) | 0.030980 / 0.128546 (-0.097566) | 0.008844 / 0.075646 (-0.066802) | 0.287027 / 0.419271 (-0.132245) | 0.052130 / 0.043533 (0.008597) | 0.308125 / 0.255139 (0.052986) | 0.337345 / 0.283200 (0.054145) | 0.025781 / 0.141683 (-0.115902) | 1.466161 / 1.452155 (0.014006) | 1.565824 / 1.492716 (0.073108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299112 / 0.018006 (0.281106) | 0.640520 / 0.000490 (0.640030) | 0.008846 / 0.000200 (0.008647) | 0.000273 / 0.000054 (0.000219) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029853 / 0.037411 (-0.007559) | 0.081697 / 0.014526 (0.067172) | 0.099110 / 0.176557 (-0.077447) | 0.155864 / 0.737135 (-0.581271) | 0.098749 / 0.296338 (-0.197590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.385722 / 0.215209 (0.170512) | 3.851490 / 2.077655 (1.773835) | 1.851995 / 1.504120 (0.347875) | 1.660398 / 1.541195 (0.119204) | 1.769370 / 1.468490 
(0.300879) | 0.481523 / 4.584777 (-4.103254) | 3.550449 / 3.745712 (-0.195263) | 3.424782 / 5.269862 (-1.845079) | 2.106470 / 4.565676 (-2.459206) | 0.056500 / 0.424275 (-0.367775) | 0.007891 / 0.007607 (0.000284) | 0.465564 / 0.226044 (0.239520) | 4.662892 / 2.268929 (2.393964) | 2.305424 / 55.444624 (-53.139201) | 1.980524 / 6.876477 (-4.895953) | 2.218423 / 2.142072 (0.076350) | 0.584662 / 4.805227 (-4.220565) | 0.132325 / 6.500664 (-6.368340) | 0.060773 / 0.075469 (-0.014696) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254261 / 1.841788 (-0.587527) | 19.479805 / 8.074308 (11.405497) | 14.222687 / 10.191392 (4.031295) | 0.149829 / 0.680424 (-0.530595) | 0.018630 / 0.534201 (-0.515571) | 0.395284 / 0.579283 (-0.183999) | 0.413385 / 0.434364 (-0.020978) | 0.462931 / 0.540337 (-0.077406) | 0.645359 / 1.386936 (-0.741577) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006991 / 0.011353 (-0.004362) | 0.004306 / 0.011008 (-0.006702) | 0.065213 / 0.038508 (0.026705) | 0.082442 / 0.023109 (0.059332) | 0.411294 / 0.275898 (0.135396) | 0.452176 / 0.323480 (0.128696) | 0.005802 / 0.007986 (-0.002183) | 0.003556 / 0.004328 (-0.000772) | 0.066163 / 0.004250 (0.061913) | 0.060680 / 0.037052 (0.023628) | 0.416975 / 0.258489 (0.158486) | 0.456353 / 0.293841 (0.162512) | 0.033584 / 0.128546 (-0.094963) | 0.008687 / 0.075646 (-0.066959) | 0.071300 / 0.419271 (-0.347972) | 0.049382 / 0.043533 (0.005849) | 0.409329 / 0.255139 (0.154190) | 0.434829 / 0.283200 (0.151629) | 0.022966 / 0.141683 (-0.118716) | 1.493847 / 1.452155 (0.041692) | 1.582372 / 1.492716 (0.089656) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280578 / 0.018006 (0.262572) | 0.538122 / 0.000490 (0.537632) | 0.004515 / 0.000200 (0.004315) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033383 / 0.037411 (-0.004028) | 0.093426 / 0.014526 (0.078901) | 0.109314 / 0.176557 (-0.067242) | 0.162349 / 0.737135 (-0.574786) | 0.109849 / 0.296338 (-0.186490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431073 / 0.215209 (0.215864) | 4.311942 / 2.077655 (2.234287) | 2.291170 / 1.504120 (0.787051) | 2.132266 / 1.541195 (0.591072) | 2.236526 / 1.468490 (0.768036) | 0.492001 / 4.584777 (-4.092776) | 3.523013 / 3.745712 (-0.222699) | 3.413481 / 5.269862 (-1.856381) | 2.112979 / 4.565676 (-2.452698) | 0.058654 / 0.424275 (-0.365621) | 0.007729 / 0.007607 (0.000121) | 0.512027 / 0.226044 (0.285982) | 5.125264 / 2.268929 (2.856336) | 2.836281 / 55.444624 (-52.608344) | 2.447253 / 6.876477 (-4.429224) | 2.711908 / 2.142072 (0.569835) | 0.592598 / 4.805227 (-4.212629) | 0.134837 / 6.500664 (-6.365827) | 0.059813 / 0.075469 (-0.015656) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.373464 / 1.841788 (-0.468323) | 20.548983 / 8.074308 (12.474675) | 14.799833 / 10.191392 (4.608441) | 0.168601 / 0.680424 (-0.511823) | 0.020358 / 0.534201 (-0.513843) | 0.398790 / 0.579283 (-0.180494) | 0.416921 / 0.434364 (-0.017443) | 0.480542 / 0.540337 (-0.059795) | 0.645062 / 1.386936 (-0.741874) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#afd6fc193a91cb0461c8bf3b64db6943c23de846 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008616 / 0.011353 (-0.002737) | 0.004957 / 0.011008 (-0.006051) | 0.102629 / 0.038508 (0.064121) | 0.080492 / 0.023109 (0.057383) | 0.461817 / 0.275898 (0.185919) | 0.487964 / 0.323480 (0.164484) | 0.006336 / 0.007986 (-0.001649) | 0.004607 / 0.004328 (0.000278) | 0.074311 / 0.004250 (0.070061) | 0.060368 / 0.037052 (0.023315) | 0.458076 / 0.258489 (0.199587) | 0.493028 / 0.293841 (0.199187) | 0.044153 / 0.128546 (-0.084394) | 0.014066 / 0.075646 (-0.061581) | 0.369848 / 0.419271 (-0.049424) | 0.061690 / 0.043533 (0.018157) | 0.439728 / 0.255139 (0.184590) | 0.484706 / 0.283200 (0.201506) | 0.034657 / 0.141683 (-0.107026) | 1.710591 / 1.452155 (0.258437) | 1.900225 / 1.492716 (0.407509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.308837 / 0.018006 (0.290831) | 0.579561 / 0.000490 (0.579072) | 0.010163 / 0.000200 (0.009963) | 0.000613 / 0.000054 (0.000558) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028108 / 0.037411 (-0.009303) | 0.085072 / 0.014526 (0.070546) | 0.103375 / 0.176557 (-0.073182) | 0.173765 / 0.737135 (-0.563371) | 0.102460 / 0.296338 (-0.193879) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602642 / 0.215209 (0.387433) | 5.582537 / 2.077655 (3.504882) | 2.405553 / 1.504120 (0.901434) | 2.057298 / 1.541195 (0.516103) | 2.223787 / 1.468490 (0.755297) | 0.846138 / 4.584777 (-3.738638) | 5.290306 / 3.745712 (1.544594) | 4.836066 / 5.269862 (-0.433795) | 2.951901 / 4.565676 (-1.613775) | 0.099432 / 0.424275 (-0.324843) | 0.009198 / 0.007607 (0.001591) | 0.731370 / 0.226044 (0.505325) | 6.663026 / 2.268929 (4.394098) | 3.200932 / 55.444624 (-52.243692) | 2.486654 / 6.876477 (-4.389823) | 2.833195 / 2.142072 (0.691123) | 0.989481 / 4.805227 (-3.815746) | 0.205176 / 6.500664 (-6.295488) | 0.073760 / 0.075469 (-0.001709) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745494 / 1.841788 (-0.096294) | 24.649294 / 8.074308 (16.574986) | 22.312182 / 10.191392 (12.120790) | 0.245207 / 0.680424 (-0.435217) | 0.031971 / 0.534201 (-0.502230) | 0.495179 / 0.579283 (-0.084104) | 0.603233 / 0.434364 (0.168869) | 0.560906 / 0.540337 (0.020569) | 0.788292 / 
1.386936 (-0.598644) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008922 / 0.011353 (-0.002431) | 0.005203 / 0.011008 (-0.005805) | 0.074414 / 0.038508 (0.035906) | 0.077552 / 0.023109 (0.054443) | 0.547217 / 0.275898 (0.271319) | 0.625298 / 0.323480 (0.301818) | 0.006135 / 0.007986 (-0.001851) | 0.004163 / 0.004328 (-0.000165) | 0.078014 / 0.004250 (0.073764) | 0.064484 / 0.037052 (0.027431) | 0.562356 / 0.258489 (0.303867) | 0.643613 / 0.293841 (0.349772) | 0.050155 / 0.128546 (-0.078391) | 0.013665 / 0.075646 (-0.061981) | 0.090224 / 0.419271 (-0.329048) | 0.063852 / 0.043533 (0.020319) | 0.560914 / 0.255139 (0.305775) | 0.591531 / 0.283200 (0.308331) | 0.036491 / 0.141683 (-0.105192) | 1.670898 / 1.452155 (0.218743) | 1.783924 / 1.492716 (0.291208) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.312764 / 0.018006 (0.294758) | 0.611116 / 0.000490 (0.610626) | 0.006367 / 0.000200 (0.006167) | 0.000130 / 0.000054 (0.000075) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033967 / 0.037411 (-0.003445) | 0.101550 / 0.014526 (0.087025) | 0.116953 / 0.176557 (-0.059604) | 0.180061 / 0.737135 (-0.557075) | 0.115220 / 0.296338 (-0.181118) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.642110 / 0.215209 (0.426901) | 6.361381 / 2.077655 (4.283727) | 2.948175 / 1.504120 (1.444055) | 2.633935 / 1.541195 (1.092740) | 2.822150 / 1.468490 (1.353660) | 
0.931412 / 4.584777 (-3.653365) | 5.428540 / 3.745712 (1.682828) | 4.672920 / 5.269862 (-0.596941) | 3.102046 / 4.565676 (-1.463630) | 0.100825 / 0.424275 (-0.323450) | 0.009464 / 0.007607 (0.001857) | 0.774102 / 0.226044 (0.548058) | 7.715003 / 2.268929 (5.446074) | 3.987807 / 55.444624 (-51.456817) | 3.089129 / 6.876477 (-3.787347) | 3.333247 / 2.142072 (1.191174) | 1.012427 / 4.805227 (-3.792800) | 0.200662 / 6.500664 (-6.300002) | 0.072422 / 0.075469 (-0.003047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.680364 / 1.841788 (-0.161424) | 24.484576 / 8.074308 (16.410268) | 21.920990 / 10.191392 (11.729598) | 0.218604 / 0.680424 (-0.461820) | 0.035818 / 0.534201 (-0.498383) | 0.470648 / 0.579283 (-0.108635) | 0.585108 / 0.434364 (0.150744) | 0.539152 / 0.540337 (-0.001185) | 0.763999 / 1.386936 (-0.622937) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cfed1d09ed6c680085624d96eb99bfb2b0b27599 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006304 / 0.011353 (-0.005049) | 0.003884 / 0.011008 (-0.007125) | 0.084847 / 0.038508 (0.046339) | 0.069372 / 0.023109 (0.046263) | 0.318876 / 0.275898 (0.042978) | 0.344733 / 0.323480 (0.021253) | 0.005139 / 0.007986 (-0.002847) | 0.003203 / 0.004328 (-0.001125) | 0.065758 / 0.004250 (0.061507) | 0.054189 / 0.037052 (0.017137) | 0.317475 / 0.258489 (0.058986) | 0.359310 / 0.293841 (0.065469) | 0.030639 / 0.128546 (-0.097908) | 0.008657 / 0.075646 (-0.066989) | 0.289127 / 0.419271 (-0.130144) | 0.052344 / 0.043533 (0.008811) | 0.316122 / 0.255139 (0.060983) | 0.338339 / 0.283200 (0.055140) | 0.022677 / 0.141683 (-0.119006) | 1.551629 / 1.452155 (0.099474) | 1.617917 / 1.492716 (0.125201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231067 / 0.018006 (0.213061) | 0.450559 / 0.000490 (0.450070) | 0.008484 / 0.000200 (0.008284) | 0.000234 
/ 0.000054 (0.000179) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027054 / 0.037411 (-0.010357) | 0.081560 / 0.014526 (0.067034) | 0.094162 / 0.176557 (-0.082395) | 0.148583 / 0.737135 (-0.588552) | 0.093596 / 0.296338 (-0.202742) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388616 / 0.215209 (0.173407) | 3.874905 / 2.077655 (1.797251) | 1.915845 / 1.504120 (0.411725) | 1.746410 / 1.541195 (0.205215) | 1.828789 / 1.468490 (0.360299) | 0.483270 / 4.584777 (-4.101506) | 3.489157 / 3.745712 (-0.256555) | 3.190086 / 5.269862 (-2.079776) | 1.978023 / 4.565676 (-2.587653) | 0.056290 / 0.424275 (-0.367985) | 0.007585 / 0.007607 (-0.000022) | 0.467051 / 0.226044 (0.241007) | 4.665971 / 2.268929 (2.397043) | 2.418550 / 55.444624 (-53.026075) | 2.048338 / 6.876477 (-4.828139) | 2.225275 / 2.142072 (0.083203) | 0.576601 / 4.805227 (-4.228626) | 0.131960 / 6.500664 (-6.368704) | 0.060177 / 0.075469 (-0.015292) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249797 / 1.841788 (-0.591991) | 18.552939 / 8.074308 (10.478631) | 14.016616 / 10.191392 (3.825224) | 0.162869 / 0.680424 (-0.517555) | 0.018105 / 0.534201 (-0.516096) | 0.394838 / 0.579283 (-0.184445) | 0.403378 / 0.434364 (-0.030986) | 0.460931 / 0.540337 (-0.079407) | 0.637365 / 1.386936 (-0.749571) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006497 / 0.011353 (-0.004856) | 0.003928 / 0.011008 (-0.007080) | 0.063958 / 0.038508 (0.025450) | 0.069609 / 0.023109 (0.046500) | 0.401599 / 0.275898 (0.125701) | 0.428128 / 0.323480 (0.104648) | 0.005296 / 0.007986 (-0.002689) | 0.003332 / 0.004328 (-0.000996) | 0.063903 / 0.004250 (0.059652) | 0.056303 / 0.037052 (0.019250) | 0.400704 / 0.258489 (0.142214) | 0.435982 / 0.293841 (0.142141) | 0.032434 / 0.128546 (-0.096112) | 0.008570 / 0.075646 (-0.067077) | 0.070788 / 0.419271 (-0.348483) | 0.048252 / 0.043533 (0.004719) | 0.403269 / 0.255139 (0.148130) | 0.419796 / 0.283200 (0.136596) | 0.022598 / 0.141683 (-0.119085) | 1.481627 / 1.452155 (0.029472) | 1.578388 / 1.492716 (0.085672) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224552 / 0.018006 (0.206546) | 0.444059 / 0.000490 (0.443570) | 0.003757 / 0.000200 (0.003557) | 0.000225 / 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032173 / 0.037411 (-0.005239) | 0.092562 / 0.014526 (0.078036) | 0.104972 / 0.176557 (-0.071584) | 0.156467 / 0.737135 (-0.580669) | 0.104274 / 0.296338 (-0.192065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441693 / 0.215209 (0.226484) | 4.400217 / 2.077655 (2.322562) | 2.393862 / 1.504120 (0.889742) | 2.281178 / 1.541195 (0.739983) | 2.339895 / 1.468490 (0.871405) | 0.488734 / 4.584777 (-4.096043) | 3.523352 / 3.745712 (-0.222360) | 3.216761 / 5.269862 (-2.053101) | 2.007553 / 4.565676 (-2.558123) | 0.058050 / 0.424275 (-0.366225) | 0.007566 / 0.007607 (-0.000041) | 0.515439 / 0.226044 (0.289394) | 5.155086 / 2.268929 (2.886157) | 2.864958 / 55.444624 (-52.579666) | 2.592460 / 6.876477 (-4.284016) | 2.800449 / 2.142072 (0.658376) | 0.588441 / 4.805227 (-4.216786) | 0.131589 / 6.500664 (-6.369075) | 0.059075 / 0.075469 (-0.016394) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353889 / 1.841788 (-0.487898) | 18.938285 / 8.074308 (10.863977) | 14.937141 / 10.191392 (4.745749) | 0.168811 / 0.680424 (-0.511613) | 0.020118 / 0.534201 (-0.514083) | 0.394791 / 0.579283 (-0.184492) | 0.414434 / 0.434364 (-0.019930) | 0.466821 / 0.540337 (-0.073517) | 0.629894 / 1.386936 (-0.757042) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#23921b08390db7dbb3186a8de40dc49a4066da76 \"CML watermark\")\n", "CI failures are unrelated", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005959 / 0.011353 (-0.005394) | 0.004164 / 0.011008 (-0.006844) | 0.082336 / 0.038508 (0.043828) | 0.070344 / 0.023109 (0.047234) | 0.348032 / 0.275898 (0.072134) | 0.366328 / 0.323480 (0.042848) | 0.003882 / 0.007986 (-0.004104) | 0.003619 / 0.004328 (-0.000709) | 0.063343 / 0.004250 (0.059093) | 0.056617 / 0.037052 (0.019564) | 0.351625 / 0.258489 (0.093136) | 0.395839 / 0.293841 (0.101998) | 0.030842 / 0.128546 (-0.097704) | 0.008363 / 0.075646 (-0.067284) | 0.300535 / 0.419271 (-0.118737) | 0.053303 / 0.043533 (0.009770) | 0.354782 / 0.255139 (0.099643) | 0.364918 / 0.283200 (0.081719) | 0.025365 / 0.141683 (-0.116318) | 1.555009 / 1.452155 (0.102854) | 1.597443 / 1.492716 (0.104727) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239808 / 0.018006 (0.221801) | 0.488164 / 0.000490 (0.487675) | 0.013183 / 0.000200 (0.012983) | 0.000483 / 0.000054 (0.000429) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027938 / 0.037411 (-0.009473) | 0.078521 / 0.014526 (0.063995) | 0.095498 / 0.176557 (-0.081059) | 0.150884 / 0.737135 (-0.586251) | 0.097577 / 0.296338 (-0.198762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) 
| 0.384546 / 0.215209 (0.169337) | 4.037707 / 2.077655 (1.960053) | 1.940321 / 1.504120 (0.436201) | 1.716741 / 1.541195 (0.175546) | 1.837200 / 1.468490 (0.368710) | 0.502112 / 4.584777 (-4.082665) | 3.770452 / 3.745712 (0.024740) | 3.325691 / 5.269862 (-1.944171) | 2.015622 / 4.565676 (-2.550055) | 0.056246 / 0.424275 (-0.368029) | 0.007320 / 0.007607 (-0.000287) | 0.445553 / 0.226044 (0.219509) | 4.567233 / 2.268929 (2.298304) | 2.319531 / 55.444624 (-53.125093) | 1.968664 / 6.876477 (-4.907813) | 2.122349 / 2.142072 (-0.019724) | 0.573688 / 4.805227 (-4.231540) | 0.131410 / 6.500664 (-6.369254) | 0.062767 / 0.075469 (-0.012702) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255244 / 1.841788 (-0.586543) | 19.042480 / 8.074308 (10.968172) | 13.935342 / 10.191392 (3.743950) | 0.161259 / 0.680424 (-0.519165) | 0.020582 / 0.534201 (-0.513619) | 0.391365 / 0.579283 (-0.187918) | 0.417462 / 0.434364 (-0.016902) | 0.473121 / 0.540337 (-0.067216) | 0.674768 / 1.386936 (-0.712168) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006299 / 0.011353 (-0.005054) | 0.003969 / 0.011008 (-0.007040) | 0.063558 / 0.038508 (0.025050) | 0.073847 / 0.023109 (0.050738) | 0.407064 / 0.275898 (0.131166) | 0.440695 / 0.323480 (0.117215) | 0.005783 / 0.007986 (-0.002203) | 0.003517 / 0.004328 (-0.000812) | 0.065721 / 0.004250 (0.061470) | 0.056390 / 0.037052 (0.019338) | 0.419019 / 0.258489 (0.160530) | 0.450721 / 0.293841 (0.156880) | 0.034094 / 0.128546 (-0.094452) | 0.008594 / 0.075646 (-0.067052) | 0.069254 / 0.419271 (-0.350017) | 0.049218 / 0.043533 (0.005685) | 0.413312 / 0.255139 (0.158173) | 0.439454 / 0.283200 (0.156255) | 0.021481 / 0.141683 (-0.120202) | 1.517536 / 1.452155 (0.065382) | 1.530532 / 1.492716 (0.037815) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235392 / 0.018006 (0.217386) | 0.477371 / 0.000490 (0.476881) | 0.007070 / 0.000200 (0.006870) | 
0.000132 / 0.000054 (0.000077) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031909 / 0.037411 (-0.005502) | 0.092459 / 0.014526 (0.077933) | 0.105795 / 0.176557 (-0.070761) | 0.157745 / 0.737135 (-0.579390) | 0.104187 / 0.296338 (-0.192152) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424385 / 0.215209 (0.209176) | 4.445371 / 2.077655 (2.367716) | 2.423639 / 1.504120 (0.919519) | 2.188167 / 1.541195 (0.646972) | 2.171023 / 1.468490 (0.702532) | 0.483566 / 4.584777 (-4.101211) | 3.825702 / 3.745712 (0.079990) | 3.276350 / 5.269862 (-1.993512) | 2.063075 / 4.565676 (-2.502602) | 0.061628 / 0.424275 (-0.362647) | 0.008176 / 0.007607 (0.000569) | 0.506697 / 0.226044 (0.280653) | 5.067924 / 2.268929 (2.798995) | 2.785567 / 55.444624 (-52.659057) | 2.457340 / 6.876477 (-4.419137) | 2.599646 / 2.142072 (0.457574) | 0.581550 / 4.805227 (-4.223677) | 0.131712 / 6.500664 (-6.368952) | 0.058776 / 0.075469 (-0.016693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356639 / 1.841788 (-0.485148) | 20.103463 / 8.074308 (12.029155) | 14.481010 / 10.191392 (4.289618) | 0.162870 / 0.680424 (-0.517554) | 0.023197 / 0.534201 (-0.511004) | 0.413042 / 0.579283 (-0.166241) | 0.427494 / 0.434364 (-0.006870) | 0.508457 / 0.540337 (-0.031880) | 0.662412 / 1.386936 (-0.724524) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05fe5c06d42f84408b933c2809acb9b7449cbbb3 \"CML watermark\")\n" ]
"2023-09-15T14:23:33Z"
"2023-09-19T18:02:21Z"
"2023-09-19T17:53:17Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6243.diff", "html_url": "https://github.com/huggingface/datasets/pull/6243", "merged_at": "2023-09-19T17:53:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/6243.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6243" }
Fix #6242
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6243/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6243/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4603/comments
https://api.github.com/repos/huggingface/datasets/issues/4603/events
https://github.com/huggingface/datasets/issues/4603
1,289,963,331
I_kwDODunzps5M40dD
4,603
CI fails recurrently and randomly on Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
"2022-06-30T10:59:58Z"
"2022-06-30T13:22:25Z"
"2022-06-30T13:22:25Z"
MEMBER
null
null
null
As reported by @lhoestq, The windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install. In particular it seems that building the wheels fail. Here is an example of logs: ``` Building wheel for seqeval (setup.py): started Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6' No parent package detected, impossible to derive `name` running bdist_wheel running build running build_py package init file 'seqeval\__init__.py' not found (or not a regular file) package init file 'seqeval\metrics\__init__.py' not found (or not a regular file) C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. setuptools.SetuptoolsDeprecationWarning, installing to build\bdist.win-amd64\wheel running install running install_lib warning: install_lib: 'build\lib' does not exist -- no Python modules to install running install_egg_info running egg_info creating UNKNOWN.egg-info writing UNKNOWN.egg-info\PKG-INFO writing dependency_links to UNKNOWN.egg-info\dependency_links.txt writing top-level names to UNKNOWN.egg-info\top_level.txt writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' reading manifest file 'UNKNOWN.egg-info\SOURCES.txt' writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info running install_scripts creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it adding 'UNKNOWN-0.0.0.dist-info/METADATA' adding 'UNKNOWN-0.0.0.dist-info/WHEEL' adding 'UNKNOWN-0.0.0.dist-info/top_level.txt' adding 'UNKNOWN-0.0.0.dist-info/RECORD' removing build\bdist.win-amd64\wheel Building wheel for seqeval (setup.py): finished with status 'done' Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1 Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7 WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN' ```
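Aside (not part of the original report): setuptools falls back to the project name `UNKNOWN` when it cannot detect the package, so invalid wheels like the one in the log above can be spotted in the pip cache with a small Python sketch. The cache path below is the one from the log and is otherwise an assumption; adjust it for your machine.
```python
from pathlib import Path


def find_unknown_wheels(cache_dir: str) -> list[Path]:
    """Return wheels whose project name was lost at build time."""
    return sorted(Path(cache_dir).rglob("UNKNOWN-*.whl"))


if __name__ == "__main__":
    # Path taken from the CI log above; yields nothing if the directory is absent.
    for wheel in find_unknown_wheels(r"C:\Users\circleci\AppData\Local\pip\cache\wheels"):
        print(f"invalid wheel: {wheel}")
```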
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4603/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4603/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5932/comments
https://api.github.com/repos/huggingface/datasets/issues/5932/events
https://github.com/huggingface/datasets/pull/5932
1,746,249,161
PR_kwDODunzps5Sbrzo
5,932
[doc build] Use secrets
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008499 / 0.011353 (-0.002854) | 0.006155 / 0.011008 (-0.004853) | 0.124032 / 0.038508 (0.085524) | 0.037337 / 0.023109 (0.014228) | 0.389274 / 0.275898 (0.113376) | 0.427736 / 0.323480 (0.104257) | 0.006929 / 0.007986 (-0.001057) | 0.005017 / 0.004328 (0.000689) | 0.096356 / 0.004250 (0.092105) | 0.055694 / 0.037052 (0.018642) | 0.391417 / 0.258489 (0.132928) | 0.448098 / 0.293841 (0.154257) | 0.042442 / 0.128546 (-0.086105) | 0.013456 / 0.075646 (-0.062190) | 0.423502 / 0.419271 (0.004230) | 0.062919 / 0.043533 (0.019386) | 0.384317 / 0.255139 (0.129178) | 0.410851 / 0.283200 (0.127652) | 0.112807 / 0.141683 (-0.028875) | 1.746050 / 1.452155 (0.293895) | 1.977974 / 1.492716 (0.485257) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306382 / 0.018006 (0.288375) | 0.620310 / 0.000490 (0.619820) | 0.009309 / 0.000200 (0.009109) | 0.000106 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026900 / 0.037411 (-0.010511) | 0.140125 / 0.014526 (0.125599) | 0.136295 / 0.176557 (-0.040261) | 0.207721 / 0.737135 (-0.529414) | 0.146328 / 0.296338 (-0.150011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616712 / 0.215209 (0.401503) | 6.237820 / 2.077655 (4.160166) | 2.503809 / 1.504120 (0.999689) | 2.129739 / 1.541195 (0.588544) | 2.160768 / 1.468490 
(0.692277) | 0.971273 / 4.584777 (-3.613504) | 5.687161 / 3.745712 (1.941449) | 2.738148 / 5.269862 (-2.531713) | 1.692695 / 4.565676 (-2.872981) | 0.113701 / 0.424275 (-0.310574) | 0.014809 / 0.007607 (0.007202) | 0.774795 / 0.226044 (0.548750) | 7.660012 / 2.268929 (5.391083) | 3.253036 / 55.444624 (-52.191588) | 2.607498 / 6.876477 (-4.268979) | 2.681678 / 2.142072 (0.539606) | 1.095275 / 4.805227 (-3.709952) | 0.239078 / 6.500664 (-6.261586) | 0.081034 / 0.075469 (0.005565) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574547 / 1.841788 (-0.267240) | 18.323566 / 8.074308 (10.249258) | 19.274482 / 10.191392 (9.083090) | 0.210275 / 0.680424 (-0.470149) | 0.031843 / 0.534201 (-0.502358) | 0.514843 / 0.579283 (-0.064440) | 0.633782 / 0.434364 (0.199418) | 0.588569 / 0.540337 (0.048232) | 0.721401 / 1.386936 (-0.665535) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008866 / 0.011353 (-0.002487) | 0.006460 / 0.011008 (-0.004548) | 0.121337 / 0.038508 (0.082829) | 0.033896 / 0.023109 (0.010786) | 0.455702 / 0.275898 (0.179804) | 0.509685 / 0.323480 (0.186205) | 0.007650 / 0.007986 (-0.000336) | 0.005578 / 0.004328 (0.001250) | 0.098505 / 0.004250 (0.094255) | 0.056122 / 0.037052 (0.019069) | 0.478483 / 0.258489 (0.219994) | 0.560008 / 0.293841 (0.266167) | 0.044926 / 0.128546 (-0.083620) | 0.014562 / 0.075646 (-0.061085) | 0.115027 / 0.419271 (-0.304244) | 0.066494 / 0.043533 (0.022961) | 0.463434 / 0.255139 (0.208296) | 0.513856 / 0.283200 (0.230656) | 0.126436 / 0.141683 (-0.015247) | 1.874729 / 1.452155 (0.422575) | 1.925080 / 1.492716 (0.432364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012672 / 0.018006 (-0.005334) | 0.615797 / 0.000490 (0.615307) | 0.001606 / 0.000200 (0.001406) | 0.000118 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031104 / 0.037411 (-0.006307) | 0.130107 / 0.014526 (0.115581) | 0.140587 / 0.176557 (-0.035970) | 0.205081 / 0.737135 (-0.532054) | 0.144068 / 0.296338 (-0.152270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.646549 / 0.215209 (0.431340) | 6.403962 / 2.077655 (4.326307) | 2.812594 / 1.504120 (1.308474) | 2.478480 / 1.541195 (0.937285) | 2.552385 / 1.468490 (1.083895) | 0.991987 / 4.584777 (-3.592790) | 5.777917 / 3.745712 (2.032205) | 5.697830 / 5.269862 (0.427969) | 2.370583 / 4.565676 (-2.195094) | 0.109905 / 0.424275 (-0.314370) | 0.013801 / 0.007607 (0.006193) | 0.799932 / 0.226044 (0.573888) | 8.155672 / 2.268929 (5.886743) | 3.711662 / 55.444624 (-51.732963) | 3.042164 / 6.876477 (-3.834312) | 3.073549 / 2.142072 (0.931477) | 1.137515 / 4.805227 (-3.667712) | 0.231266 / 6.500664 (-6.269398) | 0.080893 / 0.075469 (0.005424) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.669210 / 1.841788 (-0.172577) | 18.747144 / 8.074308 (10.672836) | 21.084589 / 10.191392 (10.893197) | 0.241379 / 0.680424 (-0.439045) | 0.029473 / 0.534201 (-0.504728) | 0.524605 / 0.579283 (-0.054678) | 0.622852 / 0.434364 (0.188488) | 0.604941 / 0.540337 (0.064604) | 0.715978 / 1.386936 (-0.670958) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#142484a60b1330359d7713e906fc9e5e30aa9f64 \"CML watermark\")\n", "Cool ! 
what about `.github/workflows/build_pr_documentation.yml` and `.github/workflows/delete_doc_comment.yml` ?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005973 / 0.011353 (-0.005380) | 0.004389 / 0.011008 (-0.006620) | 0.096076 / 0.038508 (0.057568) | 0.031569 / 0.023109 (0.008460) | 0.328300 / 0.275898 (0.052402) | 0.359356 / 0.323480 (0.035876) | 0.005378 / 0.007986 (-0.002607) | 0.003703 / 0.004328 (-0.000625) | 0.075251 / 0.004250 (0.071000) | 0.042340 / 0.037052 (0.005287) | 0.346103 / 0.258489 (0.087614) | 0.379896 / 0.293841 (0.086055) | 0.027493 / 0.128546 (-0.101053) | 0.009033 / 0.075646 (-0.066613) | 0.327829 / 0.419271 (-0.091442) | 0.064074 / 0.043533 (0.020541) | 0.337703 / 0.255139 (0.082564) | 0.355335 / 0.283200 (0.072136) | 0.101179 / 0.141683 (-0.040504) | 1.471738 / 1.452155 (0.019584) | 1.539031 / 1.492716 (0.046315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194097 / 0.018006 (0.176091) | 0.434190 / 0.000490 (0.433701) | 0.005730 / 0.000200 (0.005530) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025634 / 0.037411 (-0.011778) | 0.105080 / 0.014526 (0.090555) | 0.116508 / 0.176557 (-0.060049) | 0.173867 / 0.737135 (-0.563269) | 0.117749 / 0.296338 (-0.178590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401566 / 0.215209 (0.186357) | 4.003558 / 
2.077655 (1.925903) | 1.802756 / 1.504120 (0.298636) | 1.604222 / 1.541195 (0.063027) | 1.656617 / 1.468490 (0.188127) | 0.523385 / 4.584777 (-4.061392) | 3.744292 / 3.745712 (-0.001420) | 1.794295 / 5.269862 (-3.475567) | 1.044690 / 4.565676 (-3.520987) | 0.064992 / 0.424275 (-0.359284) | 0.011542 / 0.007607 (0.003935) | 0.507830 / 0.226044 (0.281785) | 5.061574 / 2.268929 (2.792645) | 2.252896 / 55.444624 (-53.191729) | 1.912551 / 6.876477 (-4.963926) | 2.073510 / 2.142072 (-0.068562) | 0.642148 / 4.805227 (-4.163079) | 0.140151 / 6.500664 (-6.360513) | 0.062623 / 0.075469 (-0.012846) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180367 / 1.841788 (-0.661421) | 14.263475 / 8.074308 (6.189167) | 12.917251 / 10.191392 (2.725859) | 0.143815 / 0.680424 (-0.536608) | 0.017286 / 0.534201 (-0.516915) | 0.388411 / 0.579283 (-0.190872) | 0.430512 / 0.434364 (-0.003851) | 0.466595 / 0.540337 (-0.073742) | 0.564545 / 1.386936 (-0.822391) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006059 / 0.011353 (-0.005294) | 0.004419 / 0.011008 (-0.006590) | 0.074206 / 0.038508 (0.035697) | 0.031180 / 0.023109 (0.008071) | 0.380031 / 0.275898 (0.104133) | 0.410373 / 0.323480 (0.086893) | 0.005397 / 0.007986 (-0.002589) | 0.003952 / 0.004328 (-0.000376) | 0.074426 / 0.004250 (0.070176) | 0.046256 / 0.037052 (0.009203) | 0.385543 / 0.258489 (0.127054) | 0.430724 / 0.293841 (0.136883) | 0.028052 / 0.128546 (-0.100494) | 0.008810 / 0.075646 (-0.066836) | 0.080749 / 0.419271 (-0.338522) | 0.046746 / 0.043533 (0.003214) | 0.380325 / 0.255139 (0.125186) | 0.398901 / 0.283200 (0.115701) | 0.099607 / 0.141683 (-0.042076) | 1.433343 / 1.452155 (-0.018812) | 1.520447 / 1.492716 (0.027730) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202232 / 0.018006 (0.184225) | 0.431342 / 0.000490 (0.430852) | 0.001020 / 0.000200 (0.000820) | 0.000089 / 0.000054 (0.000035) |\n\n### 
Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028762 / 0.037411 (-0.008649) | 0.111777 / 0.014526 (0.097251) | 0.119283 / 0.176557 (-0.057273) | 0.168151 / 0.737135 (-0.568985) | 0.126093 / 0.296338 (-0.170245) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442689 / 0.215209 (0.227480) | 4.369202 / 2.077655 (2.291547) | 2.167703 / 1.504120 (0.663583) | 1.960580 / 1.541195 (0.419385) | 2.001459 / 1.468490 (0.532969) | 0.527169 / 4.584777 (-4.057608) | 3.738987 / 3.745712 (-0.006726) | 1.819002 / 5.269862 (-3.450860) | 1.082786 / 4.565676 (-3.482891) | 0.066209 / 0.424275 (-0.358066) | 0.011549 / 0.007607 (0.003942) | 0.545959 / 0.226044 (0.319915) | 5.466655 / 2.268929 (3.197727) | 2.671448 / 55.444624 (-52.773176) | 2.340968 / 6.876477 (-4.535509) | 2.358805 / 2.142072 (0.216733) | 0.649456 / 4.805227 (-4.155771) | 0.142009 / 6.500664 (-6.358655) | 0.064199 / 0.075469 (-0.011270) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259819 / 1.841788 (-0.581969) | 14.456988 / 8.074308 (6.382680) | 14.478982 / 10.191392 (4.287590) | 0.163156 / 0.680424 (-0.517268) | 0.017090 / 0.534201 (-0.517111) | 0.391339 / 0.579283 (-0.187944) | 0.422021 / 0.434364 (-0.012343) | 0.465340 / 0.540337 (-0.074997) | 0.564517 / 1.386936 (-0.822419) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#97358c88f996a65f49923ec215358044e4146a95 \"CML watermark\")\n", "> .github/workflows/delete_doc_comment.yml \r\n\r\nis already updated https://github.com/huggingface/datasets/pull/5932/files\r\n\r\n> .github/workflows/build_pr_documentation.yml\r\n\r\nindeed no changes are needed" ]
"2023-06-07T16:09:39Z"
"2023-06-09T10:16:58Z"
"2023-06-09T09:53:16Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5932.diff", "html_url": "https://github.com/huggingface/datasets/pull/5932", "merged_at": "2023-06-09T09:53:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/5932.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5932" }
Companion PR to https://github.com/huggingface/doc-builder/pull/379
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5932/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5932/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5056/comments
https://api.github.com/repos/huggingface/datasets/issues/5056/events
https://github.com/huggingface/datasets/pull/5056
1,394,713,173
PR_kwDODunzps5ADfxN
5,056
Fix broken URLs (GEM)
{ "avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4", "events_url": "https://api.github.com/users/manandey/events{/privacy}", "followers_url": "https://api.github.com/users/manandey/followers", "following_url": "https://api.github.com/users/manandey/following{/other_user}", "gists_url": "https://api.github.com/users/manandey/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/manandey", "id": 6687858, "login": "manandey", "node_id": "MDQ6VXNlcjY2ODc4NTg=", "organizations_url": "https://api.github.com/users/manandey/orgs", "received_events_url": "https://api.github.com/users/manandey/received_events", "repos_url": "https://api.github.com/users/manandey/repos", "site_admin": false, "starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manandey/subscriptions", "type": "User", "url": "https://api.github.com/users/manandey" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5056). All of your documentation changes will be reflected on that endpoint.", "Thanks, @manandey. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub." ]
"2022-10-03T13:13:22Z"
"2022-10-04T13:49:00Z"
"2022-10-04T13:48:59Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5056.diff", "html_url": "https://github.com/huggingface/datasets/pull/5056", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5056.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5056" }
This PR fixes the broken URLs in GEM. cc @lhoestq, @albertvillanova
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5056/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5056/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4535
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4535/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4535/comments
https://api.github.com/repos/huggingface/datasets/issues/4535/events
https://github.com/huggingface/datasets/pull/4535
1,278,365,039
PR_kwDODunzps46BnXq
4,535
Add `batch_size` parameter when calling `add_faiss_index` and `add_faiss_index_from_external_arrays`
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[]
closed
false
null
[]
null
[ "Also, I had a doubt while checking the code related to the indices... \r\n\r\n@lhoestq, there's a value in `config.py` named `DATASET_INDICES_FILENAME` which has the arrow extension (which I assume it should be `indices.faiss`, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an `ArrowDataset` in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/config.py#L183\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/arrow_dataset.py#L1079-L1092\r\n\r\nSo should I also remove that?\r\n\r\nP.S. I also edited the following code comment which I found misleading as it's not actually storing the indices.\r\n\r\nhttps://github.com/huggingface/datasets/blob/8ddc4bbeb1e2bd307b21f5d21f884649aa2bf640/src/datasets/arrow_dataset.py#L1122", "_The documentation is not available anymore as the PR was closed or merged._", "> @lhoestq, there's a value in config.py named DATASET_INDICES_FILENAME which has the arrow extension (which I assume it should be indices.faiss, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an ArrowDataset in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nThe arrow file is used to store an indices mapping (when you shuffle the dataset for example) - not for a faiss index ;)", "Ok cool thanks a lot for the explanation @lhoestq I was not sure about that :+1: I'll also add it there as you suggested!", "CI failures are unrelated to this PR and fixed on master, merging" ]
"2022-06-21T12:18:49Z"
"2022-06-27T16:25:09Z"
"2022-06-27T16:14:36Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4535.diff", "html_url": "https://github.com/huggingface/datasets/pull/4535", "merged_at": "2022-06-27T16:14:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/4535.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4535" }
Currently, although the `batch_size` used when adding vectors to the FAISS index can be tweaked in `FaissIndex.add_vectors()`, the function `ArrowDataset.add_faiss_index` has neither a `batch_size` parameter to propagate to the nested `FaissIndex.add_vectors` call nor `*args, **kwargs`. This PR therefore adds the `batch_size` parameter to both `ArrowDataset.add_faiss_index` and `ArrowDataset.add_faiss_index_from_external_arrays`. This is useful for tuning the `batch_size` according to the VM specifications.
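A minimal sketch of the intended usage once this is merged, assuming the signature described above; the repository name and the `embeddings` column are hypothetical placeholders:

```python
# Hedged sketch: assumes this PR's `batch_size` parameter on `add_faiss_index`.
import numpy as np
from datasets import load_dataset

ds = load_dataset("username/dataset-with-embeddings", split="train")  # hypothetical repo
# Feed vectors to the FAISS index in chunks of 512 rows, trading indexing
# speed against peak memory on the VM.
ds.add_faiss_index(column="embeddings", batch_size=512)
query = np.array(ds[0]["embeddings"], dtype=np.float32)
scores, examples = ds.get_nearest_examples("embeddings", query, k=5)
```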
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4535/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4535/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3516/comments
https://api.github.com/repos/huggingface/datasets/issues/3516/events
https://github.com/huggingface/datasets/pull/3516
1,092,657,738
PR_kwDODunzps4weYhE
3,516
dataset `asset` - change to raw.githubusercontent.com URLs
{ "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VictorSanh", "id": 16107619, "login": "VictorSanh", "node_id": "MDQ6VXNlcjE2MTA3NjE5", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "repos_url": "https://api.github.com/users/VictorSanh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "type": "User", "url": "https://api.github.com/users/VictorSanh" }
[]
closed
false
null
[]
null
[]
"2022-01-03T16:43:57Z"
"2022-01-03T17:39:02Z"
"2022-01-03T17:39:01Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3516.diff", "html_url": "https://github.com/huggingface/datasets/pull/3516", "merged_at": "2022-01-03T17:39:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/3516.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3516" }
Changed the URLs to the ones they were automatically redirecting to. Before this change, the download was failing.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3516/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3516/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4183/comments
https://api.github.com/repos/huggingface/datasets/issues/4183/events
https://github.com/huggingface/datasets/pull/4183
1,208,449,335
PR_kwDODunzps42bjXn
4,183
Document librispeech configs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I think the main purpose of #4179 was how to be able to load both configs into one, so should we maybe add this part of the code: https://github.com/huggingface/datasets/issues/4179#issuecomment-1102383717 \r\n\r\nto the doc? \r\n\r\nActually @lhoestq would this work given that they have different split names: https://huggingface.co/datasets/librispeech_asr#data-splits ? ", "This doc extension does not explain why I can't simply load the whole dataset. Or what workaround I need to get the whole dataset, which is what people usually want for Librispeech.", "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq, I can add a `\"all\"` config to Librispeech have the datasets already cached somewhere ", "I'm closing this PR then, feel free to continue the discussion in https://github.com/huggingface/datasets/issues/4179\r\n" ]
"2022-04-19T14:26:59Z"
"2023-09-24T10:02:24Z"
"2022-04-19T15:15:20Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4183.diff", "html_url": "https://github.com/huggingface/datasets/pull/4183", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4183.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4183" }
Added an example of how to load one config or the other.
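Presumably the added example looks something like the sketch below; the config and split names follow the dataset card, but the exact snippet in the PR may differ:

```python
from datasets import load_dataset

# Load one librispeech_asr config at a time ("clean" or "other").
clean = load_dataset("librispeech_asr", "clean", split="train.100")
other = load_dataset("librispeech_asr", "other", split="train.500")
```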
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4183/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4183/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6139
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6139/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6139/comments
https://api.github.com/repos/huggingface/datasets/issues/6139/events
https://github.com/huggingface/datasets/issues/6139
1,844,991,583
I_kwDODunzps5t-FZf
6,139
Offline dataset viewer
{ "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuvalkirstain", "id": 57996478, "login": "yuvalkirstain", "node_id": "MDQ6VXNlcjU3OTk2NDc4", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "type": "User", "url": "https://api.github.com/users/yuvalkirstain" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
null
[ "Hi, thanks for the suggestion. It's not possible at the moment. The viewer is part of the Hub codebase and only works on public datasets. Also, it relies on [Datasets Server](https://github.com/huggingface/datasets-server/), which prepares the data and provides an API to access the rows, size, etc.\r\n\r\nIf you're interested in hosting your data as a private dataset on the Hub, you might want to look at https://github.com/huggingface/datasets-server/issues/39.", "Hi, we are building an offline dataset viewer: https://github.com/Renumics/spotlight\r\nIt supports many HF datasets, but currently you have to use it via Pandas:\r\ndf=ds.to_pandas()\r\nspotlight.show(df)\r\n\r\nWould love to hear from you if that works for your use case. If not, feel free to open an issue on the repo: https://github.com/Renumics/spotlight/issues", "@ssuwelack thank you! I will definitely try it out.", "Related issues:\r\n- https://github.com/huggingface/datasets-server/issues/213\r\n- https://github.com/huggingface/datasets-server/issues/441\r\n- https://github.com/huggingface/datasets/issues/6014", "Closing for now, as developing and maintaining an offline viewer is not planned." ]
"2023-08-10T11:30:00Z"
"2023-09-29T13:10:23Z"
"2023-09-29T13:10:22Z"
NONE
null
null
null
### Feature request The dataset viewer feature is very nice. It enables the user to easily view the dataset. However, when working at private companies we cannot always upload the dataset to the Hub. Is there a way to create a dataset viewer offline? I.e., to run code that opens some kind of HTML page (or similar) that makes it easy to view the dataset. ### Motivation I want to easily view my dataset even when it is hosted locally. ### Your contribution N.A.
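One workaround raised later in the thread is the open-source Spotlight viewer; a hedged sketch, assuming `renumics-spotlight` is installed and using a hypothetical local path:

```python
# Browse a local/private dataset without uploading it to the Hub.
# Requires: pip install renumics-spotlight
from datasets import load_from_disk
from renumics import spotlight

ds = load_from_disk("/path/to/private_dataset")  # hypothetical local path
spotlight.show(ds.to_pandas())  # opens an interactive viewer in the browser
```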
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6139/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6139/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1907/comments
https://api.github.com/repos/huggingface/datasets/issues/1907/events
https://github.com/huggingface/datasets/issues/1907
811,520,569
MDU6SXNzdWU4MTE1MjA1Njk=
1,907
DBPedia14 Dataset Checksum bug?
{ "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/francisco-perez-sorrosal", "id": 918006, "login": "francisco-perez-sorrosal", "node_id": "MDQ6VXNlcjkxODAwNg==", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "type": "User", "url": "https://api.github.com/users/francisco-perez-sorrosal" }
[]
closed
false
null
[]
null
[ "Hi ! :)\r\n\r\nThis looks like the same issue as https://github.com/huggingface/datasets/issues/1856 \r\nBasically google drive has quota issues that makes it inconvenient for downloading files.\r\n\r\nIf the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).\r\n\r\nThe error says that the checksum of the downloaded file doesn't match because google drive returns a text file with the \"Quota Exceeded\" error instead of the actual data file.", "Thanks @lhoestq! Yes, it seems back to normal after a couple of days." ]
"2021-02-18T22:25:48Z"
"2021-02-22T23:22:05Z"
"2021-02-22T23:22:04Z"
CONTRIBUTOR
null
null
null
Hi there! I've been successfully using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase over the last couple of weeks, but in the last couple of days I've started getting this error: ``` Traceback (most recent call last): File "./conditional_classification/basic_pipeline.py", line 178, in <module> main() File "./conditional_classification/basic_pipeline.py", line 128, in main corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class, File "/home/fp/dev/conditional_classification/conditional_classification/datasets_base.py", line 83, in load_data datasets = load_dataset(self.name, split=dataset_split) File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/load.py", line 609, in load_dataset builder_instance.download_and_prepare( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 526, in download_and_prepare self._download_and_prepare( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 586, in _download_and_prepare verify_checksums( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k'] ``` I've seen this happen before in other datasets, as reported in #537. I've tried clearing my cache and calling `load_dataset` again, but it still isn't working. The same codebase successfully downloads and uses other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days. Can you please check if there's a problem with the checksums? Or is this related to something else? I've also noticed that the cache path for the dataset is `/home/fp/.cache/huggingface/datasets/d_bpedia14/dbpedia_14/2.0.0/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead of `dbpedia_14`. Was this perhaps a bug introduced recently? Thanks!
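As the maintainer's reply explains, the checksum mismatch comes from Google Drive returning a quota-error page instead of the data. Once the quota resets, forcing a fresh download should discard the cached error page; a hedged sketch:

```python
# Hedged sketch: retry after the ~24h Google Drive quota reset, forcing a
# fresh download so the cached "Quota Exceeded" page is replaced.
from datasets import load_dataset

ds = load_dataset("dbpedia_14", download_mode="force_redownload")
```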
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1907/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1907/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2152
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2152/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2152/comments
https://api.github.com/repos/huggingface/datasets/issues/2152/events
https://github.com/huggingface/datasets/pull/2152
845,751,273
MDExOlB1bGxSZXF1ZXN0NjA0ODk0MDkz
2,152
Update README.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4", "events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}", "followers_url": "https://api.github.com/users/JieyuZhao/followers", "following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}", "gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JieyuZhao", "id": 22306304, "login": "JieyuZhao", "node_id": "MDQ6VXNlcjIyMzA2MzA0", "organizations_url": "https://api.github.com/users/JieyuZhao/orgs", "received_events_url": "https://api.github.com/users/JieyuZhao/received_events", "repos_url": "https://api.github.com/users/JieyuZhao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions", "type": "User", "url": "https://api.github.com/users/JieyuZhao" }
[]
closed
false
null
[]
null
[]
"2021-03-31T03:21:19Z"
"2021-04-01T10:20:37Z"
"2021-04-01T10:20:36Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2152.diff", "html_url": "https://github.com/huggingface/datasets/pull/2152", "merged_at": "2021-04-01T10:20:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/2152.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2152" }
Updated some descriptions of the Wino_Bias dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2152/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2152/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4759
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4759/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4759/comments
https://api.github.com/repos/huggingface/datasets/issues/4759/events
https://github.com/huggingface/datasets/issues/4759
1,320,783,300
I_kwDODunzps5OuY3E
4,759
Dataset Viewer issue for Toygar/turkish-offensive-language-detection
{ "avatar_url": "https://avatars.githubusercontent.com/u/44132720?v=4", "events_url": "https://api.github.com/users/toygarr/events{/privacy}", "followers_url": "https://api.github.com/users/toygarr/followers", "following_url": "https://api.github.com/users/toygarr/following{/other_user}", "gists_url": "https://api.github.com/users/toygarr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/toygarr", "id": 44132720, "login": "toygarr", "node_id": "MDQ6VXNlcjQ0MTMyNzIw", "organizations_url": "https://api.github.com/users/toygarr/orgs", "received_events_url": "https://api.github.com/users/toygarr/received_events", "repos_url": "https://api.github.com/users/toygarr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/toygarr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/toygarr/subscriptions", "type": "User", "url": "https://api.github.com/users/toygarr" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[ "I refreshed the dataset viewer manually, it's fixed now. Sorry for the inconvenience.\r\n<img width=\"1557\" alt=\"Capture d’écran 2022-07-28 à 09 17 39\" src=\"https://user-images.githubusercontent.com/1676121/181514666-92d7f8e1-ddc1-4769-84f3-f1edfdb902e8.png\">\r\n\r\n" ]
"2022-07-28T11:21:43Z"
"2022-07-28T13:17:56Z"
"2022-07-28T13:17:48Z"
NONE
null
null
null
### Link https://huggingface.co/datasets/Toygar/turkish-offensive-language-detection ### Description Status code: 400 Exception: Status400Error Message: The dataset does not exist. Hi, I provided train.csv, test.csv and valid.csv files. However, the viewer says the dataset does not exist. Do I need to do anything else? ### Owner Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4759/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4759/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3856/comments
https://api.github.com/repos/huggingface/datasets/issues/3856/events
https://github.com/huggingface/datasets/pull/3856
1,162,522,034
PR_kwDODunzps40GUSf
3,856
Fix push_to_hub with null images
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3856). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-08T11:07:09Z"
"2022-03-08T15:22:17Z"
"2022-03-08T15:22:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3856.diff", "html_url": "https://github.com/huggingface/datasets/pull/3856", "merged_at": "2022-03-08T15:22:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/3856.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3856" }
This code currently raises an error because of the null image: ```python import datasets dataset_dict = { 'name': ['image001.jpg', 'image002.jpg'], 'image': ['cat.jpg', None] } features = datasets.Features({ 'name': datasets.Value('string'), 'image': datasets.Image(), }) dataset = datasets.Dataset.from_dict(dataset_dict, features) dataset.push_to_hub("username/dataset") # this line produces an error: 'NoneType' object is not subscriptable ``` I fixed this in this PR TODO: - [x] add a test
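A short hedged check of the expected behavior once the fix is in, reusing the hypothetical `username/dataset` repo from the snippet above:

```python
# After the fix, the null image should survive the push_to_hub round trip.
from datasets import load_dataset

reloaded = load_dataset("username/dataset", split="train")  # hypothetical repo
assert reloaded[1]["image"] is None  # the None entry is preserved, no error raised
```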
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3856/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3856/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4856
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4856/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4856/comments
https://api.github.com/repos/huggingface/datasets/issues/4856/events
https://github.com/huggingface/datasets/issues/4856
1,339,779,957
I_kwDODunzps5P22t1
4,856
File missing when calling load_dataset with openwebtext on Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/10361976?v=4", "events_url": "https://api.github.com/users/kingstarcraft/events{/privacy}", "followers_url": "https://api.github.com/users/kingstarcraft/followers", "following_url": "https://api.github.com/users/kingstarcraft/following{/other_user}", "gists_url": "https://api.github.com/users/kingstarcraft/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kingstarcraft", "id": 10361976, "login": "kingstarcraft", "node_id": "MDQ6VXNlcjEwMzYxOTc2", "organizations_url": "https://api.github.com/users/kingstarcraft/orgs", "received_events_url": "https://api.github.com/users/kingstarcraft/received_events", "repos_url": "https://api.github.com/users/kingstarcraft/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kingstarcraft/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kingstarcraft/subscriptions", "type": "User", "url": "https://api.github.com/users/kingstarcraft" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "I have tried to extract ```0015896-b1054262f7da52a0518521e29c8e352c.txt``` from ```17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz``` with 7-zip\r\nand put the file into cache_path ```F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa```\r\nthere is still raise the same error and I find the file was removed from cache_path after I run the run_mlm.py with ```python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base```." ]
"2022-08-16T04:04:22Z"
"2023-01-04T03:39:12Z"
"2023-01-04T03:39:12Z"
NONE
null
null
null
## Describe the bug 0015896-b1054262f7da52a0518521e29c8e352c.txt is missing when I run run_mlm.py with openwebtext. I checked the cache_path and cannot find 0015896-b1054262f7da52a0518521e29c8e352c.txt, but I can find this file inside 17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz with 7-zip. ## Steps to reproduce the bug ```sh python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base ``` or ```python from datasets import load_dataset load_dataset("openwebtext", None, cache_dir=None, use_auth_token=None) ``` ## Expected results Loading succeeds. ## Actual results Traceback (most recent call last): File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 704, in download_and_prepare self._download_and_prepare( File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 1227, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 795, in _download_and_prepare raise OSError( OSError: Cannot find data file. Original error: [Errno 22] Invalid argument: 'F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa/0015896-b1054262f7da52a0518521e29c8e352c.txt' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: windows - Python version: 3.8.5 - PyArrow version: 9.0.0
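The failing path starts with `F://`, i.e. a doubled slash after the drive letter. One thing to try, purely as an assumption rather than a confirmed fix, is pointing the cache at a normalized Windows path:

```python
# Hedged workaround sketch: use a normalized cache path (the doubled slash in
# "F://huggingface/datasets" is a guess at the culprit, not a confirmed cause).
from datasets import load_dataset

ds = load_dataset("openwebtext", cache_dir="F:/huggingface/datasets_cache")  # hypothetical path
```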
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4856/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4856/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6293/comments
https://api.github.com/repos/huggingface/datasets/issues/6293/events
https://github.com/huggingface/datasets/issues/6293
1,937,238,047
I_kwDODunzps5zd-gf
6,293
Choose columns to stream parquet data in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
"2023-10-11T08:59:36Z"
"2023-10-11T16:21:38Z"
"2023-10-11T16:21:38Z"
MEMBER
null
null
null
Currently, passing `columns=` to `load_dataset` in streaming mode fails: ``` Tried to load parquet data with columns '['link']' with mismatching features '{'caption': Value(dtype='string', id=None), 'image': {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='null', id=None)}, 'link': Value(dtype='string', id=None), 'message_id': Value(dtype='string', id=None), 'timestamp': Value(dtype='string', id=None)}' ``` Similar to https://github.com/huggingface/datasets/issues/6039; reported at https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/3#65259a09617407d4520f4ad9
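A minimal reproduction of the failure, based on the dataset from the linked discussion:

```python
# Streaming + column selection over Parquet data; this call currently raises
# the feature-mismatch error quoted above.
from datasets import load_dataset

ds = load_dataset(
    "laion/dalle-3-dataset",
    split="train",
    streaming=True,
    columns=["link"],
)
```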
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6293/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6293/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4432
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4432/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4432/comments
https://api.github.com/repos/huggingface/datasets/issues/4432/events
https://github.com/huggingface/datasets/pull/4432
1,255,523,720
PR_kwDODunzps441JmK
4,432
Fix builder docstring
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-06-01T09:45:30Z"
"2022-06-02T17:43:47Z"
"2022-06-02T17:35:15Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4432.diff", "html_url": "https://github.com/huggingface/datasets/pull/4432", "merged_at": "2022-06-02T17:35:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/4432.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4432" }
Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4432/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4432/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5982/comments
https://api.github.com/repos/huggingface/datasets/issues/5982/events
https://github.com/huggingface/datasets/issues/5982
1,770,333,296
I_kwDODunzps5phSRw
5,982
404 on Datasets Documentation Page
{ "avatar_url": "https://avatars.githubusercontent.com/u/118509387?v=4", "events_url": "https://api.github.com/users/kmulka-bloomberg/events{/privacy}", "followers_url": "https://api.github.com/users/kmulka-bloomberg/followers", "following_url": "https://api.github.com/users/kmulka-bloomberg/following{/other_user}", "gists_url": "https://api.github.com/users/kmulka-bloomberg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kmulka-bloomberg", "id": 118509387, "login": "kmulka-bloomberg", "node_id": "U_kgDOBxBPSw", "organizations_url": "https://api.github.com/users/kmulka-bloomberg/orgs", "received_events_url": "https://api.github.com/users/kmulka-bloomberg/received_events", "repos_url": "https://api.github.com/users/kmulka-bloomberg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kmulka-bloomberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kmulka-bloomberg/subscriptions", "type": "User", "url": "https://api.github.com/users/kmulka-bloomberg" }
[]
closed
false
null
[]
null
[ "This wasn’t working for me a bit earlier, but it looks to be back up now", "We had a minor issue updating the docs after the latest release. It should work now :)." ]
"2023-06-22T20:14:57Z"
"2023-06-26T15:45:03Z"
"2023-06-26T15:45:03Z"
NONE
null
null
null
### Describe the bug Getting a 404 from the Hugging Face Datasets docs page: https://huggingface.co/docs/datasets/index ### Steps to reproduce the bug 1. Go to URL https://huggingface.co/docs/datasets/index 2. Notice 404 not found ### Expected behavior The URL should either show the docs or redirect to the new location ### Environment info huggingface.co
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5982/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5982/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6117/comments
https://api.github.com/repos/huggingface/datasets/issues/6117/events
https://github.com/huggingface/datasets/pull/6117
1,835,213,848
PR_kwDODunzps5XHktw
6,117
Set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6117). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012516 / 0.011353 (0.001163) | 0.004725 / 0.011008 (-0.006283) | 0.112245 / 0.038508 (0.073736) | 0.079146 / 0.023109 (0.056037) | 0.386415 / 0.275898 (0.110517) | 0.420441 / 0.323480 (0.096961) | 0.005682 / 0.007986 (-0.002304) | 0.004169 / 0.004328 (-0.000160) | 0.077847 / 0.004250 (0.073597) | 0.055763 / 0.037052 (0.018711) | 0.385529 / 0.258489 (0.127040) | 0.422711 / 0.293841 (0.128870) | 0.047212 / 0.128546 (-0.081334) | 0.013711 / 0.075646 (-0.061935) | 0.342856 / 0.419271 (-0.076416) | 0.066788 / 0.043533 (0.023255) | 0.380728 / 0.255139 (0.125589) | 0.416241 / 0.283200 (0.133041) | 0.034676 / 0.141683 (-0.107007) | 1.679661 / 1.452155 (0.227506) | 1.838014 / 1.492716 (0.345297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219556 / 0.018006 (0.201550) | 0.524728 / 0.000490 (0.524238) | 0.005045 / 0.000200 (0.004845) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025475 / 0.037411 (-0.011936) | 0.085937 / 0.014526 (0.071412) | 0.099245 / 0.176557 (-0.077311) | 0.158995 / 0.737135 (-0.578141) | 0.101504 / 0.296338 (-0.194835) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.582200 / 0.215209 (0.366991) | 5.794340 / 2.077655 (3.716685) | 2.473635 / 1.504120 (0.969515) | 2.168135 / 1.541195 (0.626941) | 2.215886 / 1.468490 (0.747396) | 0.855599 / 4.584777 (-3.729178) | 5.003067 / 3.745712 (1.257354) | 4.503566 / 5.269862 (-0.766295) | 2.912248 / 4.565676 (-1.653428) | 0.103267 / 0.424275 (-0.321008) | 0.012114 / 0.007607 (0.004507) | 0.712240 / 0.226044 (0.486196) | 7.131946 / 2.268929 (4.863017) | 3.280052 / 55.444624 (-52.164573) | 2.583472 / 6.876477 (-4.293004) | 2.820758 / 2.142072 (0.678686) | 1.132097 / 4.805227 (-3.673131) | 0.232191 / 6.500664 (-6.268473) | 0.082966 / 0.075469 (0.007497) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.581125 / 1.841788 (-0.260662) | 22.723878 / 8.074308 (14.649570) | 19.969347 / 10.191392 (9.777955) | 0.234365 / 0.680424 (-0.446059) | 0.030245 / 0.534201 (-0.503956) | 0.470843 / 0.579283 (-0.108440) | 0.558069 / 0.434364 (0.123705) | 0.534878 / 0.540337 (-0.005460) | 0.801025 / 1.386936 (-0.585911) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008524 / 0.011353 (-0.002829) | 0.005083 / 0.011008 (-0.005925) | 0.078054 / 0.038508 (0.039546) | 0.082025 / 0.023109 (0.058915) | 0.458027 / 0.275898 (0.182129) | 0.498232 / 0.323480 (0.174752) | 0.005938 / 0.007986 (-0.002048) | 0.003776 / 0.004328 (-0.000553) | 0.080413 / 0.004250 (0.076163) | 0.060485 / 0.037052 (0.023433) | 0.462816 / 0.258489 (0.204327) | 0.513970 / 0.293841 (0.220129) | 0.047574 / 0.128546 (-0.080973) | 0.013424 / 0.075646 (-0.062222) | 0.087707 / 0.419271 (-0.331565) | 0.065007 / 0.043533 (0.021474) | 0.465844 / 0.255139 (0.210705) | 0.498474 / 0.283200 (0.215274) | 0.033518 / 0.141683 (-0.108164) | 1.737507 / 1.452155 (0.285352) | 1.848291 / 1.492716 (0.355574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316710 / 0.018006 (0.298703) | 0.504415 / 0.000490 (0.503925) | 0.042128 / 0.000200 
(0.041928) | 0.000171 / 0.000054 (0.000117) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032097 / 0.037411 (-0.005314) | 0.099371 / 0.014526 (0.084845) | 0.109311 / 0.176557 (-0.067246) | 0.177373 / 0.737135 (-0.559762) | 0.110753 / 0.296338 (-0.185585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.688060 / 0.215209 (0.472851) | 6.255219 / 2.077655 (4.177564) | 2.696845 / 1.504120 (1.192725) | 2.395424 / 1.541195 (0.854230) | 2.414870 / 1.468490 (0.946380) | 0.865704 / 4.584777 (-3.719073) | 5.086828 / 3.745712 (1.341116) | 4.648107 / 5.269862 (-0.621754) | 3.091119 / 4.565676 (-1.474558) | 0.101787 / 0.424275 (-0.322489) | 0.008829 / 0.007607 (0.001222) | 0.772398 / 0.226044 (0.546354) | 7.700366 / 2.268929 (5.431438) | 3.608632 / 55.444624 (-51.835992) | 2.923309 / 6.876477 (-3.953168) | 2.952141 / 2.142072 (0.810069) | 1.093006 / 4.805227 (-3.712221) | 0.224363 / 6.500664 (-6.276301) | 0.074927 / 0.075469 (-0.000542) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.638414 / 1.841788 (-0.203374) | 23.486781 / 8.074308 (15.412473) | 21.129104 / 10.191392 (10.937712) | 0.259955 / 0.680424 (-0.420469) | 0.027305 / 0.534201 (-0.506895) | 0.464448 / 0.579283 (-0.114835) | 0.553737 / 0.434364 (0.119373) | 0.571318 / 0.540337 (0.030981) | 0.772917 / 1.386936 (-0.614019) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3ec5ee9e78b464364796651d995823c7ecb0f951 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009093 / 0.011353 (-0.002260) | 0.005283 / 0.011008 (-0.005725) | 0.112299 / 0.038508 (0.073791) | 0.081341 / 0.023109 (0.058232) | 0.363799 / 0.275898 (0.087901) | 0.409261 / 0.323480 (0.085781) | 0.006400 / 0.007986 (-0.001586) | 0.003965 / 0.004328 (-0.000363) | 0.074389 / 0.004250 (0.070139) | 0.060654 / 0.037052 (0.023602) | 0.391046 / 0.258489 (0.132557) | 0.430514 / 0.293841 (0.136673) | 0.054900 / 0.128546 (-0.073646) | 0.017972 / 0.075646 (-0.057675) | 0.410875 / 0.419271 (-0.008396) | 0.067405 / 0.043533 (0.023873) | 0.371468 / 0.255139 (0.116329) | 0.435061 / 0.283200 (0.151861) | 0.038063 / 0.141683 (-0.103620) | 1.733509 / 1.452155 (0.281354) | 1.833899 / 1.492716 (0.341182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243230 / 0.018006 (0.225224) | 0.605636 / 0.000490 (0.605146) | 0.004890 / 0.000200 (0.004690) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027624 / 0.037411 (-0.009787) | 0.084799 / 0.014526 (0.070273) | 0.104405 / 0.176557 (-0.072152) | 0.165383 / 0.737135 (-0.571752) | 0.102083 / 0.296338 (-0.194255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578334 / 0.215209 (0.363125) | 5.369520 / 2.077655 (3.291866) | 2.294174 / 1.504120 (0.790055) | 2.054195 / 1.541195 (0.513000) | 2.007304 / 1.468490 (0.538814) | 0.839283 / 4.584777 (-3.745494) | 5.262288 / 3.745712 (1.516576) | 4.363346 / 5.269862 (-0.906516) | 2.854903 / 4.565676 (-1.710773) | 0.096975 / 0.424275 (-0.327300) | 0.008237 / 0.007607 (0.000630) | 0.646746 / 0.226044 (0.420702) | 6.250621 / 2.268929 (3.981693) | 2.900377 / 55.444624 (-52.544247) | 2.283238 / 6.876477 (-4.593239) | 2.443785 / 2.142072 (0.301713) | 0.991719 / 4.805227 (-3.813508) | 0.189755 / 6.500664 (-6.310909) | 0.067906 / 0.075469 (-0.007563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.515563 / 1.841788 (-0.326225) | 21.956499 / 8.074308 (13.882191) | 19.161750 / 10.191392 (8.970358) | 0.238199 / 0.680424 (-0.442225) | 0.026771 / 0.534201 (-0.507430) | 0.450195 / 0.579283 (-0.129088) | 0.585168 / 
0.434364 (0.150804) | 0.522945 / 0.540337 (-0.017393) | 0.776244 / 1.386936 (-0.610693) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007997 / 0.011353 (-0.003356) | 0.005021 / 0.011008 (-0.005988) | 0.087308 / 0.038508 (0.048800) | 0.077760 / 0.023109 (0.054650) | 0.425313 / 0.275898 (0.149415) | 0.451470 / 0.323480 (0.127990) | 0.006848 / 0.007986 (-0.001137) | 0.004812 / 0.004328 (0.000484) | 0.071198 / 0.004250 (0.066947) | 0.058325 / 0.037052 (0.021273) | 0.427411 / 0.258489 (0.168922) | 0.466069 / 0.293841 (0.172228) | 0.048686 / 0.128546 (-0.079861) | 0.011841 / 0.075646 (-0.063806) | 0.086225 / 0.419271 (-0.333047) | 0.060500 / 0.043533 (0.016967) | 0.435580 / 0.255139 (0.180441) | 0.456919 / 0.283200 (0.173719) | 0.035094 / 0.141683 (-0.106588) | 1.582805 / 1.452155 (0.130650) | 1.717838 / 1.492716 (0.225122) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283967 / 0.018006 (0.265960) | 0.517496 / 0.000490 (0.517006) | 0.014747 / 0.000200 (0.014547) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027870 / 0.037411 (-0.009541) | 0.083835 / 0.014526 (0.069309) | 0.099157 / 0.176557 (-0.077400) | 0.173210 / 0.737135 (-0.563925) | 0.094212 / 0.296338 (-0.202127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.535720 / 0.215209 (0.320511) | 5.273730 / 2.077655 (3.196075) | 2.422560 / 1.504120 (0.918440) | 
2.131416 / 1.541195 (0.590222) | 2.192000 / 1.468490 (0.723510) | 0.708469 / 4.584777 (-3.876308) | 4.758092 / 3.745712 (1.012380) | 3.940729 / 5.269862 (-1.329133) | 2.553093 / 4.565676 (-2.012583) | 0.084895 / 0.424275 (-0.339380) | 0.008730 / 0.007607 (0.001123) | 0.646975 / 0.226044 (0.420930) | 6.294811 / 2.268929 (4.025883) | 3.293964 / 55.444624 (-52.150660) | 2.568985 / 6.876477 (-4.307492) | 2.743786 / 2.142072 (0.601713) | 0.899733 / 4.805227 (-3.905494) | 0.193484 / 6.500664 (-6.307181) | 0.070012 / 0.075469 (-0.005457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502255 / 1.841788 (-0.339532) | 20.690234 / 8.074308 (12.615926) | 18.375791 / 10.191392 (8.184399) | 0.200135 / 0.680424 (-0.480289) | 0.029434 / 0.534201 (-0.504767) | 0.477267 / 0.579283 (-0.102016) | 0.566869 / 0.434364 (0.132505) | 0.543756 / 0.540337 (0.003418) | 0.700476 / 1.386936 (-0.686460) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef17d9fd6c648bb41d43ba301c3de4d7b6f833d8 \"CML watermark\")\n" ]
"2023-08-03T14:46:04Z"
"2023-08-03T14:56:59Z"
"2023-08-03T14:46:18Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6117.diff", "html_url": "https://github.com/huggingface/datasets/pull/6117", "merged_at": "2023-08-03T14:46:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/6117.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6117" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6117/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6117/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4798
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4798/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4798/comments
https://api.github.com/repos/huggingface/datasets/issues/4798/events
https://github.com/huggingface/datasets/pull/4798
1,330,699,942
PR_kwDODunzps48wVEG
4,798
Shard generator
{ "avatar_url": "https://avatars.githubusercontent.com/u/43296932?v=4", "events_url": "https://api.github.com/users/marianna13/events{/privacy}", "followers_url": "https://api.github.com/users/marianna13/followers", "following_url": "https://api.github.com/users/marianna13/following{/other_user}", "gists_url": "https://api.github.com/users/marianna13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marianna13", "id": 43296932, "login": "marianna13", "node_id": "MDQ6VXNlcjQzMjk2OTMy", "organizations_url": "https://api.github.com/users/marianna13/orgs", "received_events_url": "https://api.github.com/users/marianna13/received_events", "repos_url": "https://api.github.com/users/marianna13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marianna13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marianna13/subscriptions", "type": "User", "url": "https://api.github.com/users/marianna13" }
[]
closed
false
null
[]
null
[ "Hi, thanks!\r\n\r\n> I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that will allow to \"split\" these large datasets into chunks with equal size\r\n\r\n`map`, the method we use for processing in `datasets`, already does that if `batched=True`. And you can control the batch size with `batch_size`.\r\n\r\n> Even better - be able to run through these chunks one by one in simple and convenient way\r\n\r\nIt's not hard to do this \"manually\" with the existing API:\r\n```python\r\nbatch_size = <BATCH_SIZE>\r\nfor i in range(len(dset) // batch_size)\r\n shard = dset[i * batch_size:(i+1) * batch_size] # a dict of lists\r\n shard = Dataset.from_dict(shard)\r\n```\r\n(should be of similar performance to your implementation)\r\n\r\nStill, I think an API like that could be useful if implemented efficiently (see [this](https://discuss.huggingface.co/t/why-is-it-so-slow-to-access-data-through-iteration-with-hugginface-dataset/20385) discussion to understand what's the issue with `select`/`__getitem__` on which your implementation relies on), which can be done with `pa.Table.to_reader` in PyArrow 8.0.0+, .\r\n\r\n@lhoestq @albertvillanova wdyt? We could use such API to efficiently iterate over the batches in `map` before processing them.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4798). All of your documentation changes will be reflected on that endpoint.", "This is more efficient since it doesn't bring the data in memory:\r\n```python\r\nfor i in range(len(dset) // batch_size)\r\n start = i * batch_size\r\n end = min((i+1) * batch_size, len(dset))\r\n shard = dset.select(range(start, end))\r\n```\r\n\r\n@marianna13 can you give more details on when it would be handy to have this shard generator ?", "> This is more efficient since it doesn't bring the data in memory:\r\n> \r\n> ```python\r\n> for i in range(len(dset) // batch_size)\r\n> start = i * batch_size\r\n> end = min((i+1) * batch_size, len(dset))\r\n> shard = dset.select(range(start, end))\r\n> ```\r\n> \r\n> @marianna13 can you give more details on when it would be handy to have this shard generator ?\r\n\r\nSure! I used such generator when I needed to process a very large dataset (>1TB) in parallel, I've found out empirically that it's much more efficient to do that by processing only one part of the dataset with the shard generator. I tried to use a map with batching but it causesd oom errors, I tried to use the normal shard and here's what I came up with. So I thought it might be helpful to someone else!", "I see thanks ! `map` should work just fine even at this scale, feel free to open an issue if you'd like to discuss your OOM issue.\r\n\r\nRegarding `shard_generator`, since it is pretty straightforward to get shards I'm not sure we need that extra Dataset method", "Hi again! We've just added `_iter_batches(batch_size)` to the `Dataset` API for fast iteration over batches/chunks, so I think we can close this PR. Compared to this implementation, `_iter_batches` leverages `pa.Table.to_reader` for chunking, which makes it significantly faster." ]
"2022-08-06T09:14:06Z"
"2022-10-03T15:35:10Z"
"2022-10-03T15:35:10Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4798.diff", "html_url": "https://github.com/huggingface/datasets/pull/4798", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4798.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4798" }
Hi everyone! I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that allows splitting these large datasets into equally sized chunks. Even better - to be able to run through these chunks one by one in a simple and convenient way. So I decided to add a method called shard_generator() to the main Dataset class. It works similarly to the shard method, but returns a generator of equally sized datasets (the size is defined by the shard_size attribute). Example: ```python >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="validation") >>> ds Dataset({ features: ['text', 'label'], num_rows: 1066 }) >>> next(ds.shard_generator(300)) Dataset({ features: ['text', 'label'], num_rows: 300 }) ``` I hope it can be helpful to someone. Thanks!
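For reference, a minimal sketch of the reviewer-suggested alternative using the existing `select` API (the dataset and chunk size are taken from the example above; this is not the PR's implementation):

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="validation")
shard_size = 300

# Iterate over equal-size chunks without bringing the data into memory;
# select() only builds an indices mapping over the Arrow table.
for start in range(0, len(ds), shard_size):
    shard = ds.select(range(start, min(start + shard_size, len(ds))))
    print(shard.num_rows)  # 300, 300, 300, 166
```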
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4798/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4798/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2990/comments
https://api.github.com/repos/huggingface/datasets/issues/2990/events
https://github.com/huggingface/datasets/pull/2990
1,012,097,418
PR_kwDODunzps4sgLt5
2,990
Make Dataset.map accept list of np.array
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-09-30T12:08:54Z"
"2021-10-01T13:57:46Z"
"2021-10-01T13:57:46Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2990.diff", "html_url": "https://github.com/huggingface/datasets/pull/2990", "merged_at": "2021-10-01T13:57:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/2990.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2990" }
Fix #2987.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2990/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2990/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5967/comments
https://api.github.com/repos/huggingface/datasets/issues/5967/events
https://github.com/huggingface/datasets/issues/5967
1,763,926,520
I_kwDODunzps5pI2H4
5,967
Config name / split name lost after map with multiproc
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
[]
open
false
null
[]
null
[ "This must be due to DatasetInfo.from_merge which drops them and is used in `concatenate_datasets`.\r\n\r\nAnd you're experiencing this issue because multiprocessing does concatenate the resulting datasets from each process.\r\n\r\nMaybe they should be kept if all the subdatasets share the same values for config_name and split", "That sounds like a clean workaround!" ]
"2023-06-19T17:27:36Z"
"2023-06-28T08:55:25Z"
null
CONTRIBUTOR
null
null
null
### Describe the bug Calling the `.map` method on a dataset loses its config name / split name, but only when run with multiprocessing. ### Steps to reproduce the bug ```python from datasets import Audio, load_dataset from transformers import AutoFeatureExtractor import numpy as np # load dummy dataset libri = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean") # make train / test splits libri = libri["validation"].train_test_split(seed=42, shuffle=True, test_size=0.1) # example feature extractor model_id = "ntu-spml/distilhubert" feature_extractor = AutoFeatureExtractor.from_pretrained(model_id, do_normalize=True, return_attention_mask=True) sampling_rate = feature_extractor.sampling_rate libri = libri.cast_column("audio", Audio(sampling_rate=sampling_rate)) max_duration = 30.0 def preprocess_function(examples): audio_arrays = [x["array"] for x in examples["audio"]] inputs = feature_extractor( audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=int(feature_extractor.sampling_rate * max_duration), truncation=True, return_attention_mask=True, ) return inputs # single proc map libri_encoded = libri.map( preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=1 ) print(10 * "=", "Single processing", 10 * "=") print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split) print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split) # multi proc map libri_encoded = libri.map( preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=2 ) print(10 * "=", "Multi processing", 10 * "=") print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split) print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split) ``` **Print Output:** ``` ========== Single processing ========== Config name before: clean Split name before: validation Config name after: clean Split name after: validation ========== Multi processing ========== Config name before: clean Split name before: validation Config name after: None Split name after: None ``` We can see that the config/split names are lost in the multiprocessing setting. ### Expected behavior Both config / split names should be retained in the multiprocessing setting. ### Environment info - `datasets` version: 2.13.1.dev0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5967/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5967/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3807
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3807/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3807/comments
https://api.github.com/repos/huggingface/datasets/issues/3807/events
https://github.com/huggingface/datasets/issues/3807
1,157,531,812
I_kwDODunzps5E_oik
3,807
NonMatchingChecksumError in xcopa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/93286455?v=4", "events_url": "https://api.github.com/users/afcruzs-ms/events{/privacy}", "followers_url": "https://api.github.com/users/afcruzs-ms/followers", "following_url": "https://api.github.com/users/afcruzs-ms/following{/other_user}", "gists_url": "https://api.github.com/users/afcruzs-ms/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/afcruzs-ms", "id": 93286455, "login": "afcruzs-ms", "node_id": "U_kgDOBY9wNw", "organizations_url": "https://api.github.com/users/afcruzs-ms/orgs", "received_events_url": "https://api.github.com/users/afcruzs-ms/received_events", "repos_url": "https://api.github.com/users/afcruzs-ms/repos", "site_admin": false, "starred_url": "https://api.github.com/users/afcruzs-ms/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/afcruzs-ms/subscriptions", "type": "User", "url": "https://api.github.com/users/afcruzs-ms" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "@albertvillanova here's a separate issue for a bug similar to #3792", "Hi @afcruzs-ms, thanks for opening this separate issue for your problem.\r\n\r\nThe root problem in the other issue (#3792) was a change in the service of Google Drive.\r\n\r\nBut in your case, the `xcopa` dataset is not hosted on Google Drive. Therefore, the root cause should be a different one.\r\n\r\nLet me look at it... ", "@afcruzs-ms, I'm not able to reproduce the issue you reported:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"xcopa\", \"it\")\r\nDownloading builder script: 5.21kB [00:00, 2.75MB/s] \r\nDownloading metadata: 28.6kB [00:00, 14.5MB/s] \r\nDownloading and preparing dataset xcopa/it (download: 627.09 KiB, generated: 76.43 KiB, post-processed: Unknown size, total: 703.52 KiB) to .../.cache/huggingface/datasets/xcopa/it/1.0.0/e1fab65f984b24c8b66bcf7ac27a26a1182f84adfb2e74035861be65e214b9e6...\r\nDownloading data: 642kB [00:00, 5.42MB/s]\r\nDataset xcopa downloaded and prepared to .../.cache/huggingface/datasets/xcopa/it/1.0.0/e1fab65f984b24c8b66bcf7ac27a26a1182f84adfb2e74035861be65e214b9e6. Subsequent calls will reuse this data. \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 733.27it/s]\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n test: Dataset({\r\n features: ['premise', 'choice1', 'choice2', 'question', 'label', 'idx', 'changed'],\r\n num_rows: 500\r\n })\r\n validation: Dataset({\r\n features: ['premise', 'choice1', 'choice2', 'question', 'label', 'idx', 'changed'],\r\n num_rows: 100\r\n })\r\n})\r\n```\r\n\r\nMaybe you have some issue with your cached data... Could you please try to force the redownload of the data?\r\n```python\r\ndataset = load_dataset(\"xcopa\", \"it\", download_mode=\"force_redownload\")\r\n```", "It works indeed, thanks! 
", "unfortunately, i am having a similar problem with the irc_disentaglement dataset :/\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\n\r\nhowever, it produces the same error as @afcruzs-ms \r\n```\r\n[38](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=37) if len(bad_urls) > 0:\r\n [39](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=38) error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> [40](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=39) raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n [41](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=40) logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/jkkummerfeld/irc-disentanglement/tarball/master']\r\n```\r\n\r\nI attempted to use the `ignore_verifications' as such:\r\n```\r\nds = datasets.load_dataset('irc_disentangle', download_mode=\"force_redownload\", ignore_verifications=True)\r\n\r\n```\r\n```\r\nDownloading builder script: 12.0kB [00:00, 5.92MB/s] \r\nDownloading metadata: 7.58kB [00:00, 3.48MB/s] \r\nNo config specified, defaulting to: irc_disentangle/ubuntu\r\nDownloading and preparing dataset irc_disentangle/ubuntu (download: 112.98 MiB, generated: 60.05 MiB, post-processed: Unknown size, total: 173.03 MiB) to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5...\r\nDownloading data: 118MB [00:09, 12.1MB/s] \r\n \r\nDataset irc_disentangle downloaded and prepared to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5. Subsequent calls will reuse this data.\r\n100%|██████████| 3/3 [00:00<00:00, 675.38it/s]\r\n```\r\nbut, this returns an empty set?\r\n\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\n\r\nnot sure what else to try at this point?\r\nThanks in advanced🤗", "Thanks @labouz for reporting: yes, better opening a new GitHub issue as you did. I'm addressing it:\r\n- #4376" ]
"2022-03-02T18:10:19Z"
"2022-05-20T06:00:42Z"
"2022-03-03T17:40:31Z"
NONE
null
null
null
## Describe the bug Loading the xcopa dataset doesn't work: it fails due to a checksum mismatch. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("xcopa", "it") ``` ## Expected results The dataset should be loaded correctly. ## Actual results Fails with: ```python in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/cambridgeltl/xcopa/archive/master.zip'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3, and 1.18.4.dev0 - Platform: - Python version: 3.8 - PyArrow version:
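The resolution confirmed in the discussion above, for quick reference — this assumes the failure comes from stale cached data rather than from the dataset files themselves:

```python
from datasets import load_dataset

# Force a fresh download so stale cached files are not checksum-verified.
dataset = load_dataset("xcopa", "it", download_mode="force_redownload")
```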
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3807/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3807/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4968
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4968/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4968/comments
https://api.github.com/repos/huggingface/datasets/issues/4968/events
https://github.com/huggingface/datasets/pull/4968
1,369,312,877
PR_kwDODunzps4-wKkw
4,968
Support streaming compguesswhat dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-09-12T05:42:24Z"
"2022-09-12T08:00:06Z"
"2022-09-12T07:58:06Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4968.diff", "html_url": "https://github.com/huggingface/datasets/pull/4968", "merged_at": "2022-09-12T07:58:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/4968.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4968" }
Support streaming `compguesswhat` dataset. Fix #3191.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4968/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4968/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5190/comments
https://api.github.com/repos/huggingface/datasets/issues/5190/events
https://github.com/huggingface/datasets/issues/5190
1,433,014,626
I_kwDODunzps5VahFi
5,190
`path` is `None` when downloading a custom audio dataset from the Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
[ "Hi! Yes, this is expected behavior - we do this as a security measure to not leak local paths (this info would be useless on other users' machines anyways) and only push audio bytes. \r\n" ]
"2022-11-02T11:51:25Z"
"2022-11-02T12:55:02Z"
"2022-11-02T12:55:02Z"
MEMBER
null
null
null
### Describe the bug I've created an [audio dataset](https://huggingface.co/datasets/lewtun/audio-test-push) using the `audiofolder` feature described in the [docs](https://huggingface.co/docs/datasets/audio_dataset#audiofolder) and then pushed it to the Hub. Locally, I can see the `audio.path` feature is of the expected form `path/to/data_dir`, but when I download the dataset from the Hub, I see `audio.path` is `None`. Here's an example: ```python from datasets import load_dataset ds = load_dataset("lewtun/audio-test-push") ds["train"][0] # { # "audio": { # "path": None, <-- Is this expected? # "array": array( # [ # 3.97140226e-07, # 7.30310290e-07, # 7.56406735e-07, # ..., # -1.19636677e-01, # -1.16811886e-01, # -1.12441722e-01, # ] # ), # "sampling_rate": 44100, # }, # "song_id": 0, # "genre_id": 0, # "genre": "Electronic", # } ``` Is this expected behaviour? If yes, feel free to close this issue as it's not a true bug then :) ### Steps to reproduce the bug 1. Create an audio dataset with the `audiofolder` feature 2. Push the dataset to the Hub with `push_to_hub()` 3. Download the Hub dataset and inspect the `audio.path` feature ### Expected behavior `audio.path` points to the file associated with the audio data ### Environment info - `datasets` version: 2.6.2.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.1
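Per the maintainer's reply above (paths are intentionally stripped as a security measure, and only audio bytes are pushed), a small sketch of how downstream code can avoid depending on `path` for Hub-hosted audio datasets:

```python
from datasets import load_dataset

ds = load_dataset("lewtun/audio-test-push", split="train")

# `path` is None for Hub-pushed audio; rely on the decoded
# array and sampling rate instead of the original file path.
sample = ds[0]["audio"]
print(sample["sampling_rate"], len(sample["array"]))
```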
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5190/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5190/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5167/comments
https://api.github.com/repos/huggingface/datasets/issues/5167/events
https://github.com/huggingface/datasets/pull/5167
1,424,124,477
PR_kwDODunzps5BljPw
5,167
Add ffmpeg4 installation instructions in warnings
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" } ]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "To make it warn only once, feel free to use a global counter in python - and if the warning has already been done, you don't do it again", "> Added the same formatting for the error message :)\r\n\r\nnice!! thank you! \r\n\r\n> Oh and regarding the warning counter, you can do it in another PR maybe ?\r\n\r\nYes, more warnings is better then no warnings.... I'll merge when the CI passes" ]
"2022-10-26T14:21:14Z"
"2022-10-27T09:01:12Z"
"2022-10-27T08:58:58Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5167.diff", "html_url": "https://github.com/huggingface/datasets/pull/5167", "merged_at": "2022-10-27T08:58:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5167.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5167" }
Adds instructions on how to install `ffmpeg=4` on Linux (relevant for Colab users). It looks pretty ugly because I didn't find a way to check the `ffmpeg` version from Python (without `subprocess.call()`; `ctypes.util.find_library` doesn't work), so the warning is raised on each decoding. Any suggestions on how to make it look nicer are welcome! This is how it looks on Colab: ![image](https://user-images.githubusercontent.com/16348744/198052412-d48018d1-4416-4aa5-9114-f7f9b4af031f.png)
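A minimal sketch of the reviewer's warn-only-once suggestion from the comments above (the helper name is hypothetical, not the merged implementation):

```python
import warnings

_FFMPEG_WARNED = False  # module-level flag acting as the "global counter"

def warn_ffmpeg4_once(message: str) -> None:
    """Emit the ffmpeg4 installation warning at most once per process."""
    global _FFMPEG_WARNED
    if not _FFMPEG_WARNED:
        warnings.warn(message)
        _FFMPEG_WARNED = True
```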
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5167/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5167/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5640/comments
https://api.github.com/repos/huggingface/datasets/issues/5640/events
https://github.com/huggingface/datasets/pull/5640
1,625,896,057
PR_kwDODunzps5MID3I
5,640
Fewer zip false positives
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006998 / 0.011353 (-0.004355) | 0.005093 / 0.011008 (-0.005916) | 0.100490 / 0.038508 (0.061982) | 0.032736 / 0.023109 (0.009627) | 0.297738 / 0.275898 (0.021840) | 0.322255 / 0.323480 (-0.001225) | 0.005583 / 0.007986 (-0.002402) | 0.004007 / 0.004328 (-0.000321) | 0.075863 / 0.004250 (0.071613) | 0.044212 / 0.037052 (0.007159) | 0.300033 / 0.258489 (0.041544) | 0.341997 / 0.293841 (0.048156) | 0.036172 / 0.128546 (-0.092374) | 0.012176 / 0.075646 (-0.063471) | 0.356052 / 0.419271 (-0.063220) | 0.050438 / 0.043533 (0.006905) | 0.294677 / 0.255139 (0.039538) | 0.318050 / 0.283200 (0.034850) | 0.104733 / 0.141683 (-0.036950) | 1.435681 / 1.452155 (-0.016474) | 1.534793 / 1.492716 (0.042076) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242815 / 0.018006 (0.224809) | 0.565983 / 0.000490 (0.565494) | 0.006800 / 0.000200 (0.006600) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026548 / 0.037411 (-0.010863) | 0.104816 / 0.014526 (0.090290) | 0.116222 / 0.176557 (-0.060335) | 0.172143 / 0.737135 (-0.564992) | 0.121631 / 0.296338 (-0.174707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400126 / 0.215209 (0.184917) | 4.004538 / 2.077655 (1.926883) | 1.798822 / 1.504120 (0.294702) | 1.595191 / 1.541195 (0.053996) | 1.645777 / 1.468490 
(0.177287) | 0.705643 / 4.584777 (-3.879134) | 3.750887 / 3.745712 (0.005175) | 2.136547 / 5.269862 (-3.133315) | 1.475881 / 4.565676 (-3.089795) | 0.086921 / 0.424275 (-0.337354) | 0.012379 / 0.007607 (0.004771) | 0.505824 / 0.226044 (0.279779) | 5.052364 / 2.268929 (2.783435) | 2.279983 / 55.444624 (-53.164641) | 1.932253 / 6.876477 (-4.944224) | 2.051359 / 2.142072 (-0.090714) | 0.851906 / 4.805227 (-3.953321) | 0.169566 / 6.500664 (-6.331098) | 0.064600 / 0.075469 (-0.010869) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.165859 / 1.841788 (-0.675929) | 15.049950 / 8.074308 (6.975642) | 14.095981 / 10.191392 (3.904589) | 0.151779 / 0.680424 (-0.528645) | 0.017537 / 0.534201 (-0.516664) | 0.420164 / 0.579283 (-0.159119) | 0.418932 / 0.434364 (-0.015432) | 0.488749 / 0.540337 (-0.051588) | 0.582359 / 1.386936 (-0.804577) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007426 / 0.011353 (-0.003927) | 0.005248 / 0.011008 (-0.005761) | 0.074118 / 0.038508 (0.035610) | 0.034223 / 0.023109 (0.011114) | 0.337780 / 0.275898 (0.061882) | 0.376300 / 0.323480 (0.052820) | 0.006142 / 0.007986 (-0.001843) | 0.004246 / 0.004328 (-0.000083) | 0.074177 / 0.004250 (0.069926) | 0.052698 / 0.037052 (0.015646) | 0.340229 / 0.258489 (0.081740) | 0.396172 / 0.293841 (0.102331) | 0.037293 / 0.128546 (-0.091253) | 0.012514 / 0.075646 (-0.063132) | 0.087144 / 0.419271 (-0.332128) | 0.051922 / 0.043533 (0.008390) | 0.333188 / 0.255139 (0.078049) | 0.355420 / 0.283200 (0.072220) | 0.110273 / 0.141683 (-0.031410) | 1.447826 / 1.452155 (-0.004329) | 1.561135 / 1.492716 (0.068419) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269203 / 0.018006 (0.251197) | 0.551997 / 0.000490 (0.551508) | 0.001558 / 0.000200 (0.001359) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029511 / 0.037411 (-0.007900) | 0.108614 / 0.014526 (0.094089) | 0.123438 / 0.176557 (-0.053118) | 0.171596 / 0.737135 (-0.565539) | 0.126828 / 0.296338 (-0.169511) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420520 / 0.215209 (0.205310) | 4.175672 / 2.077655 (2.098017) | 1.982220 / 1.504120 (0.478101) | 1.788575 / 1.541195 (0.247381) | 1.860840 / 1.468490 (0.392349) | 0.706730 / 4.584777 (-3.878047) | 3.858718 / 3.745712 (0.113005) | 3.069389 / 5.269862 (-2.200472) | 1.827603 / 4.565676 (-2.738073) | 0.087893 / 0.424275 (-0.336382) | 0.012613 / 0.007607 (0.005006) | 0.524177 / 0.226044 (0.298132) | 5.177077 / 2.268929 (2.908148) | 2.494397 / 55.444624 (-52.950227) | 2.189484 / 6.876477 (-4.686992) | 2.217626 / 2.142072 (0.075554) | 0.846326 / 4.805227 (-3.958901) | 0.176558 / 6.500664 (-6.324106) | 0.065018 / 0.075469 (-0.010451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268618 / 1.841788 (-0.573170) | 15.132711 / 8.074308 (7.058403) | 14.585530 / 10.191392 (4.394138) | 0.163454 / 0.680424 (-0.516970) | 0.017442 / 0.534201 (-0.516759) | 0.421746 / 0.579283 (-0.157537) | 0.425412 / 0.434364 (-0.008952) | 0.499178 / 0.540337 (-0.041159) | 0.595458 / 1.386936 (-0.791478) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ab77e58cd32413f4ef4828134a2470ebd53bb542 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007980 / 0.011353 (-0.003373) | 0.005414 / 0.011008 (-0.005594) | 0.099226 / 0.038508 (0.060718) | 0.035442 / 0.023109 (0.012332) | 0.304851 / 0.275898 (0.028952) | 0.337144 / 0.323480 (0.013664) | 0.006162 / 0.007986 (-0.001823) | 0.004151 / 0.004328 (-0.000177) | 0.074708 / 0.004250 (0.070458) | 0.049690 / 0.037052 (0.012638) | 0.307658 / 0.258489 (0.049168) | 0.358472 / 0.293841 (0.064631) | 0.037181 / 0.128546 (-0.091365) | 0.012259 / 0.075646 (-0.063387) | 0.335426 / 0.419271 (-0.083846) | 0.050790 / 0.043533 (0.007257) | 0.301715 / 0.255139 (0.046576) | 0.320834 / 0.283200 (0.037634) | 0.102357 / 0.141683 (-0.039326) | 1.454750 / 1.452155 (0.002596) | 1.571994 / 1.492716 (0.079278) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218708 / 0.018006 (0.200702) | 0.444391 / 0.000490 (0.443901) | 0.005717 / 0.000200 (0.005517) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028017 / 0.037411 (-0.009395) | 0.112753 / 0.014526 (0.098227) | 0.121003 / 0.176557 (-0.055554) | 0.181085 / 0.737135 (-0.556050) | 0.127211 / 0.296338 (-0.169127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400803 / 0.215209 (0.185594) | 4.007315 / 2.077655 (1.929660) | 1.826911 / 1.504120 (0.322791) | 1.637799 / 1.541195 (0.096605) | 1.699754 / 1.468490 (0.231264) | 0.709413 / 4.584777 (-3.875364) | 4.008904 / 3.745712 (0.263192) | 3.916540 / 5.269862 (-1.353322) | 1.902102 / 4.565676 (-2.663575) | 0.089048 / 0.424275 (-0.335227) | 0.012763 / 0.007607 (0.005155) | 0.498957 / 0.226044 (0.272913) | 4.979865 / 2.268929 (2.710937) | 2.301987 / 55.444624 (-53.142637) | 1.929404 / 6.876477 (-4.947073) | 2.107839 / 2.142072 (-0.034233) | 0.857253 / 4.805227 (-3.947974) | 0.171935 / 6.500664 (-6.328729) | 0.066753 / 0.075469 (-0.008716) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.186811 / 1.841788 (-0.654977) | 15.866319 / 8.074308 (7.792011) | 14.738555 / 10.191392 (4.547163) | 0.142879 / 0.680424 (-0.537544) | 0.017679 / 0.534201 (-0.516522) | 0.422840 / 0.579283 (-0.156443) | 0.450307 / 0.434364 (0.015943) | 0.491802 / 0.540337 (-0.048536) | 0.588837 / 
1.386936 (-0.798099) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007659 / 0.011353 (-0.003694) | 0.005331 / 0.011008 (-0.005678) | 0.075360 / 0.038508 (0.036852) | 0.034011 / 0.023109 (0.010902) | 0.354488 / 0.275898 (0.078590) | 0.401781 / 0.323480 (0.078301) | 0.005806 / 0.007986 (-0.002179) | 0.004029 / 0.004328 (-0.000300) | 0.073822 / 0.004250 (0.069572) | 0.049067 / 0.037052 (0.012015) | 0.364483 / 0.258489 (0.105994) | 0.405637 / 0.293841 (0.111796) | 0.037166 / 0.128546 (-0.091380) | 0.012397 / 0.075646 (-0.063249) | 0.087346 / 0.419271 (-0.331926) | 0.050888 / 0.043533 (0.007355) | 0.334796 / 0.255139 (0.079657) | 0.387681 / 0.283200 (0.104481) | 0.105056 / 0.141683 (-0.036627) | 1.471630 / 1.452155 (0.019475) | 1.554764 / 1.492716 (0.062047) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231825 / 0.018006 (0.213819) | 0.449746 / 0.000490 (0.449256) | 0.000888 / 0.000200 (0.000688) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030363 / 0.037411 (-0.007049) | 0.115234 / 0.014526 (0.100708) | 0.123005 / 0.176557 (-0.053551) | 0.172772 / 0.737135 (-0.564363) | 0.127818 / 0.296338 (-0.168520) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425761 / 0.215209 (0.210552) | 4.237950 / 2.077655 (2.160295) | 1.992045 / 1.504120 (0.487925) | 1.801622 / 1.541195 (0.260427) | 1.918477 / 1.468490 (0.449987) | 
0.722730 / 4.584777 (-3.862047) | 4.015968 / 3.745712 (0.270256) | 3.720412 / 5.269862 (-1.549450) | 1.763111 / 4.565676 (-2.802566) | 0.089041 / 0.424275 (-0.335234) | 0.012608 / 0.007607 (0.005001) | 0.522645 / 0.226044 (0.296601) | 5.227108 / 2.268929 (2.958180) | 2.444714 / 55.444624 (-52.999910) | 2.109745 / 6.876477 (-4.766732) | 2.194042 / 2.142072 (0.051969) | 0.871781 / 4.805227 (-3.933447) | 0.173149 / 6.500664 (-6.327515) | 0.066192 / 0.075469 (-0.009277) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.312051 / 1.841788 (-0.529737) | 16.024315 / 8.074308 (7.950007) | 15.123823 / 10.191392 (4.932431) | 0.163997 / 0.680424 (-0.516427) | 0.017595 / 0.534201 (-0.516606) | 0.426379 / 0.579283 (-0.152904) | 0.467709 / 0.434364 (0.033345) | 0.498308 / 0.540337 (-0.042030) | 0.591426 / 1.386936 (-0.795510) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#13488cc110b67090289794f48d5c84a4fd0c063a \"CML watermark\")\n", "CI is failing due to unrelated issues, hopefully https://github.com/huggingface/datasets/pull/5642 fixes it", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006478 / 0.011353 (-0.004875) | 0.004347 / 0.011008 (-0.006661) | 0.097103 / 0.038508 (0.058595) | 0.027650 / 0.023109 (0.004541) | 0.372355 / 0.275898 (0.096457) | 0.408794 / 0.323480 (0.085314) | 0.005034 / 0.007986 (-0.002952) | 0.003252 / 0.004328 (-0.001076) | 0.074068 / 0.004250 (0.069818) | 0.035542 / 0.037052 (-0.001510) | 0.367392 / 0.258489 (0.108903) | 0.409644 / 0.293841 (0.115803) | 0.031745 / 0.128546 (-0.096801) | 0.011501 / 0.075646 (-0.064145) | 0.323355 / 0.419271 (-0.095917) | 0.043065 / 0.043533 (-0.000467) | 0.377313 / 0.255139 (0.122174) | 0.395326 / 0.283200 (0.112127) | 0.087101 / 0.141683 (-0.054582) | 1.461228 / 1.452155 (0.009073) | 1.529413 / 1.492716 (0.036696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.199245 / 0.018006 (0.181239) | 0.409978 / 0.000490 (0.409488) | 0.002655 / 0.000200 (0.002455) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023903 / 0.037411 (-0.013508) | 0.097855 / 0.014526 (0.083330) | 0.106405 / 0.176557 (-0.070152) | 0.166889 / 0.737135 (-0.570247) | 0.110256 / 0.296338 (-0.186082) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440351 / 0.215209 (0.225142) | 4.382848 / 2.077655 (2.305194) | 2.049602 / 1.504120 (0.545482) | 1.824638 / 1.541195 (0.283443) | 1.850519 / 1.468490 (0.382029) | 0.702652 / 4.584777 (-3.882125) | 3.394571 / 3.745712 (-0.351141) | 1.940608 / 5.269862 (-3.329254) | 1.263961 / 4.565676 (-3.301716) | 0.083985 / 0.424275 (-0.340290) | 0.013046 / 0.007607 (0.005439) | 0.538272 / 0.226044 (0.312228) | 5.407563 / 2.268929 (3.138634) | 2.519207 / 55.444624 (-52.925418) | 2.153379 / 6.876477 (-4.723098) | 2.394512 / 2.142072 (0.252439) | 0.812840 / 4.805227 (-3.992387) | 0.152868 / 6.500664 (-6.347796) | 0.067823 / 0.075469 (-0.007646) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220031 / 1.841788 (-0.621757) | 13.781237 / 8.074308 (5.706929) | 14.203975 / 10.191392 (4.012583) | 0.141077 / 0.680424 (-0.539347) | 0.016518 / 0.534201 (-0.517682) | 0.379079 / 0.579283 (-0.200204) | 0.378916 / 0.434364 (-0.055448) | 0.434589 / 0.540337 (-0.105749) | 0.521129 / 1.386936 (-0.865807) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006997 / 0.011353 (-0.004356) | 0.004599 / 0.011008 (-0.006410) | 0.078700 / 0.038508 (0.040192) | 0.027902 / 0.023109 (0.004793) | 0.344406 / 0.275898 (0.068508) | 0.392918 / 0.323480 (0.069438) | 0.005175 / 0.007986 (-0.002811) | 0.004755 / 0.004328 (0.000427) | 0.077707 / 0.004250 (0.073457) | 0.039409 / 0.037052 (0.002357) | 0.343250 / 0.258489 (0.084761) | 0.405544 / 0.293841 (0.111703) | 0.032286 / 0.128546 (-0.096260) | 0.011674 / 0.075646 (-0.063972) | 0.087633 / 0.419271 (-0.331639) | 0.043346 / 0.043533 (-0.000186) | 0.355076 / 0.255139 (0.099937) | 0.382155 / 0.283200 (0.098955) | 0.090914 / 0.141683 (-0.050769) | 1.518369 / 1.452155 (0.066215) | 1.583530 / 1.492716 (0.090813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.160369 / 0.018006 (0.142362) | 0.406844 / 0.000490 (0.406354) | 0.002651 / 0.000200 (0.002451) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025295 / 0.037411 (-0.012116) | 0.101490 / 0.014526 (0.086964) | 0.108825 / 0.176557 (-0.067732) | 0.161673 / 0.737135 (-0.575462) | 0.113610 / 0.296338 (-0.182729) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443514 / 0.215209 (0.228305) | 4.436722 / 2.077655 (2.359067) | 2.144008 / 1.504120 (0.639888) | 2.005324 / 1.541195 (0.464129) | 2.123356 / 1.468490 (0.654866) | 0.697217 / 4.584777 (-3.887560) | 3.401105 / 3.745712 (-0.344607) | 1.874621 / 5.269862 (-3.395240) | 1.165069 / 4.565676 (-3.400608) | 0.082799 / 0.424275 (-0.341476) | 0.012806 / 0.007607 (0.005199) | 0.542688 / 0.226044 (0.316644) | 5.420963 / 2.268929 (3.152034) | 2.579034 / 55.444624 (-52.865590) | 2.240201 / 6.876477 (-4.636276) | 2.261309 / 2.142072 (0.119237) | 0.800246 / 4.805227 (-4.004981) | 0.150380 / 6.500664 (-6.350285) | 0.066880 / 0.075469 (-0.008589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281721 / 1.841788 (-0.560067) | 13.906361 / 8.074308 (5.832053) | 14.135336 / 10.191392 (3.943944) | 0.128865 / 0.680424 (-0.551559) | 0.016452 / 0.534201 (-0.517749) | 0.373563 / 0.579283 (-0.205720) | 0.385321 / 0.434364 (-0.049043) | 0.437198 / 0.540337 
(-0.103139) | 0.530720 / 1.386936 (-0.856216) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e2f8e17f3c8f8d0cb77a4c566a78e31fab47108c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008099 / 0.011353 (-0.003254) | 0.005093 / 0.011008 (-0.005916) | 0.106258 / 0.038508 (0.067750) | 0.037051 / 0.023109 (0.013942) | 0.347960 / 0.275898 (0.072062) | 0.370849 / 0.323480 (0.047369) | 0.006122 / 0.007986 (-0.001863) | 0.004094 / 0.004328 (-0.000235) | 0.079549 / 0.004250 (0.075299) | 0.046563 / 0.037052 (0.009510) | 0.332735 / 0.258489 (0.074246) | 0.417061 / 0.293841 (0.123220) | 0.038105 / 0.128546 (-0.090441) | 0.011886 / 0.075646 (-0.063760) | 0.342103 / 0.419271 (-0.077169) | 0.053233 / 0.043533 (0.009700) | 0.344754 / 0.255139 (0.089615) | 0.355354 / 0.283200 (0.072155) | 0.101059 / 0.141683 (-0.040624) | 1.518561 / 1.452155 (0.066406) | 1.558652 / 1.492716 (0.065935) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225919 / 0.018006 (0.207913) | 0.518539 / 0.000490 (0.518049) | 0.006230 / 0.000200 (0.006030) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026782 / 0.037411 (-0.010629) | 0.108457 / 0.014526 (0.093931) | 0.125203 / 0.176557 (-0.051353) | 0.175726 / 0.737135 (-0.561409) | 0.127051 / 0.296338 (-0.169287) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| 
new / old (diff) | 0.416427 / 0.215209 (0.201217) | 4.168851 / 2.077655 (2.091196) | 1.962238 / 1.504120 (0.458118) | 1.825224 / 1.541195 (0.284029) | 1.831200 / 1.468490 (0.362710) | 0.765526 / 4.584777 (-3.819250) | 4.303957 / 3.745712 (0.558245) | 2.193467 / 5.269862 (-3.076395) | 1.654605 / 4.565676 (-2.911071) | 0.096709 / 0.424275 (-0.327566) | 0.013792 / 0.007607 (0.006185) | 0.537862 / 0.226044 (0.311818) | 5.152230 / 2.268929 (2.883302) | 2.520938 / 55.444624 (-52.923686) | 2.108422 / 6.876477 (-4.768054) | 2.214220 / 2.142072 (0.072147) | 0.834320 / 4.805227 (-3.970907) | 0.170635 / 6.500664 (-6.330029) | 0.063131 / 0.075469 (-0.012338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215767 / 1.841788 (-0.626020) | 15.254781 / 8.074308 (7.180473) | 14.360764 / 10.191392 (4.169372) | 0.172511 / 0.680424 (-0.507913) | 0.020161 / 0.534201 (-0.514040) | 0.426936 / 0.579283 (-0.152347) | 0.438771 / 0.434364 (0.004407) | 0.486973 / 0.540337 (-0.053364) | 0.584238 / 1.386936 (-0.802698) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006777 / 0.011353 (-0.004576) | 0.005304 / 0.011008 (-0.005704) | 0.073717 / 0.038508 (0.035209) | 0.033604 / 0.023109 (0.010494) | 0.340448 / 0.275898 (0.064550) | 0.351861 / 0.323480 (0.028381) | 0.005786 / 0.007986 (-0.002199) | 0.005013 / 0.004328 (0.000685) | 0.071263 / 0.004250 (0.067012) | 0.048189 / 0.037052 (0.011137) | 0.339457 / 0.258489 (0.080968) | 0.384383 / 0.293841 (0.090542) | 0.035563 / 0.128546 (-0.092983) | 0.011509 / 0.075646 (-0.064137) | 0.083722 / 0.419271 (-0.335550) | 0.048886 / 0.043533 (0.005353) | 0.350184 / 0.255139 (0.095045) | 0.361037 / 0.283200 (0.077837) | 0.105191 / 0.141683 (-0.036492) | 1.503247 / 1.452155 (0.051093) | 1.582298 / 1.492716 (0.089581) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221687 / 0.018006 (0.203681) | 0.466489 / 0.000490 (0.465999) | 0.000484 / 0.000200 
(0.000284) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027978 / 0.037411 (-0.009434) | 0.119572 / 0.014526 (0.105047) | 0.133530 / 0.176557 (-0.043026) | 0.177892 / 0.737135 (-0.559243) | 0.127045 / 0.296338 (-0.169294) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430198 / 0.215209 (0.214989) | 4.435512 / 2.077655 (2.357858) | 2.007183 / 1.504120 (0.503063) | 1.799230 / 1.541195 (0.258036) | 1.884750 / 1.468490 (0.416260) | 0.745232 / 4.584777 (-3.839545) | 4.088069 / 3.745712 (0.342357) | 4.114669 / 5.269862 (-1.155193) | 2.374086 / 4.565676 (-2.191590) | 0.089154 / 0.424275 (-0.335121) | 0.012938 / 0.007607 (0.005331) | 0.505954 / 0.226044 (0.279909) | 5.194226 / 2.268929 (2.925298) | 2.487230 / 55.444624 (-52.957394) | 2.163353 / 6.876477 (-4.713124) | 2.177879 / 2.142072 (0.035807) | 0.828728 / 4.805227 (-3.976499) | 0.171157 / 6.500664 (-6.329507) | 0.062883 / 0.075469 (-0.012586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275906 / 1.841788 (-0.565882) | 15.235484 / 8.074308 (7.161176) | 14.467396 / 10.191392 (4.276004) | 0.198994 / 0.680424 (-0.481430) | 0.020203 / 0.534201 (-0.513998) | 0.447904 / 0.579283 (-0.131380) | 0.454210 / 0.434364 (0.019846) | 0.528062 / 0.540337 (-0.012275) | 0.619311 / 1.386936 (-0.767625) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#11cd0f73acbce1d16174f2555e56fda511d5a08b \"CML watermark\")\n" ]
"2023-03-15T16:48:59Z"
"2023-03-16T13:47:37Z"
"2023-03-16T13:40:12Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5640.diff", "html_url": "https://github.com/huggingface/datasets/pull/5640", "merged_at": "2023-03-16T13:40:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5640.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5640" }
`zipfile.is_zipfile` returns false positives for some Parquet files. It causes errors when loading certain Parquet datasets, where some files are considered ZIP files by `zipfile.is_zipfile`. This is a known issue: https://github.com/python/cpython/issues/72680 At first I wanted to rely only on magic numbers, but then I found that someone contributed a [fix to is_zipfile](https://github.com/python/cpython/pull/5053) - do you think we should use it, @albertvillanova, or not? IMO it's ok to rely on magic numbers only for now, since in streaming mode we've had no issue checking only the magic number so far. Close https://github.com/huggingface/datasets/issues/5639
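For context, a minimal sketch of the magic-number approach described above; the helper name `is_zip_by_magic_number` is hypothetical and may differ from what this PR actually merges:

```python
def is_zip_by_magic_number(path: str) -> bool:
    # ZIP archives start with one of three 4-byte signatures:
    # b"PK\x03\x04" (local file header), b"PK\x05\x06" (empty archive),
    # b"PK\x07\x08" (spanned archive). Checking only these leading bytes
    # avoids zipfile.is_zipfile's deeper scan of the file, which is what
    # misdetects some Parquet files as ZIP archives.
    with open(path, "rb") as f:
        magic = f.read(4)
    return magic in (b"PK\x03\x04", b"PK\x05\x06", b"PK\x07\x08")
```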
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5640/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5640/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3307/comments
https://api.github.com/repos/huggingface/datasets/issues/3307/events
https://github.com/huggingface/datasets/pull/3307
1,059,226,297
PR_kwDODunzps4uzlWa
3,307
Add IndoNLI dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/6201626?v=4", "events_url": "https://api.github.com/users/afaji/events{/privacy}", "followers_url": "https://api.github.com/users/afaji/followers", "following_url": "https://api.github.com/users/afaji/following{/other_user}", "gists_url": "https://api.github.com/users/afaji/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/afaji", "id": 6201626, "login": "afaji", "node_id": "MDQ6VXNlcjYyMDE2MjY=", "organizations_url": "https://api.github.com/users/afaji/orgs", "received_events_url": "https://api.github.com/users/afaji/received_events", "repos_url": "https://api.github.com/users/afaji/repos", "site_admin": false, "starred_url": "https://api.github.com/users/afaji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/afaji/subscriptions", "type": "User", "url": "https://api.github.com/users/afaji" }
[]
closed
false
null
[]
null
[ "@lhoestq thanks for the review! I've modified the labels to follow other NLI datasets.\r\nPlease review my change and let me know if I miss anything." ]
"2021-11-20T20:46:03Z"
"2021-11-25T14:51:48Z"
"2021-11-25T14:51:48Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3307.diff", "html_url": "https://github.com/huggingface/datasets/pull/3307", "merged_at": "2021-11-25T14:51:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/3307.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3307" }
This PR adds the IndoNLI dataset, from https://aclanthology.org/2021.emnlp-main.821/
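Once merged, the dataset should be loadable from the Hub; a hedged usage sketch (the `indonli` name and split layout are assumptions to be checked against the dataset card):

```python
from datasets import load_dataset

# Assumes the dataset is published under the "indonli" name; the paper
# describes train/validation splits plus lay and expert test sets.
dataset = load_dataset("indonli")
print(dataset)
```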
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3307/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3307/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3700/comments
https://api.github.com/repos/huggingface/datasets/issues/3700/events
https://github.com/huggingface/datasets/issues/3700
1,130,252,496
I_kwDODunzps5DXkjQ
3,700
Unable to load a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/97964230?v=4", "events_url": "https://api.github.com/users/PaulchauvinAI/events{/privacy}", "followers_url": "https://api.github.com/users/PaulchauvinAI/followers", "following_url": "https://api.github.com/users/PaulchauvinAI/following{/other_user}", "gists_url": "https://api.github.com/users/PaulchauvinAI/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulchauvinAI", "id": 97964230, "login": "PaulchauvinAI", "node_id": "U_kgDOBdbQxg", "organizations_url": "https://api.github.com/users/PaulchauvinAI/orgs", "received_events_url": "https://api.github.com/users/PaulchauvinAI/received_events", "repos_url": "https://api.github.com/users/PaulchauvinAI/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulchauvinAI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulchauvinAI/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulchauvinAI" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi! `load_dataset` is intended to be used to load a canonical dataset (`wikipedia`), a packaged dataset (`csv`, `json`, ...) or a dataset hosted on the Hub. For local datasets saved with `save_to_disk(\"path/to/dataset\")`, use `load_from_disk(\"path/to/dataset\")`.", "Maybe we should raise an informative error message in this case..." ]
"2022-02-10T15:05:53Z"
"2022-02-11T22:56:39Z"
"2022-02-11T22:56:39Z"
NONE
null
null
null
## Describe the bug Unable to load a dataset from Hugging Face that I have just saved. ## Steps to reproduce the bug On Google Colab `!pip install datasets` `from datasets import load_dataset` `my_path = "wiki_dataset"` `dataset = load_dataset('wikipedia', "20200501.fr")` `dataset.save_to_disk(my_path)` `dataset = load_dataset(my_path)` ## Expected results Loading the dataset. ## Actual results ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string _fingerprint: string _format_columns: null _format_kwargs: struct<> _format_type: null _indexes: struct<> _output_all_columns: bool _split: string to {'builder_name': Value(dtype='string', id=None), 'citation': Value(dtype='string', id=None), 'config_name': Value(dtype='string', id=None), 'dataset_size': Value(dtype='int64', id=None), 'description': Value(dtype='string', id=None), 'download_checksums': {}, 'download_size': Value(dtype='int64', id=None), 'features': {'title': {'dtype': Value(dtype='string', id=None), 'id': Value(dtype='null', id=None), '_type': Value(dtype='string', id=None)}, 'text': {'dtype': Value(dtype='string', id=None), 'id': Value(dtype='null', id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'post_processed': Value(dtype='null', id=None), 'post_processing_size': Value(dtype='null', id=None), 'size_in_bytes': Value(dtype='int64', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='string', id=None)}}, 'supervised_keys': Value(dtype='null', id=None), 'task_templates': Value(dtype='null', id=None), 'version': {'version_str': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'major': Value(dtype='int64', id=None), 'minor': Value(dtype='int64', id=None), 'patch': Value(dtype='int64', id=None)}} because column names don't match ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 6.0.1
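As the comment above points out, the fix on the user side is to reload with `load_from_disk`; a minimal corrected sketch (the Wikipedia config name is kept from the report):

```python
from datasets import load_dataset, load_from_disk

my_path = "wiki_dataset"

dataset = load_dataset("wikipedia", "20200501.fr")
# save_to_disk writes Arrow files plus dataset state that load_dataset
# does not understand; the matching loader is load_from_disk.
dataset.save_to_disk(my_path)

reloaded = load_from_disk(my_path)  # instead of load_dataset(my_path)
```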
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3700/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3700/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3302/comments
https://api.github.com/repos/huggingface/datasets/issues/3302/events
https://github.com/huggingface/datasets/pull/3302
1,058,907,168
PR_kwDODunzps4uynjc
3,302
fix old_val typo in f-string
{ "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehdi2402", "id": 56029953, "login": "Mehdi2402", "node_id": "MDQ6VXNlcjU2MDI5OTUz", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehdi2402" }
[]
closed
false
null
[]
null
[]
"2021-11-19T20:51:08Z"
"2021-11-25T22:14:43Z"
"2021-11-22T17:04:19Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3302.diff", "html_url": "https://github.com/huggingface/datasets/pull/3302", "merged_at": "2021-11-22T17:04:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/3302.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3302" }
This PR is to correct a typo in #3277 that @Carlosbogo revealed in a comment. Related closed issue: #3257 Sorry about that 😅.
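For readers without the diff at hand, a hypothetical illustration of this class of f-string typo (the actual variable names in the patched code may differ):

```python
old_val, new_val = "train", "validation"

# Buggy: the second placeholder names the wrong variable, so the message
# repeats old_val instead of showing the replacement value.
print(f"Replacing {old_val} with {old_val}")

# Fixed:
print(f"Replacing {old_val} with {new_val}")
```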
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3302/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3302/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4981/comments
https://api.github.com/repos/huggingface/datasets/issues/4981/events
https://github.com/huggingface/datasets/issues/4981
1,375,086,773
I_kwDODunzps5R9ii1
4,981
Can't create a dataset with `float16` features
{ "avatar_url": "https://avatars.githubusercontent.com/u/15098095?v=4", "events_url": "https://api.github.com/users/dconathan/events{/privacy}", "followers_url": "https://api.github.com/users/dconathan/followers", "following_url": "https://api.github.com/users/dconathan/following{/other_user}", "gists_url": "https://api.github.com/users/dconathan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dconathan", "id": 15098095, "login": "dconathan", "node_id": "MDQ6VXNlcjE1MDk4MDk1", "organizations_url": "https://api.github.com/users/dconathan/orgs", "received_events_url": "https://api.github.com/users/dconathan/received_events", "repos_url": "https://api.github.com/users/dconathan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dconathan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dconathan/subscriptions", "type": "User", "url": "https://api.github.com/users/dconathan" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi @dconathan, thanks for reporting.\r\n\r\nWe rely on Arrow as a backend, and as far as I know currently support for `float16` in Arrow is not fully implemented in Python (C++), hence the `ArrowNotImplementedError` you get.\r\n\r\nSee, e.g.: https://arrow.apache.org/docs/status.html?highlight=float16#data-types", "Thanks for the link…. didn’t realize arrow didn’t support it yet. Should it be removed from https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Value until Arrow supports it?", "Yes, you are right: maybe we should either remove it from our docs or add a comment explaining the issue.\r\n\r\nThe thing is that in Arrow it is partially supported: you can create `float16` values, but you can't cast them from/to other types. And current implementation of `Value` always tries to perform a cast from `float64` to `float16`.", "Maybe we can just add a note in the `Value` documentation ?", "Would you accept a PR to fix this? @lhoestq Do you have an idea of how hard it would be to fix?", "I think the issue comes mostly from pyarrow not supporting `float16` completely.\r\n\r\nFor example you stil can't cast from/to `float16`\r\n```python\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\npa.array(range(5)).cast(pa.float16())\r\n# ArrowNotImplementedError: Unsupported cast from int64 to halffloat using function cast_half_float\r\npa.array(range(5), pa.float32()).cast(pa.float16())\r\n# ArrowNotImplementedError: Unsupported cast from float to halffloat using function cast_half_float\r\npa.array(range(5), pa.float16())\r\n# ArrowTypeError: Expected np.float16 instance\r\npa.array(np.arange(5, dtype=np.float16())).cast(pa.float32())\r\n# ArrowNotImplementedError: Unsupported cast from halffloat to float using function cast_float\r\n```", "Hmm it seems like we can either:\r\n1. try to fix pyarrow upstream\r\n2. half-support float16 with some workaround to make sure we don't ever do casting internally\r\n" ]
"2022-09-15T21:03:24Z"
"2023-03-22T21:40:09Z"
null
CONTRIBUTOR
null
null
null
## Describe the bug I can't create a dataset with `float16` features. I understand from the traceback that this is a `pyarrow` error, but I don't see anywhere in the `datasets` documentation about how to successfully do this. Is it actually supported? I've tried older versions of `pyarrow` as well with the same exact error. The bug seems to arise from `datasets` casting the values to `double` and then `pyarrow` doesn't know how to convert those back to `float16`... does that sound right? Is there a way to bypass this since it's not necessary in the `numpy` and `torch` cases? Thanks! ## Steps to reproduce the bug All of the following raise the following error with the same exact (as far as I can tell) traceback: ```python ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float ``` ```python from datasets import Dataset, Features, Value Dataset.from_dict({"x": [0.0, 1.0, 2.0]}, features=Features(x=Value("float16"))) import numpy as np Dataset.from_dict({"x": np.arange(3, dtype=np.float16)}, features=Features(x=Value("float16"))) import torch Dataset.from_dict({"x": torch.arange(3).to(torch.float16)}, features=Features(x=Value("float16"))) ``` ## Expected results A dataset with `float16` features is successfully created. ## Actual results ```python --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) Cell In [14], line 1 ----> 1 Dataset.from_dict({"x": [1.0, 2.0, 3.0]}, features=Features(x=Value("float16"))) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split) 865 mapping = features.encode_batch(mapping) 866 mapping = { 867 col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col) 868 for col, data in mapping.items() 869 } --> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping) 871 if info.features is None: 872 info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()}) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs) 734 @classmethod 735 def from_pydict(cls, *args, **kwargs): 736 """ 737 Construct a Table from Arrow arrays or columns 738 (...) 748 :class:`datasets.table.Table`: 749 """ --> 750 return cls(pa.Table.from_pydict(*args, **kwargs)) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:3648, in pyarrow.lib.Table.from_pydict() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:5174, in pyarrow.lib._from_pydict() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:343, in pyarrow.lib.asarray() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:231, in pyarrow.lib.array() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type) 192 # otherwise we can finally use the user's type 193 elif type is not None: 194 # We use cast_array_to_feature to support casting to custom types like Audio and Image 195 # Also, when trying type "string", we don't want to convert integers or floats to "string". 196 # We only do it if trying_type is False - since this is what the user asks for. 
--> 197 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) 198 return out 199 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs) 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1682 else: -> 1683 return func(array, *args, **kwargs) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str) 1851 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str) 1852 elif not isinstance(feature, (Sequence, dict, list, tuple)): -> 1853 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) 1854 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs) 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1682 else: -> 1683 return func(array, *args, **kwargs) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1762, in array_cast(array, pa_type, allow_number_to_str) 1760 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type): 1761 raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}") -> 1762 return array.cast(pa_type) 1763 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}") File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:919, in pyarrow.lib.Array.cast() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/compute.py:389, in cast(arr, target_type, safe, options) 387 else: 388 options = CastOptions.safe(target_type) --> 389 return call_function("cast", [arr], options) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:560, in pyarrow._compute.call_function() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:355, in pyarrow._compute.Function.call() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status() ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float ``` ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
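Until pyarrow implements `float16` casts, one workaround sketch (an assumption on my part, not an official recommendation) is to keep the column as `float32` in Arrow and downcast only after reading:

```python
import numpy as np
from datasets import Dataset, Features, Value

# float32 round-trips through Arrow without triggering the unsupported
# double -> halffloat cast that float16 currently raises.
ds = Dataset.from_dict(
    {"x": np.arange(3, dtype=np.float32)},
    features=Features(x=Value("float32")),
)

# Downcast outside Arrow, e.g. right before feeding a model.
batch = np.asarray(ds["x"], dtype=np.float16)
```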
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4981/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4981/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1393
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1393/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1393/comments
https://api.github.com/repos/huggingface/datasets/issues/1393/events
https://github.com/huggingface/datasets/pull/1393
760,436,267
MDExOlB1bGxSZXF1ZXN0NTM1MjY4MjUx
1,393
Add script_version suggestion when dataset/metric not found
{ "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joeddav", "id": 9353833, "login": "joeddav", "node_id": "MDQ6VXNlcjkzNTM4MzM=", "organizations_url": "https://api.github.com/users/joeddav/orgs", "received_events_url": "https://api.github.com/users/joeddav/received_events", "repos_url": "https://api.github.com/users/joeddav/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "type": "User", "url": "https://api.github.com/users/joeddav" }
[]
closed
false
null
[]
null
[]
"2020-12-09T15:37:38Z"
"2020-12-10T18:17:05Z"
"2020-12-10T18:17:05Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1393.diff", "html_url": "https://github.com/huggingface/datasets/pull/1393", "merged_at": "2020-12-10T18:17:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/1393.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1393" }
Adds a helpful prompt to the error message when a dataset/metric is not found, suggesting the user might need to pass `script_version="master"` if the dataset was added recently. The whole error looks like: > Couldn't find file locally at blah/blah.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1/metrics/blah/blah.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/metrics/blah/blah.py. If the dataset was added recently, you may need to pass script_version="master" to find the loading script on the master branch.
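A hedged usage sketch of the parameter the new message points to ("blah" is a placeholder dataset name; in later releases this parameter was renamed `revision`):

```python
from datasets import load_dataset

# For a dataset added after the installed release, the loading script only
# exists on the master branch of the repository, hence the suggestion:
dataset = load_dataset("blah", script_version="master")
```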
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1393/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1393/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6208/comments
https://api.github.com/repos/huggingface/datasets/issues/6208/events
https://github.com/huggingface/datasets/pull/6208
1,879,572,646
PR_kwDODunzps5ZcnpJ
6,208
Do not filter out .zip extensions from no-script datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006797 / 0.011353 (-0.004556) | 0.003966 / 0.011008 (-0.007042) | 0.085296 / 0.038508 (0.046788) | 0.076873 / 0.023109 (0.053764) | 0.355795 / 0.275898 (0.079897) | 0.397132 / 0.323480 (0.073652) | 0.005325 / 0.007986 (-0.002660) | 0.003343 / 0.004328 (-0.000986) | 0.064966 / 0.004250 (0.060716) | 0.054519 / 0.037052 (0.017467) | 0.357864 / 0.258489 (0.099374) | 0.409238 / 0.293841 (0.115397) | 0.031620 / 0.128546 (-0.096926) | 0.008529 / 0.075646 (-0.067117) | 0.288502 / 0.419271 (-0.130769) | 0.053260 / 0.043533 (0.009728) | 0.355245 / 0.255139 (0.100106) | 0.384139 / 0.283200 (0.100939) | 0.024507 / 0.141683 (-0.117176) | 1.494696 / 1.452155 (0.042541) | 1.579847 / 1.492716 (0.087130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204011 / 0.018006 (0.186005) | 0.451729 / 0.000490 (0.451239) | 0.004628 / 0.000200 (0.004428) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028342 / 0.037411 (-0.009069) | 0.084647 / 0.014526 (0.070121) | 0.096174 / 0.176557 (-0.080383) | 0.151753 / 0.737135 (-0.585382) | 0.096347 / 0.296338 (-0.199991) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.387179 / 0.215209 (0.171970) | 3.861552 / 2.077655 (1.783898) | 1.844033 / 1.504120 (0.339913) | 1.678811 / 1.541195 (0.137616) | 1.793207 / 1.468490 
(0.324717) | 0.485836 / 4.584777 (-4.098941) | 3.566274 / 3.745712 (-0.179438) | 3.269888 / 5.269862 (-1.999974) | 2.042850 / 4.565676 (-2.522827) | 0.057088 / 0.424275 (-0.367187) | 0.007627 / 0.007607 (0.000019) | 0.460510 / 0.226044 (0.234465) | 4.602019 / 2.268929 (2.333090) | 2.390984 / 55.444624 (-53.053641) | 1.976150 / 6.876477 (-4.900327) | 2.193394 / 2.142072 (0.051322) | 0.582775 / 4.805227 (-4.222453) | 0.133408 / 6.500664 (-6.367256) | 0.060577 / 0.075469 (-0.014893) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.248505 / 1.841788 (-0.593283) | 19.771301 / 8.074308 (11.696993) | 14.327871 / 10.191392 (4.136479) | 0.155288 / 0.680424 (-0.525136) | 0.018310 / 0.534201 (-0.515891) | 0.393664 / 0.579283 (-0.185619) | 0.410578 / 0.434364 (-0.023786) | 0.459301 / 0.540337 (-0.081037) | 0.631921 / 1.386936 (-0.755015) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006827 / 0.011353 (-0.004526) | 0.004094 / 0.011008 (-0.006915) | 0.065299 / 0.038508 (0.026791) | 0.079496 / 0.023109 (0.056387) | 0.403661 / 0.275898 (0.127763) | 0.434449 / 0.323480 (0.110969) | 0.005398 / 0.007986 (-0.002588) | 0.003410 / 0.004328 (-0.000919) | 0.064832 / 0.004250 (0.060582) | 0.056303 / 0.037052 (0.019250) | 0.397848 / 0.258489 (0.139359) | 0.438244 / 0.293841 (0.144403) | 0.032637 / 0.128546 (-0.095909) | 0.008584 / 0.075646 (-0.067063) | 0.071406 / 0.419271 (-0.347866) | 0.048265 / 0.043533 (0.004732) | 0.397814 / 0.255139 (0.142675) | 0.421601 / 0.283200 (0.138402) | 0.023815 / 0.141683 (-0.117868) | 1.504814 / 1.452155 (0.052659) | 1.577185 / 1.492716 (0.084469) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231775 / 0.018006 (0.213769) | 0.445437 / 0.000490 (0.444948) | 0.005252 / 0.000200 (0.005052) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032777 / 0.037411 (-0.004634) | 0.095054 / 0.014526 (0.080528) | 0.106429 / 0.176557 (-0.070127) | 0.160111 / 0.737135 (-0.577024) | 0.108075 / 0.296338 (-0.188263) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426034 / 0.215209 (0.210825) | 4.244668 / 2.077655 (2.167013) | 2.257938 / 1.504120 (0.753818) | 2.087993 / 1.541195 (0.546798) | 2.170878 / 1.468490 (0.702387) | 0.485228 / 4.584777 (-4.099549) | 3.725912 / 3.745712 (-0.019800) | 3.286925 / 5.269862 (-1.982937) | 2.059929 / 4.565676 (-2.505748) | 0.057813 / 0.424275 (-0.366462) | 0.007518 / 0.007607 (-0.000089) | 0.506632 / 0.226044 (0.280588) | 5.048340 / 2.268929 (2.779411) | 2.744756 / 55.444624 (-52.699869) | 2.406636 / 6.876477 (-4.469841) | 2.617552 / 2.142072 (0.475480) | 0.588476 / 4.805227 (-4.216751) | 0.133518 / 6.500664 (-6.367146) | 0.060778 / 0.075469 (-0.014691) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356416 / 1.841788 (-0.485372) | 20.467516 / 8.074308 (12.393208) | 15.265443 / 10.191392 (5.074051) | 0.169201 / 0.680424 (-0.511223) | 0.020087 / 0.534201 (-0.514114) | 0.402332 / 0.579283 (-0.176951) | 0.414848 / 0.434364 (-0.019516) | 0.470422 / 0.540337 (-0.069916) | 0.647266 / 1.386936 (-0.739670) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#eb001b4cee7f1d71e393c3ad489a8a5cd8119df5 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005804 / 0.011353 (-0.005549) | 0.003519 / 0.011008 (-0.007489) | 0.080003 / 0.038508 (0.041495) | 0.055419 / 0.023109 (0.032309) | 0.395254 / 0.275898 (0.119356) | 0.432714 / 0.323480 (0.109234) | 0.004438 / 0.007986 (-0.003548) | 0.002832 / 0.004328 (-0.001496) | 0.062026 / 0.004250 (0.057775) | 0.044334 / 0.037052 (0.007282) | 0.401278 / 0.258489 (0.142789) | 0.451516 / 0.293841 (0.157675) | 0.026791 / 0.128546 (-0.101755) | 0.007946 / 0.075646 (-0.067700) | 0.265166 / 0.419271 (-0.154106) | 0.044119 / 0.043533 (0.000586) | 0.399621 / 0.255139 (0.144482) | 0.422808 / 0.283200 (0.139609) | 0.019998 / 0.141683 (-0.121685) | 1.433559 / 1.452155 (-0.018596) | 1.596902 / 1.492716 (0.104186) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195662 / 0.018006 (0.177656) | 0.423167 / 0.000490 (0.422677) | 0.003426 / 0.000200 (0.003227) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023318 / 0.037411 (-0.014094) | 0.072532 / 0.014526 (0.058006) | 0.082181 / 0.176557 (-0.094375) | 0.142214 / 0.737135 (-0.594921) | 0.083423 / 0.296338 (-0.212915) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402270 / 0.215209 (0.187061) | 4.027607 / 2.077655 (1.949953) | 2.059803 / 1.504120 (0.555684) | 1.865115 / 1.541195 (0.323920) | 1.934976 / 1.468490 (0.466485) | 0.502145 / 4.584777 (-4.082632) | 2.970865 / 3.745712 (-0.774847) | 2.784155 / 5.269862 (-2.485707) | 1.822003 / 4.565676 (-2.743673) | 0.057699 / 0.424275 (-0.366576) | 0.006668 / 0.007607 (-0.000939) | 0.471164 / 0.226044 (0.245120) | 4.733079 / 2.268929 (2.464150) | 2.445119 / 55.444624 (-52.999505) | 2.132956 / 6.876477 (-4.743521) | 2.335998 / 2.142072 (0.193926) | 0.594881 / 4.805227 (-4.210347) | 0.125801 / 6.500664 (-6.374863) | 0.060780 / 0.075469 (-0.014689) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.233170 / 1.841788 (-0.608618) | 17.942205 / 8.074308 (9.867897) | 13.587020 / 10.191392 (3.395628) | 0.142110 / 0.680424 (-0.538314) | 0.016600 / 0.534201 (-0.517601) | 0.328659 / 0.579283 (-0.250624) | 0.347759 / 0.434364 (-0.086605) | 0.378651 / 0.540337 (-0.161687) | 0.523474 / 
1.386936 (-0.863462) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006028 / 0.011353 (-0.005325) | 0.003552 / 0.011008 (-0.007456) | 0.062175 / 0.038508 (0.023667) | 0.057602 / 0.023109 (0.034493) | 0.444585 / 0.275898 (0.168687) | 0.471238 / 0.323480 (0.147758) | 0.004562 / 0.007986 (-0.003423) | 0.002871 / 0.004328 (-0.001457) | 0.063101 / 0.004250 (0.058851) | 0.046072 / 0.037052 (0.009020) | 0.448253 / 0.258489 (0.189764) | 0.478734 / 0.293841 (0.184893) | 0.028463 / 0.128546 (-0.100084) | 0.008090 / 0.075646 (-0.067557) | 0.068142 / 0.419271 (-0.351130) | 0.040517 / 0.043533 (-0.003016) | 0.447145 / 0.255139 (0.192006) | 0.469472 / 0.283200 (0.186273) | 0.019391 / 0.141683 (-0.122291) | 1.471195 / 1.452155 (0.019040) | 1.532966 / 1.492716 (0.040249) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259894 / 0.018006 (0.241888) | 0.412987 / 0.000490 (0.412497) | 0.020780 / 0.000200 (0.020580) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026352 / 0.037411 (-0.011060) | 0.080024 / 0.014526 (0.065498) | 0.088041 / 0.176557 (-0.088516) | 0.142987 / 0.737135 (-0.594148) | 0.090108 / 0.296338 (-0.206231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458874 / 0.215209 (0.243665) | 4.573005 / 2.077655 (2.495351) | 2.507885 / 1.504120 (1.003765) | 2.335432 / 1.541195 (0.794238) | 2.379617 / 1.468490 (0.911126) | 
0.503331 / 4.584777 (-4.081446) | 3.078284 / 3.745712 (-0.667428) | 2.750580 / 5.269862 (-2.519282) | 1.828100 / 4.565676 (-2.737577) | 0.057572 / 0.424275 (-0.366703) | 0.006553 / 0.007607 (-0.001054) | 0.532283 / 0.226044 (0.306239) | 5.310584 / 2.268929 (3.041656) | 2.943559 / 55.444624 (-52.501065) | 2.587544 / 6.876477 (-4.288932) | 2.718261 / 2.142072 (0.576188) | 0.590267 / 4.805227 (-4.214961) | 0.123229 / 6.500664 (-6.377435) | 0.060219 / 0.075469 (-0.015250) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340773 / 1.841788 (-0.501014) | 18.420766 / 8.074308 (10.346458) | 14.630550 / 10.191392 (4.439158) | 0.146666 / 0.680424 (-0.533758) | 0.017905 / 0.534201 (-0.516296) | 0.332483 / 0.579283 (-0.246801) | 0.355490 / 0.434364 (-0.078874) | 0.382618 / 0.540337 (-0.157720) | 0.531336 / 1.386936 (-0.855600) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d438617fc577bc0222527714edafea0c52ebf239 \"CML watermark\")\n", "There were CI errors unrelated to this PR.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008702 / 0.011353 (-0.002651) | 0.005060 / 0.011008 (-0.005948) | 0.097017 / 0.038508 (0.058509) | 0.073740 / 0.023109 (0.050631) | 0.435138 / 0.275898 (0.159240) | 0.512776 / 0.323480 (0.189296) | 0.006186 / 0.007986 (-0.001800) | 0.003970 / 0.004328 (-0.000358) | 0.089523 / 0.004250 (0.085273) | 0.054441 / 0.037052 (0.017389) | 0.447415 / 0.258489 (0.188926) | 0.464851 / 0.293841 (0.171010) | 0.050264 / 0.128546 (-0.078283) | 0.016643 / 0.075646 (-0.059004) | 0.350565 / 0.419271 (-0.068707) | 0.071220 / 0.043533 (0.027687) | 0.432531 / 0.255139 (0.177392) | 0.472994 / 0.283200 (0.189795) | 0.040229 / 0.141683 (-0.101454) | 1.743431 / 1.452155 (0.291276) | 1.778653 / 1.492716 (0.285936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261992 / 0.018006 (0.243986) | 0.571979 / 0.000490 
(0.571489) | 0.006270 / 0.000200 (0.006071) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027821 / 0.037411 (-0.009590) | 0.081874 / 0.014526 (0.067348) | 0.103725 / 0.176557 (-0.072831) | 0.170593 / 0.737135 (-0.566542) | 0.108749 / 0.296338 (-0.187590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.690774 / 0.215209 (0.475565) | 6.770902 / 2.077655 (4.693247) | 2.887218 / 1.504120 (1.383098) | 2.456226 / 1.541195 (0.915032) | 2.509422 / 1.468490 (1.040932) | 0.768451 / 4.584777 (-3.816326) | 4.988933 / 3.745712 (1.243221) | 4.151460 / 5.269862 (-1.118402) | 2.640472 / 4.565676 (-1.925205) | 0.093522 / 0.424275 (-0.330753) | 0.008614 / 0.007607 (0.001007) | 0.696281 / 0.226044 (0.470237) | 6.721077 / 2.268929 (4.452149) | 3.229760 / 55.444624 (-52.214864) | 2.668521 / 6.876477 (-4.207956) | 2.866420 / 2.142072 (0.724347) | 0.945328 / 4.805227 (-3.859899) | 0.197645 / 6.500664 (-6.303019) | 0.074442 / 0.075469 (-0.001027) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.630468 / 1.841788 (-0.211320) | 22.991661 / 8.074308 (14.917353) | 19.816919 / 10.191392 (9.625527) | 0.257410 / 0.680424 (-0.423014) | 0.027228 / 0.534201 (-0.506973) | 0.444515 / 0.579283 (-0.134768) | 0.597067 / 0.434364 (0.162703) | 0.528151 / 0.540337 (-0.012186) | 0.771276 / 1.386936 (-0.615660) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009154 / 0.011353 (-0.002199) | 0.004648 / 0.011008 (-0.006360) | 0.073054 / 0.038508 (0.034546) | 0.077146 / 0.023109 (0.054037) | 0.481659 / 0.275898 (0.205761) | 0.516985 / 0.323480 (0.193505) | 0.007447 / 0.007986 (-0.000538) | 0.003890 / 0.004328 (-0.000438) | 0.078701 / 0.004250 (0.074450) | 0.059183 / 0.037052 (0.022131) | 0.475350 / 0.258489 (0.216861) | 0.547834 / 0.293841 (0.253993) | 0.058440 / 0.128546 (-0.070106) | 0.013563 / 0.075646 (-0.062083) | 0.084320 / 0.419271 (-0.334951) | 0.065965 / 0.043533 (0.022433) | 0.483541 / 0.255139 (0.228402) | 0.513940 / 0.283200 (0.230740) | 0.042889 / 0.141683 (-0.098794) | 1.676050 / 1.452155 (0.223895) | 1.759206 / 1.492716 (0.266489) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274848 / 0.018006 (0.256841) | 0.588965 / 0.000490 (0.588475) | 0.006312 / 0.000200 (0.006112) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033871 / 0.037411 (-0.003540) | 0.104013 / 0.014526 (0.089487) | 0.118457 / 0.176557 (-0.058099) | 0.178268 / 0.737135 (-0.558868) | 0.116972 / 0.296338 (-0.179366) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.609952 / 0.215209 (0.394743) | 5.788754 / 2.077655 (3.711100) | 2.812166 / 1.504120 (1.308046) | 2.362861 / 1.541195 (0.821666) | 2.641295 / 1.468490 (1.172804) | 0.767601 / 4.584777 (-3.817176) | 5.027439 / 3.745712 (1.281727) | 4.612511 / 5.269862 (-0.657351) | 2.654364 / 4.565676 (-1.911312) | 0.103100 / 0.424275 (-0.321175) | 0.012233 / 0.007607 (0.004626) | 0.749283 / 0.226044 (0.523238) | 7.511093 / 2.268929 (5.242165) | 3.585867 / 55.444624 (-51.858757) | 3.255110 / 6.876477 (-3.621366) | 3.260174 / 2.142072 (1.118102) | 0.958422 / 4.805227 (-3.846806) | 0.209096 / 6.500664 (-6.291568) | 0.075014 / 0.075469 (-0.000455) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.728283 / 1.841788 (-0.113504) | 25.411147 / 8.074308 (17.336839) | 21.335202 / 10.191392 (11.143810) | 0.199090 / 0.680424 (-0.481334) | 0.031288 / 0.534201 (-0.502913) | 0.449226 / 0.579283 (-0.130057) | 0.555570 / 0.434364 (0.121206) | 0.570297 / 0.540337 (0.029960) | 0.758673 / 1.386936 (-0.628263) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fa696b4b4f0d11c5b8592eb31cb1d54a707e3d33 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006862 / 0.011353 (-0.004491) | 0.003959 / 0.011008 (-0.007049) | 0.087219 / 0.038508 (0.048711) | 0.078335 / 0.023109 (0.055226) | 0.319019 / 0.275898 (0.043121) | 0.342871 / 0.323480 (0.019391) | 0.004065 / 0.007986 (-0.003921) | 0.004346 / 0.004328 (0.000017) | 0.065243 / 0.004250 (0.060993) | 0.056698 / 0.037052 (0.019646) | 0.326906 / 0.258489 (0.068417) | 0.354323 / 0.293841 (0.060482) | 0.031252 / 0.128546 (-0.097295) | 0.008587 / 0.075646 (-0.067060) | 0.300323 / 0.419271 (-0.118948) | 0.052810 / 0.043533 (0.009277) | 0.323866 / 0.255139 (0.068727) | 0.346011 / 0.283200 (0.062811) | 0.025584 / 0.141683 (-0.116099) | 1.464475 / 1.452155 (0.012320) | 1.530868 / 1.492716 (0.038152) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208927 / 0.018006 (0.190921) | 0.454147 / 0.000490 (0.453657) | 0.003945 / 0.000200 (0.003746) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029901 / 0.037411 (-0.007511) | 0.088889 / 0.014526 (0.074363) | 0.098181 / 0.176557 (-0.078375) | 0.156787 / 0.737135 (-0.580349) | 0.099015 / 0.296338 (-0.197324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384981 / 0.215209 
(0.169772) | 3.831040 / 2.077655 (1.753386) | 1.858312 / 1.504120 (0.354192) | 1.686846 / 1.541195 (0.145651) | 1.771509 / 1.468490 (0.303019) | 0.485618 / 4.584777 (-4.099159) | 3.430961 / 3.745712 (-0.314751) | 3.264489 / 5.269862 (-2.005372) | 2.040125 / 4.565676 (-2.525551) | 0.057218 / 0.424275 (-0.367057) | 0.007640 / 0.007607 (0.000033) | 0.468072 / 0.226044 (0.242027) | 4.677214 / 2.268929 (2.408286) | 2.348425 / 55.444624 (-53.096199) | 1.994352 / 6.876477 (-4.882125) | 2.217020 / 2.142072 (0.074948) | 0.587467 / 4.805227 (-4.217760) | 0.133550 / 6.500664 (-6.367114) | 0.060571 / 0.075469 (-0.014898) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.271003 / 1.841788 (-0.570785) | 19.986365 / 8.074308 (11.912057) | 14.574046 / 10.191392 (4.382654) | 0.146212 / 0.680424 (-0.534212) | 0.018320 / 0.534201 (-0.515881) | 0.394524 / 0.579283 (-0.184759) | 0.399707 / 0.434364 (-0.034657) | 0.458965 / 0.540337 (-0.081372) | 0.619940 / 1.386936 (-0.766996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006982 / 0.011353 (-0.004371) | 0.004061 / 0.011008 (-0.006947) | 0.064520 / 0.038508 (0.026012) | 0.076828 / 0.023109 (0.053719) | 0.402989 / 0.275898 (0.127090) | 0.439697 / 0.323480 (0.116217) | 0.005511 / 0.007986 (-0.002475) | 0.003378 / 0.004328 (-0.000950) | 0.064727 / 0.004250 (0.060477) | 0.058114 / 0.037052 (0.021062) | 0.402054 / 0.258489 (0.143565) | 0.442377 / 0.293841 (0.148536) | 0.032808 / 0.128546 (-0.095738) | 0.008604 / 0.075646 (-0.067043) | 0.070994 / 0.419271 (-0.348278) | 0.048738 / 0.043533 (0.005205) | 0.399786 / 0.255139 (0.144647) | 0.423537 / 0.283200 (0.140338) | 0.022397 / 0.141683 (-0.119286) | 1.504613 / 1.452155 (0.052458) | 1.571064 / 1.492716 (0.078348) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226876 / 0.018006 (0.208870) | 0.451477 / 0.000490 (0.450987) | 0.004511 / 0.000200 (0.004311) | 0.000095 / 0.000054 
(0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032998 / 0.037411 (-0.004413) | 0.095843 / 0.014526 (0.081317) | 0.105684 / 0.176557 (-0.070873) | 0.158175 / 0.737135 (-0.578960) | 0.107297 / 0.296338 (-0.189041) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434912 / 0.215209 (0.219703) | 4.326394 / 2.077655 (2.248740) | 2.287310 / 1.504120 (0.783190) | 2.127987 / 1.541195 (0.586793) | 2.202485 / 1.468490 (0.733995) | 0.494305 / 4.584777 (-4.090472) | 3.575176 / 3.745712 (-0.170536) | 3.354358 / 5.269862 (-1.915504) | 2.074293 / 4.565676 (-2.491383) | 0.058967 / 0.424275 (-0.365308) | 0.007712 / 0.007607 (0.000105) | 0.513734 / 0.226044 (0.287690) | 5.107538 / 2.268929 (2.838610) | 2.776190 / 55.444624 (-52.668434) | 2.425051 / 6.876477 (-4.451426) | 2.666715 / 2.142072 (0.524643) | 0.598844 / 4.805227 (-4.206383) | 0.134186 / 6.500664 (-6.366478) | 0.062403 / 0.075469 (-0.013066) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.346730 / 1.841788 (-0.495058) | 20.533190 / 8.074308 (12.458882) | 15.174443 / 10.191392 (4.983051) | 0.167204 / 0.680424 (-0.513219) | 0.020619 / 0.534201 (-0.513582) | 0.399033 / 0.579283 (-0.180250) | 0.394428 / 0.434364 (-0.039936) | 0.468792 / 0.540337 (-0.071545) | 0.640122 / 1.386936 (-0.746814) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2c4c2b529e2a262a5006e4caa55fbc003378006a \"CML watermark\")\n" ]
"2023-09-04T06:07:12Z"
"2023-09-04T09:22:19Z"
"2023-09-04T09:13:32Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6208.diff", "html_url": "https://github.com/huggingface/datasets/pull/6208", "merged_at": "2023-09-04T09:13:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/6208.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6208" }
This PR is a hotfix of #6207, which introduced the filtering out of `.zip` extensions; this PR reverts that (see the illustrative sketch below). Maybe we should do patch releases: the bug was introduced in 2.13.1. CC: @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6208/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6208/timeline
null
null
true
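For context on the revert in #6208 above: a toy sketch (not the actual `datasets` internals; the names below are hypothetical) of how filtering resolved data files by extension can silently break zip-based repositories.

```python
# Hypothetical illustration only, not the real data files resolution code.
# If ".zip" were dropped from the allowed extensions, a repository whose
# data lives in zip archives would resolve to an empty file list.
ALLOWED_EXTENSIONS = {".csv", ".json", ".parquet", ".zip"}  # assumed set

def filter_data_files(files):
    return [f for f in files if any(f.endswith(ext) for ext in ALLOWED_EXTENSIONS)]

print(filter_data_files(["data/train.zip", "README.md"]))  # ['data/train.zip']
```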
https://api.github.com/repos/huggingface/datasets/issues/4576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4576/comments
https://api.github.com/repos/huggingface/datasets/issues/4576/events
https://github.com/huggingface/datasets/pull/4576
1,285,698,576
PR_kwDODunzps46aSN_
4,576
Include `metadata.jsonl` in resolved data files
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I still don't know if the way we implemented data files resolution could support the metadata.jsonl file without bad side effects for the other packaged builders. In particular here if you have a folder of csv/parquet/whatever files and a metadata.jsonl file, it would return \r\n```\r\nsplit: patterns_dict[split] + [METADATA_PATTERN]\r\n```\r\nwhich is a bit unexpected and can lead to errors.\r\n\r\nMaybe this logic can be specific to imagefolder somehow ? This could be an additional pattern `[\"metadata.jsonl\", \"**/metadata.jsonl\"]` just for imagefolder, that is only used when `data_files=` is not specified by the user.\r\n\r\nI guess it's ok to have patterns that lead to duplicate metadata.jsonl files for imagefolder, since the imagefolder logic only considers the closest metadata file for each image.\r\n\r\nWhat do you think ?", "Yes, that's indeed the problem. My solution in https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 that accounts for that (include metadata files only if image files are present; not ideal): https://github.com/huggingface/datasets/blob/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95/src/datasets/data_files.py#L119-L125.\r\nPerhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as `imagefolder` and append metadata files to already resolved data files (if there are any). WDYT?", "@lhoestq \r\n\r\n> Perhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as imagefolder and append metadata files to already resolved data files (if there are any). WDYT?\r\n\r\nI decided to go with this approach.\r\n\r\n Not sure if you meant the same thing with this comment:\r\n\r\n> Maybe this logic can be specific to imagefolder somehow ? This could be an additional pattern [\"metadata.jsonl\", \"**/metadata.jsonl\"] just for imagefolder, that is only used when data_files= is not specified by the user.\r\n\r\n\r\nIt adds more code but is easy to follow IMO.\r\n", "The CI still struggles but you can merge since at least one of the two WIN CI succeeded" ]
"2022-06-27T12:01:29Z"
"2022-07-01T12:44:55Z"
"2022-06-30T10:15:32Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4576.diff", "html_url": "https://github.com/huggingface/datasets/pull/4576", "merged_at": "2022-06-30T10:15:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/4576.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4576" }
Include `metadata.jsonl` in resolved data files. Fix #4548 @lhoestq ~~https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 adds support for metadata files placed at the root, and https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 also accounts for nested metadata files, but this results in more complex code. Let me know which one of these two approaches you prefer.~~ Maybe https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 is good enough for now (for the sake of simplicity). https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 breaks the imagefolder tests due to duplicates in the resolved metadata files. One way to fix this would be to resolve the metadata pattern only on parent directories, but this adds even more logic to `_get_data_files_patterns`, so I'm not sure this is what we should do. (A rough sketch of the resolution idea follows below.)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4576/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4576/timeline
null
null
true
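A rough sketch of the resolution idea discussed in PR #4576 above: append any `metadata.jsonl` files found under the data directory to the already-resolved image files. The function name and exact patterns are assumptions for illustration, not the actual `datasets` implementation.

```python
from pathlib import Path

# Patterns mentioned in the discussion above; duplicates across patterns are
# harmless since imagefolder only uses the closest metadata file per image.
METADATA_PATTERNS = ["metadata.jsonl", "**/metadata.jsonl"]

def resolve_with_metadata(data_dir, image_files):
    metadata_files = {
        str(p) for pattern in METADATA_PATTERNS for p in Path(data_dir).glob(pattern)
    }
    return list(image_files) + sorted(metadata_files)

print(resolve_with_metadata(".", ["img/0.png", "img/1.png"]))
```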
https://api.github.com/repos/huggingface/datasets/issues/3646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3646/comments
https://api.github.com/repos/huggingface/datasets/issues/3646/events
https://github.com/huggingface/datasets/pull/3646
1,116,544,627
PR_kwDODunzps4xsX66
3,646
Fix streaming datasets that are not reset correctly
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Works smoothly with the `transformers.Trainer` class now, thank you!" ]
"2022-01-27T17:21:02Z"
"2022-01-28T16:34:29Z"
"2022-01-28T16:34:28Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3646.diff", "html_url": "https://github.com/huggingface/datasets/pull/3646", "merged_at": "2022-01-28T16:34:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/3646.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3646" }
Streaming datasets that use `StreamingDownloadManager.iter_archive` and `StreamingDownloadManager.iter_files` had some issues. Indeed, if you try to iterate over such a dataset twice, the second time it will be empty. This is because the two methods above are generator functions. I fixed this by making them return iterables that are reset properly instead (a minimal illustration of the pattern follows below). Close https://github.com/huggingface/datasets/issues/3645 cc @anton-l
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3646/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3646/timeline
null
null
true
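The bug fixed in #3646 above boils down to a standard Python pitfall. A minimal illustration of the generator-vs-iterable distinction (not the actual `StreamingDownloadManager` code):

```python
# A generator function returns an iterator that is exhausted after one pass:
def iter_files_gen(files):
    yield from files

gen = iter_files_gen(["a.txt", "b.txt"])
print(list(gen))  # ['a.txt', 'b.txt']
print(list(gen))  # [] <- the second pass over the "dataset" is empty

# Returning an iterable that restarts on every iter() call fixes this:
class FilesIterable:
    def __init__(self, files):
        self.files = files

    def __iter__(self):
        yield from self.files

files = FilesIterable(["a.txt", "b.txt"])
print(list(files))  # ['a.txt', 'b.txt']
print(list(files))  # ['a.txt', 'b.txt'] <- resets correctly
```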
https://api.github.com/repos/huggingface/datasets/issues/5447
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5447/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5447/comments
https://api.github.com/repos/huggingface/datasets/issues/5447/events
https://github.com/huggingface/datasets/pull/5447
1,550,599,193
PR_kwDODunzps5IM0Nu
5,447
Fix CI by temporarily pinning fsspec < 2023.1.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011875 / 0.011353 (0.000522) | 0.008188 / 0.011008 (-0.002821) | 0.131137 / 0.038508 (0.092629) | 0.038127 / 0.023109 (0.015018) | 0.383864 / 0.275898 (0.107966) | 0.458617 / 0.323480 (0.135137) | 0.010989 / 0.007986 (0.003003) | 0.004892 / 0.004328 (0.000563) | 0.101955 / 0.004250 (0.097704) | 0.045081 / 0.037052 (0.008029) | 0.409768 / 0.258489 (0.151279) | 0.446597 / 0.293841 (0.152756) | 0.058588 / 0.128546 (-0.069958) | 0.020872 / 0.075646 (-0.054774) | 0.432982 / 0.419271 (0.013711) | 0.075875 / 0.043533 (0.032342) | 0.380923 / 0.255139 (0.125784) | 0.432994 / 0.283200 (0.149795) | 0.122678 / 0.141683 (-0.019005) | 1.857865 / 1.452155 (0.405710) | 1.927801 / 1.492716 (0.435085) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212941 / 0.018006 (0.194935) | 0.527977 / 0.000490 (0.527488) | 0.002996 / 0.000200 (0.002797) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030046 / 0.037411 (-0.007366) | 0.126384 / 0.014526 (0.111858) | 0.138307 / 0.176557 (-0.038250) | 0.185338 / 0.737135 (-0.551797) | 0.144733 / 0.296338 (-0.151606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627096 / 0.215209 (0.411887) | 6.418014 / 2.077655 (4.340360) | 2.547675 / 
1.504120 (1.043555) | 2.195552 / 1.541195 (0.654357) | 2.200377 / 1.468490 (0.731887) | 1.289935 / 4.584777 (-3.294842) | 5.670839 / 3.745712 (1.925127) | 5.252597 / 5.269862 (-0.017265) | 2.878470 / 4.565676 (-1.687207) | 0.143754 / 0.424275 (-0.280521) | 0.014814 / 0.007607 (0.007207) | 0.810073 / 0.226044 (0.584028) | 8.183757 / 2.268929 (5.914829) | 3.375525 / 55.444624 (-52.069099) | 2.594048 / 6.876477 (-4.282428) | 2.598095 / 2.142072 (0.456023) | 1.554493 / 4.805227 (-3.250734) | 0.263159 / 6.500664 (-6.237505) | 0.089822 / 0.075469 (0.014353) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.660847 / 1.841788 (-0.180941) | 18.434283 / 8.074308 (10.359975) | 21.764887 / 10.191392 (11.573495) | 0.264524 / 0.680424 (-0.415900) | 0.048519 / 0.534201 (-0.485682) | 0.587468 / 0.579283 (0.008185) | 0.634142 / 0.434364 (0.199778) | 0.675374 / 0.540337 (0.135037) | 0.777510 / 1.386936 (-0.609426) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010021 / 0.011353 (-0.001332) | 0.006207 / 0.011008 (-0.004801) | 0.130490 / 0.038508 (0.091982) | 0.037957 / 0.023109 (0.014848) | 0.489381 / 0.275898 (0.213483) | 0.536522 / 0.323480 (0.213042) | 0.008611 / 0.007986 (0.000626) | 0.004894 / 0.004328 (0.000565) | 0.101617 / 0.004250 (0.097367) | 0.052629 / 0.037052 (0.015577) | 0.509211 / 0.258489 (0.250721) | 0.545023 / 0.293841 (0.251182) | 0.057468 / 0.128546 (-0.071078) | 0.023393 / 0.075646 (-0.052253) | 0.431408 / 0.419271 (0.012137) | 0.064967 / 0.043533 (0.021434) | 0.495261 / 0.255139 (0.240122) | 0.527098 / 0.283200 (0.243898) | 0.113172 / 0.141683 (-0.028511) | 1.937072 / 1.452155 (0.484918) | 2.048413 / 1.492716 (0.555697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245406 / 0.018006 (0.227399) | 0.526772 / 0.000490 (0.526283) | 0.004379 / 0.000200 (0.004179) | 0.000114 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| 
metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031785 / 0.037411 (-0.005626) | 0.130949 / 0.014526 (0.116424) | 0.145660 / 0.176557 (-0.030896) | 0.186991 / 0.737135 (-0.550144) | 0.151000 / 0.296338 (-0.145338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.708643 / 0.215209 (0.493434) | 7.179252 / 2.077655 (5.101597) | 3.143375 / 1.504120 (1.639255) | 2.714298 / 1.541195 (1.173103) | 2.773441 / 1.468490 (1.304951) | 1.312821 / 4.584777 (-3.271956) | 5.798396 / 3.745712 (2.052684) | 3.253215 / 5.269862 (-2.016646) | 2.147260 / 4.565676 (-2.418416) | 0.154673 / 0.424275 (-0.269602) | 0.014918 / 0.007607 (0.007311) | 0.860618 / 0.226044 (0.634573) | 8.774455 / 2.268929 (6.505527) | 3.925020 / 55.444624 (-51.519604) | 3.139361 / 6.876477 (-3.737115) | 3.208883 / 2.142072 (1.066810) | 1.547305 / 4.805227 (-3.257922) | 0.268814 / 6.500664 (-6.231850) | 0.084578 / 0.075469 (0.009109) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.694990 / 1.841788 (-0.146798) | 18.619183 / 8.074308 (10.544875) | 21.929886 / 10.191392 (11.738494) | 0.265763 / 0.680424 (-0.414661) | 0.028325 / 0.534201 (-0.505876) | 0.552910 / 0.579283 (-0.026373) | 0.616864 / 0.434364 (0.182500) | 0.637858 / 0.540337 (0.097521) | 0.744508 / 1.386936 (-0.642428) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5f819ba3d0306748aaf9fd8ea040b981dd08e5e5 \"CML watermark\")\n" ]
"2023-01-20T10:11:02Z"
"2023-01-20T10:38:13Z"
"2023-01-20T10:28:43Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5447.diff", "html_url": "https://github.com/huggingface/datasets/pull/5447", "merged_at": "2023-01-20T10:28:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/5447.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5447" }
Temporarily pin fsspec < 2023.1.0. Fix #5445. (A sketch of such a pin follows below.)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5447/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5447/timeline
null
null
true
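For reference, a temporary pin like the one in #5447 typically looks as follows in a dependency list; the lower bound shown is an assumption for illustration and is not copied from the repository.

```python
# Hypothetical excerpt from setup.py: keep fsspec below the breaking release.
install_requires = [
    "fsspec[http]>=2021.11.1,<2023.1.0",  # temporary pin; lower bound assumed
]
```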
https://api.github.com/repos/huggingface/datasets/issues/3299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3299/comments
https://api.github.com/repos/huggingface/datasets/issues/3299/events
https://github.com/huggingface/datasets/issues/3299
1,058,518,213
I_kwDODunzps4_F7TF
3,299
Add option to find unique elements in nested sequences when calling `Dataset.unique`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi @mariosasko!\r\n\r\nHas this been patched into any of the releases?", "Hi! Not yet, would you be interested in contributing a PR? I can give you some pointers if needed. ", "@mariosasko did this ever get implemented? Willing to help if you are still up for it.", "@dcruiz01 No, but here is an example of how to do this with the existing API:\r\n\r\n\r\n```python\r\nds = Dataset.from_dict({\"tokens\": [[\"a\", \"b\"], [\"c\", \"a\"], [\"c\", \"e\"]]})\r\n\r\ndef flatten_tokens(pa_table):\r\n return pa.table([pc.list_flatten(pa_table[\"tokens\"])], [\"flat_tokens\"])\r\n\r\nds = ds.with_format(\"arrow\")\r\nds = ds.map(flatten_tokens, batched=True)\r\nds = ds.with_format(None)\r\n\r\nunique_tokens = ds.unique(\"flat_tokens\")\r\n```\r\n\r\nWhen I think about it, `.unique` on `Sequence(Value(...))` should return unique sequences/arrays, not unique elements of these sequences..." ]
"2021-11-19T13:16:06Z"
"2023-05-19T14:45:40Z"
null
CONTRIBUTOR
null
null
null
It would be nice to have an option to flatten nested sequences to find unique elements stored in them when calling `Dataset.unique`. ~~Currently, `Dataset.unique` only supports finding unique sequences and not unique elements in that situation.~~ (A plain-Python workaround is sketched below.)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3299/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3299/timeline
null
null
false
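Alongside the pyarrow-based `map` approach shown in the comments of #3299 above, a plain-Python workaround (fine for small datasets) is to flatten the column before deduplicating:

```python
from itertools import chain

from datasets import Dataset

ds = Dataset.from_dict({"tokens": [["a", "b"], ["c", "a"], ["c", "e"]]})

# Flatten the nested column in Python, then deduplicate; the pyarrow route
# shown in the comments above scales better for large datasets.
unique_tokens = sorted(set(chain.from_iterable(ds["tokens"])))
print(unique_tokens)  # ['a', 'b', 'c', 'e']
```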
https://api.github.com/repos/huggingface/datasets/issues/1760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1760/comments
https://api.github.com/repos/huggingface/datasets/issues/1760/events
https://github.com/huggingface/datasets/pull/1760
791,110,857
MDExOlB1bGxSZXF1ZXN0NTU5MjE3MjY0
1,760
More tags
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Conll has `multilingual` but is only tagged as `en`", "good catch, that was a bad copy paste x)" ]
"2021-01-21T13:50:10Z"
"2021-01-22T09:40:01Z"
"2021-01-22T09:40:00Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1760.diff", "html_url": "https://github.com/huggingface/datasets/pull/1760", "merged_at": "2021-01-22T09:40:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/1760.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1760" }
Since the hub v2 is going to be released soon, I figured it would be great to add the missing tags, at least for some of the reference datasets listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1760/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1760/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5948/comments
https://api.github.com/repos/huggingface/datasets/issues/5948/events
https://github.com/huggingface/datasets/pull/5948
1,754,794,611
PR_kwDODunzps5S4dUt
5,948
Fix sequence of array support for most dtype
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007220 / 0.011353 (-0.004133) | 0.004558 / 0.011008 (-0.006451) | 0.116647 / 0.038508 (0.078139) | 0.046845 / 0.023109 (0.023736) | 0.352429 / 0.275898 (0.076531) | 0.429739 / 0.323480 (0.106259) | 0.006620 / 0.007986 (-0.001366) | 0.003731 / 0.004328 (-0.000597) | 0.088683 / 0.004250 (0.084433) | 0.070583 / 0.037052 (0.033530) | 0.366699 / 0.258489 (0.108210) | 0.420730 / 0.293841 (0.126889) | 0.037342 / 0.128546 (-0.091204) | 0.010041 / 0.075646 (-0.065605) | 0.383477 / 0.419271 (-0.035795) | 0.060279 / 0.043533 (0.016746) | 0.349988 / 0.255139 (0.094849) | 0.371423 / 0.283200 (0.088224) | 0.026725 / 0.141683 (-0.114958) | 1.736886 / 1.452155 (0.284731) | 1.812874 / 1.492716 (0.320157) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253256 / 0.018006 (0.235250) | 0.563470 / 0.000490 (0.562980) | 0.010475 / 0.000200 (0.010275) | 0.000164 / 0.000054 (0.000110) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030518 / 0.037411 (-0.006893) | 0.133324 / 0.014526 (0.118798) | 0.137095 / 0.176557 (-0.039461) | 0.202227 / 0.737135 (-0.534909) | 0.144195 / 0.296338 (-0.152143) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.480870 / 0.215209 (0.265661) | 4.822713 / 2.077655 (2.745058) | 
2.124183 / 1.504120 (0.620064) | 1.910733 / 1.541195 (0.369538) | 1.970266 / 1.468490 (0.501776) | 0.624695 / 4.584777 (-3.960082) | 4.459659 / 3.745712 (0.713947) | 2.210123 / 5.269862 (-3.059739) | 1.300520 / 4.565676 (-3.265157) | 0.077096 / 0.424275 (-0.347180) | 0.013333 / 0.007607 (0.005726) | 0.596841 / 0.226044 (0.370797) | 5.917397 / 2.268929 (3.648469) | 2.699397 / 55.444624 (-52.745228) | 2.274833 / 6.876477 (-4.601644) | 2.525376 / 2.142072 (0.383304) | 0.755718 / 4.805227 (-4.049510) | 0.163587 / 6.500664 (-6.337077) | 0.072817 / 0.075469 (-0.002653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.524306 / 1.841788 (-0.317481) | 18.843312 / 8.074308 (10.769004) | 15.694644 / 10.191392 (5.503252) | 0.177400 / 0.680424 (-0.503024) | 0.020104 / 0.534201 (-0.514097) | 0.466421 / 0.579283 (-0.112862) | 0.537274 / 0.434364 (0.102910) | 0.576920 / 0.540337 (0.036583) | 0.718889 / 1.386936 (-0.668047) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007671 / 0.011353 (-0.003682) | 0.004850 / 0.011008 (-0.006158) | 0.090085 / 0.038508 (0.051576) | 0.052023 / 0.023109 (0.028914) | 0.508575 / 0.275898 (0.232677) | 0.590024 / 0.323480 (0.266544) | 0.004564 / 0.007986 (-0.003422) | 0.005345 / 0.004328 (0.001017) | 0.087904 / 0.004250 (0.083653) | 0.064446 / 0.037052 (0.027394) | 0.525625 / 0.258489 (0.267136) | 0.584307 / 0.293841 (0.290466) | 0.037221 / 0.128546 (-0.091325) | 0.010588 / 0.075646 (-0.065059) | 0.098612 / 0.419271 (-0.320659) | 0.059597 / 0.043533 (0.016064) | 0.488064 / 0.255139 (0.232925) | 0.522330 / 0.283200 (0.239131) | 0.030004 / 0.141683 (-0.111679) | 1.732512 / 1.452155 (0.280357) | 1.809027 / 1.492716 (0.316310) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218741 / 0.018006 (0.200735) | 0.494946 / 0.000490 (0.494456) | 0.004580 / 0.000200 (0.004380) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034916 / 0.037411 (-0.002495) | 0.133695 / 0.014526 (0.119169) | 0.147964 / 0.176557 (-0.028592) | 0.213210 / 0.737135 (-0.523926) | 0.148850 / 0.296338 (-0.147488) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508855 / 0.215209 (0.293646) | 5.065088 / 2.077655 (2.987433) | 2.473110 / 1.504120 (0.968990) | 2.259765 / 1.541195 (0.718570) | 2.359189 / 1.468490 (0.890699) | 0.639082 / 4.584777 (-3.945695) | 4.768195 / 3.745712 (1.022482) | 2.253803 / 5.269862 (-3.016059) | 1.442996 / 4.565676 (-3.122680) | 0.078761 / 0.424275 (-0.345514) | 0.013936 / 0.007607 (0.006329) | 0.625977 / 0.226044 (0.399933) | 6.260817 / 2.268929 (3.991888) | 3.149640 / 55.444624 (-52.294985) | 2.753555 / 6.876477 (-4.122921) | 2.831872 / 2.142072 (0.689799) | 0.781294 / 4.805227 (-4.023933) | 0.169109 / 6.500664 (-6.331555) | 0.075810 / 0.075469 (0.000341) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.533282 / 1.841788 (-0.308506) | 19.460579 / 8.074308 (11.386271) | 17.250424 / 10.191392 (7.059032) | 0.193485 / 0.680424 (-0.486939) | 0.020650 / 0.534201 (-0.513551) | 0.472110 / 0.579283 (-0.107173) | 0.532276 / 0.434364 (0.097912) | 0.613152 / 0.540337 (0.072814) | 0.684684 / 1.386936 (-0.702252) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#650a86ee122209d4a8c8e8068c01ebfd3ba553f5 \"CML watermark\")\n" ]
"2023-06-13T12:38:59Z"
"2023-06-14T15:11:55Z"
"2023-06-14T15:03:33Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5948.diff", "html_url": "https://github.com/huggingface/datasets/pull/5948", "merged_at": "2023-06-14T15:03:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/5948.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5948" }
Fixes #5936. Also, a related fix to #5927.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5948/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5948/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6288/comments
https://api.github.com/repos/huggingface/datasets/issues/6288/events
https://github.com/huggingface/datasets/issues/6288
1,935,005,457
I_kwDODunzps5zVdcR
6,288
Dataset.from_pandas with a DataFrame of PIL.Images
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "A duplicate of https://github.com/huggingface/datasets/issues/4796.\r\n\r\nWe could get this for free by implementing the `Image` feature as an extension type, as shown in [this](https://colab.research.google.com/drive/1Uzm_tXVpGTwbzleDConWcNjacwO1yxE4?usp=sharing) Colab (example with UUIDs).\r\n", "+1 to this\r\nCalling this line with a df that contains a PIL image (as they are returned from load_dataset)\r\n`ds = Dataset.from_pandas(df)`\r\nResults in this error:\r\n`ArrowInvalid: ('Could not convert <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1024x1024 at 0x2B41F2D70> with type PngImageFile: did not recognize Python value type when inferring an Arrow data type', 'Conversion failed for column image with type object')`" ]
"2023-10-10T10:29:16Z"
"2023-10-12T17:36:27Z"
null
MEMBER
null
null
null
Currently, type inference doesn't know what to do with a Pandas Series of PIL.Image objects, though it would be nice to get a Dataset with the Image type this way.
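Until this is supported natively, one workaround (a minimal sketch of the approach suggested in the comments, not an endorsed API for this case; the DataFrame below is a stand-in) is to route the frame through `Dataset.from_generator` with an explicit `Image` feature:

```python
import pandas as pd
from PIL import Image
from datasets import Dataset, Features
from datasets import Image as ImageFeature

# Stand-in for a DataFrame holding PIL.Image objects (the case this issue covers).
df = pd.DataFrame({"image": [Image.new("RGB", (8, 8)) for _ in range(4)]})

# Dataset.from_pandas(df) fails to infer a type for the object-dtype column,
# so declare the feature explicitly and feed rows through a generator instead.
ds = Dataset.from_generator(
    lambda: ({"image": img} for img in df["image"]),
    features=Features({"image": ImageFeature()}),
)
print(ds.features)  # {'image': Image(...)}
```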
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6288/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6288/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3216/comments
https://api.github.com/repos/huggingface/datasets/issues/3216/events
https://github.com/huggingface/datasets/pull/3216
1,045,027,733
PR_kwDODunzps4uG1YS
3,216
Pin version exclusion for tensorflow incompatible with keras
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-11-04T17:38:06Z"
"2021-11-05T10:57:38Z"
"2021-11-05T10:57:37Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3216.diff", "html_url": "https://github.com/huggingface/datasets/pull/3216", "merged_at": "2021-11-05T10:57:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/3216.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3216" }
Once `tensorflow` version 2.6.2 (https://pypi.org/project/tensorflow/2.6.2/, commit https://github.com/tensorflow/tensorflow/commit/c1867f3bfdd1042f694df7a9870be51ba80543cb) is released with the patch tensorflow/tensorflow#52927, we can remove the temporary fix we introduced in #3208. Fix #3209.
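For illustration, a hedged sketch of what such a version exclusion looks like in a `setup.py` requirement list (the list name and exact bounds are assumptions, not the repo's actual pins):

```python
# Assumed shape of the pin; only the `!=` exclusions are the point here.
TESTS_REQUIRE = [
    # exclude the tensorflow releases that are incompatible with keras,
    # while still allowing the patched 2.6.2 once it is out
    "tensorflow>=2.3,!=2.6.0,!=2.6.1",
]
```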
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3216/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3216/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5795
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5795/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5795/comments
https://api.github.com/repos/huggingface/datasets/issues/5795/events
https://github.com/huggingface/datasets/pull/5795
1,685,414,505
PR_kwDODunzps5POJo8
5,795
Fix spark imports
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010844 / 0.011353 (-0.000509) | 0.007329 / 0.011008 (-0.003680) | 0.133764 / 0.038508 (0.095256) | 0.040213 / 0.023109 (0.017103) | 0.413466 / 0.275898 (0.137568) | 0.452860 / 0.323480 (0.129380) | 0.008109 / 0.007986 (0.000123) | 0.005773 / 0.004328 (0.001444) | 0.109969 / 0.004250 (0.105718) | 0.053001 / 0.037052 (0.015949) | 0.416377 / 0.258489 (0.157888) | 0.477486 / 0.293841 (0.183645) | 0.056556 / 0.128546 (-0.071990) | 0.024322 / 0.075646 (-0.051324) | 0.437750 / 0.419271 (0.018479) | 0.087732 / 0.043533 (0.044199) | 0.421540 / 0.255139 (0.166401) | 0.429143 / 0.283200 (0.145944) | 0.144864 / 0.141683 (0.003181) | 1.882785 / 1.452155 (0.430631) | 1.980721 / 1.492716 (0.488005) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285497 / 0.018006 (0.267491) | 0.601820 / 0.000490 (0.601331) | 0.005003 / 0.000200 (0.004804) | 0.000122 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030673 / 0.037411 (-0.006739) | 0.126883 / 0.014526 (0.112357) | 0.137677 / 0.176557 (-0.038880) | 0.211504 / 0.737135 (-0.525632) | 0.144752 / 0.296338 (-0.151587) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665845 / 0.215209 (0.450636) | 6.369040 / 2.077655 (4.291385) | 2.708979 / 
1.504120 (1.204859) | 2.370842 / 1.541195 (0.829647) | 2.445987 / 1.468490 (0.977497) | 1.260806 / 4.584777 (-3.323971) | 5.979216 / 3.745712 (2.233504) | 3.334350 / 5.269862 (-1.935512) | 2.187298 / 4.565676 (-2.378379) | 0.155494 / 0.424275 (-0.268781) | 0.017351 / 0.007607 (0.009744) | 0.853626 / 0.226044 (0.627581) | 8.375001 / 2.268929 (6.106072) | 3.528312 / 55.444624 (-51.916313) | 2.890509 / 6.876477 (-3.985968) | 3.051016 / 2.142072 (0.908944) | 1.529811 / 4.805227 (-3.275416) | 0.273883 / 6.500664 (-6.226781) | 0.086617 / 0.075469 (0.011148) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.648231 / 1.841788 (-0.193557) | 19.487109 / 8.074308 (11.412801) | 23.474621 / 10.191392 (13.283229) | 0.221392 / 0.680424 (-0.459032) | 0.028878 / 0.534201 (-0.505323) | 0.582302 / 0.579283 (0.003019) | 0.615059 / 0.434364 (0.180695) | 0.656082 / 0.540337 (0.115745) | 0.740544 / 1.386936 (-0.646392) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010687 / 0.011353 (-0.000665) | 0.007114 / 0.011008 (-0.003894) | 0.135426 / 0.038508 (0.096918) | 0.041027 / 0.023109 (0.017918) | 0.466441 / 0.275898 (0.190543) | 0.503545 / 0.323480 (0.180065) | 0.009418 / 0.007986 (0.001432) | 0.004976 / 0.004328 (0.000647) | 0.101342 / 0.004250 (0.097092) | 0.058289 / 0.037052 (0.021237) | 0.473715 / 0.258489 (0.215226) | 0.539556 / 0.293841 (0.245715) | 0.063138 / 0.128546 (-0.065408) | 0.020429 / 0.075646 (-0.055217) | 0.124179 / 0.419271 (-0.295093) | 0.066400 / 0.043533 (0.022867) | 0.450793 / 0.255139 (0.195654) | 0.494163 / 0.283200 (0.210964) | 0.131179 / 0.141683 (-0.010504) | 1.876396 / 1.452155 (0.424241) | 1.974148 / 1.492716 (0.481432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313362 / 0.018006 (0.295356) | 0.602618 / 0.000490 (0.602129) | 0.008279 / 0.000200 (0.008079) | 0.000155 / 0.000054 (0.000101) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| 
metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037250 / 0.037411 (-0.000161) | 0.144151 / 0.014526 (0.129625) | 0.155733 / 0.176557 (-0.020824) | 0.214334 / 0.737135 (-0.522801) | 0.167124 / 0.296338 (-0.129214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.686471 / 0.215209 (0.471262) | 6.749174 / 2.077655 (4.671520) | 3.024941 / 1.504120 (1.520821) | 2.553363 / 1.541195 (1.012168) | 2.679107 / 1.468490 (1.210617) | 1.317212 / 4.584777 (-3.267565) | 5.917575 / 3.745712 (2.171862) | 3.412715 / 5.269862 (-1.857146) | 2.203478 / 4.565676 (-2.362198) | 0.150387 / 0.424275 (-0.273888) | 0.015977 / 0.007607 (0.008370) | 0.862999 / 0.226044 (0.636954) | 8.706459 / 2.268929 (6.437530) | 3.762648 / 55.444624 (-51.681977) | 2.992544 / 6.876477 (-3.883933) | 3.135796 / 2.142072 (0.993724) | 1.504140 / 4.805227 (-3.301088) | 0.268265 / 6.500664 (-6.232399) | 0.083297 / 0.075469 (0.007828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.690193 / 1.841788 (-0.151594) | 19.912854 / 8.074308 (11.838546) | 23.568217 / 10.191392 (13.376825) | 0.285125 / 0.680424 (-0.395299) | 0.030593 / 0.534201 (-0.503608) | 0.565305 / 0.579283 (-0.013978) | 0.659283 / 0.434364 (0.224919) | 0.678864 / 0.540337 (0.138527) | 0.793634 / 1.386936 (-0.593302) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9d0edbe3f3258b7e580d1b58c0eea6637b5e22b2 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011615 / 0.011353 (0.000262) | 0.006716 / 0.011008 (-0.004292) | 0.146868 / 0.038508 (0.108360) | 0.037621 / 0.023109 (0.014512) | 0.425563 / 0.275898 (0.149664) | 0.483217 / 0.323480 (0.159737) | 0.007830 / 0.007986 (-0.000156) | 0.005940 / 0.004328 (0.001612) | 0.100771 / 0.004250 (0.096521) | 0.063907 / 0.037052 (0.026854) | 0.422993 / 0.258489 (0.164503) | 0.496514 / 0.293841 (0.202673) | 0.056004 / 0.128546 (-0.072542) | 0.021441 / 0.075646 (-0.054206) | 0.453589 / 0.419271 (0.034317) | 0.067555 / 0.043533 (0.024022) | 0.442490 / 0.255139 (0.187351) | 0.503941 / 0.283200 (0.220742) | 0.134023 / 0.141683 (-0.007660) | 1.886329 / 1.452155 (0.434175) | 2.030867 / 1.492716 (0.538150) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288063 / 0.018006 (0.270057) | 0.627177 / 0.000490 (0.626687) | 0.006335 / 0.000200 (0.006135) | 0.000171 / 0.000054 (0.000116) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032424 / 0.037411 (-0.004987) | 0.132749 / 0.014526 (0.118223) | 0.144727 / 0.176557 (-0.031829) | 0.232577 / 0.737135 (-0.504558) | 0.157315 / 0.296338 (-0.139024) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.623058 / 0.215209 (0.407849) | 6.272447 / 2.077655 (4.194792) | 2.506778 / 1.504120 (1.002658) | 2.203094 / 1.541195 (0.661899) | 2.346972 / 1.468490 (0.878482) | 1.358498 / 4.584777 (-3.226279) | 5.879670 / 3.745712 (2.133958) | 5.818406 / 5.269862 (0.548545) | 3.231936 / 4.565676 (-1.333741) | 0.154013 / 0.424275 (-0.270263) | 0.021541 / 0.007607 (0.013934) | 0.823746 / 0.226044 (0.597702) | 8.140304 / 2.268929 (5.871375) | 3.366911 / 55.444624 (-52.077714) | 2.696856 / 6.876477 (-4.179621) | 2.845743 / 2.142072 (0.703671) | 1.522363 / 4.805227 (-3.282864) | 0.278938 / 6.500664 (-6.221726) | 0.085044 / 0.075469 (0.009575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.681348 / 1.841788 (-0.160440) | 19.686703 / 8.074308 (11.612395) | 22.995655 / 10.191392 (12.804263) | 0.218876 / 0.680424 (-0.461548) | 0.029334 / 0.534201 (-0.504867) | 0.560846 / 0.579283 (-0.018438) | 0.645210 / 0.434364 (0.210846) | 0.697842 / 0.540337 (0.157505) | 0.832875 / 1.386936 (-0.554061) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009509 / 0.011353 (-0.001844) | 0.006471 / 0.011008 (-0.004537) | 0.101477 / 0.038508 (0.062969) | 0.035281 / 0.023109 (0.012171) | 0.470032 / 0.275898 (0.194134) | 0.501475 / 0.323480 (0.177995) | 0.007641 / 0.007986 (-0.000344) | 0.006784 / 0.004328 (0.002455) | 0.096111 / 0.004250 (0.091861) | 0.055199 / 0.037052 (0.018146) | 0.470095 / 0.258489 (0.211606) | 0.530955 / 0.293841 (0.237114) | 0.056161 / 0.128546 (-0.072385) | 0.022055 / 0.075646 (-0.053591) | 0.121585 / 0.419271 (-0.297686) | 0.063736 / 0.043533 (0.020203) | 0.470771 / 0.255139 (0.215632) | 0.490546 / 0.283200 (0.207346) | 0.128825 / 0.141683 (-0.012858) | 1.898639 / 1.452155 (0.446484) | 2.052305 / 1.492716 (0.559589) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.322526 / 0.018006 (0.304520) | 0.628096 / 0.000490 (0.627607) | 0.006837 / 0.000200 (0.006637) | 0.000199 / 0.000054 (0.000145) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033830 / 0.037411 (-0.003581) | 0.136217 / 0.014526 (0.121691) | 0.147006 / 0.176557 (-0.029551) | 0.203950 / 0.737135 (-0.533185) | 0.150327 / 0.296338 (-0.146011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.654287 / 0.215209 (0.439078) | 6.430306 / 2.077655 (4.352651) | 2.881750 / 1.504120 (1.377630) | 2.489505 / 1.541195 (0.948310) | 2.543037 / 1.468490 (1.074547) | 1.226682 / 4.584777 
(-3.358094) | 5.902076 / 3.745712 (2.156364) | 3.335344 / 5.269862 (-1.934518) | 2.156738 / 4.565676 (-2.408939) | 0.151804 / 0.424275 (-0.272472) | 0.015238 / 0.007607 (0.007631) | 0.816364 / 0.226044 (0.590319) | 8.126367 / 2.268929 (5.857438) | 3.653222 / 55.444624 (-51.791402) | 2.886667 / 6.876477 (-3.989809) | 3.120852 / 2.142072 (0.978779) | 1.421423 / 4.805227 (-3.383804) | 0.264590 / 6.500664 (-6.236074) | 0.085716 / 0.075469 (0.010247) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745258 / 1.841788 (-0.096530) | 19.379253 / 8.074308 (11.304945) | 23.827046 / 10.191392 (13.635654) | 0.267702 / 0.680424 (-0.412722) | 0.030253 / 0.534201 (-0.503948) | 0.542037 / 0.579283 (-0.037246) | 0.655946 / 0.434364 (0.221582) | 0.683525 / 0.540337 (0.143188) | 0.831333 / 1.386936 (-0.555603) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5b011a258329375aa4dc7b414bd4e7b6363c5357 \"CML watermark\")\n" ]
"2023-04-26T17:09:32Z"
"2023-04-26T17:49:03Z"
"2023-04-26T17:39:12Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5795.diff", "html_url": "https://github.com/huggingface/datasets/pull/5795", "merged_at": "2023-04-26T17:39:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5795.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5795" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5795/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5795/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5322/comments
https://api.github.com/repos/huggingface/datasets/issues/5322/events
https://github.com/huggingface/datasets/pull/5322
1,471,502,162
PR_kwDODunzps5EEeQP
5,322
Raise error for `.tar` archives in the same way as for `.tar.gz` and `.tgz` in `_get_extraction_protocol`
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-12-01T15:19:28Z"
"2022-12-14T16:37:16Z"
"2022-12-14T16:33:30Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5322.diff", "html_url": "https://github.com/huggingface/datasets/pull/5322", "merged_at": "2022-12-14T16:33:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/5322.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5322" }
Currently, `download_and_extract` doesn't throw an error when used with `.tar` files in streaming mode, because `_get_extraction_protocol` doesn't raise for them (as it does for `.tar.gz` and `.tgz`). Instead, `_get_extraction_protocol` returns a formatted URL as if we supported a `tar` protocol, which we don't. As a result, dataset scripts would attempt to load `.tar` files and fail during example generation (after `download_and_extract` has run). So this PR raises an error for `.tar` files too.
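A simplified sketch of the guard this PR describes (function body abridged; the exact error message is an assumption):

```python
def _get_extraction_protocol(urlpath):
    path = urlpath.split("::")[0]
    # .tar now fails fast in streaming mode, like .tar.gz and .tgz already did
    if path.endswith((".tar.gz", ".tgz", ".tar")):
        raise NotImplementedError(
            f"Extraction protocol for TAR archives like '{urlpath}' is not implemented "
            "in streaming mode. Please use `dl_manager.iter_archive` instead."
        )
    # ... otherwise map known compression extensions to fsspec protocols ...
```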
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5322/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5322/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3042
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3042/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3042/comments
https://api.github.com/repos/huggingface/datasets/issues/3042/events
https://github.com/huggingface/datasets/pull/3042
1,020,047,289
PR_kwDODunzps4s5Lxo
3,042
Improving elasticsearch integration
{ "avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4", "events_url": "https://api.github.com/users/ggdupont/events{/privacy}", "followers_url": "https://api.github.com/users/ggdupont/followers", "following_url": "https://api.github.com/users/ggdupont/following{/other_user}", "gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ggdupont", "id": 5583410, "login": "ggdupont", "node_id": "MDQ6VXNlcjU1ODM0MTA=", "organizations_url": "https://api.github.com/users/ggdupont/orgs", "received_events_url": "https://api.github.com/users/ggdupont/received_events", "repos_url": "https://api.github.com/users/ggdupont/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions", "type": "User", "url": "https://api.github.com/users/ggdupont" }
[]
open
false
null
[]
null
[ "@lhoestq @albertvillanova Iwas trying to fix the failing tests in circleCI but is there a test elasticsearch instance somewhere? If not, can I launch a docker container to have one?" ]
"2021-10-07T13:28:35Z"
"2022-07-06T15:19:48Z"
null
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3042.diff", "html_url": "https://github.com/huggingface/datasets/pull/3042", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3042.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3042" }
- adding a MurmurHash signature to each sample in the index - adding optional credentials for a remote Elasticsearch server - enabling sample updates in the index - upgrading the elasticsearch Python client to 7.10.1 - adding an ElasticsearchBuilder to instantiate a dataset from an index and a filtering query
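A hedged sketch of how the optional credentials could be used with the 7.10.1 client and the existing index API (host and auth values are placeholders):

```python
from elasticsearch import Elasticsearch  # elasticsearch==7.10.1, per this PR
from datasets import load_dataset

es_client = Elasticsearch(
    hosts=[{"host": "my-remote-es.example.com", "port": 9200}],  # placeholder host
    http_auth=("elastic", "changeme"),  # optional basic-auth credentials
    use_ssl=True,
)

ds = load_dataset("crime_and_punish", split="train[:100]")
ds.add_elasticsearch_index("line", es_client=es_client)  # index the "line" column
```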
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3042/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3042/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2934/comments
https://api.github.com/repos/huggingface/datasets/issues/2934/events
https://github.com/huggingface/datasets/issues/2934
999,477,413
I_kwDODunzps47ktCl
2,934
to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "I did some investigation and, as it seems, the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) solution involves wrapping the linked dataset with `weakref.proxy` and adding a custom `__del__` to `tf.python.data.ops.dataset_ops.TensorSliceDataset` (this is the type of a dataset that is returned by `tf.data.Dataset.from_tensor_slices`; this works for TF 2.x, but I'm not sure `tf.python.data.ops.dataset_ops` is a valid path for TF 1.x) that deletes the linked dataset, which is assigned to the dataset object as a property. Will open a draft PR soon!", "Thanks a lot for investigating !" ]
"2021-09-17T15:26:53Z"
"2021-10-13T09:03:23Z"
"2021-10-13T09:03:23Z"
MEMBER
null
null
null
To reproduce: ```python import datasets as ds import weakref import gc d = ds.load_dataset("mnist", split="train") ref = weakref.ref(d._data.table) tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label") del tfd, d gc.collect() assert ref() is None, "Error: there is at least one reference left" ``` This causes issues because the table holds a reference to an open arrow file that should be closed. So on windows it's not possible to delete or move the arrow file afterwards. Moreover the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this. cc @Rocketknight1
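The fix direction sketched in the comments relies on not keeping a strong reference around; the generic pattern looks like this (a toy illustration, not the actual patch):

```python
import gc
import weakref

class Owner:  # stand-in for the dataset object holding the Arrow table
    pass

owner = Owner()
derived = weakref.proxy(owner)  # a proxy adds no strong reference
ref = weakref.ref(owner)

del owner
gc.collect()
assert ref() is None  # the owner can be collected, so its file could be closed
```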
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2934/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2934/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6261/comments
https://api.github.com/repos/huggingface/datasets/issues/6261/events
https://github.com/huggingface/datasets/issues/6261
1,913,813,178
I_kwDODunzps5yEni6
6,261
Can't load a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/37955817?v=4", "events_url": "https://api.github.com/users/joaopedrosdmm/events{/privacy}", "followers_url": "https://api.github.com/users/joaopedrosdmm/followers", "following_url": "https://api.github.com/users/joaopedrosdmm/following{/other_user}", "gists_url": "https://api.github.com/users/joaopedrosdmm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joaopedrosdmm", "id": 37955817, "login": "joaopedrosdmm", "node_id": "MDQ6VXNlcjM3OTU1ODE3", "organizations_url": "https://api.github.com/users/joaopedrosdmm/orgs", "received_events_url": "https://api.github.com/users/joaopedrosdmm/received_events", "repos_url": "https://api.github.com/users/joaopedrosdmm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joaopedrosdmm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joaopedrosdmm/subscriptions", "type": "User", "url": "https://api.github.com/users/joaopedrosdmm" }
[]
closed
false
null
[]
null
[ "I believe is due to the fact that doesn't work with .tgz files.", "`JourneyDB/JourneyDB` is a gated dataset, so this error means you are not authenticated to access it, either by using an invalid token or by not agreeing to the terms in the dialog on the dataset page.\r\n\r\n> I believe is due to the fact that doesn't work with .tgz files.\r\n\r\nIndeed, the dataset's data files structure is not supported natively by `datasets`. To load it, one option is to clone the repo (or download it with `huggingface_hub.snapshot_download`) and use `Dataset.from_generator` to process the files.", "> JourneyDB/JourneyDB is a gated dataset, so this error means you are not authenticated to access it, either by using an invalid token or by not agreeing to the terms in the dialog on the dataset page.´\r\n\r\nI did authentication with:\r\n\r\n```\r\nfrom huggingface_hub import notebook_login\r\nnotebook_login()\r\n```\r\n\r\nIsn't that the correct way to do it?\r\n\r\n> Indeed, the dataset's data files structure is not supported natively by datasets. To load it, one option is to clone the repo (or download it with huggingface_hub.snapshot_download) and use Dataset.from_generator to process the files.\r\n\r\nGreat suggestion I will give it a try.", "Have you accepted the terms in the dialog [here](https://huggingface.co/datasets/JourneyDB/JourneyDB)?\r\n\r\nIIRC Kaggle preinstalls an outdated `datasets` version, so it's also a good idea to update it before importing `datasets` (and do the same for `huggingface_hub`)", "Sorry for the late reply. Yes, I did. Thanks for the tip!" ]
"2023-09-26T15:46:25Z"
"2023-10-05T10:23:23Z"
"2023-10-05T10:23:22Z"
NONE
null
null
null
### Describe the bug Can't seem to load the JourneyDB dataset. It throws the following error: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Cell In[15], line 2 1 # If the dataset is gated/private, make sure you have run huggingface-cli login ----> 2 dataset = load_dataset("JourneyDB/JourneyDB", data_files="data", use_auth_token=True) File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1664, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1661 ignore_verifications = ignore_verifications or save_infos 1663 # Create a dataset builder -> 1664 builder_instance = load_dataset_builder( 1665 path=path, 1666 name=name, 1667 data_dir=data_dir, 1668 data_files=data_files, 1669 cache_dir=cache_dir, 1670 features=features, 1671 download_config=download_config, 1672 download_mode=download_mode, 1673 revision=revision, 1674 use_auth_token=use_auth_token, 1675 **config_kwargs, 1676 ) 1678 # Return iterable dataset in case of streaming 1679 if streaming: File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1490, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1488 download_config = download_config.copy() if download_config else DownloadConfig() 1489 download_config.use_auth_token = use_auth_token -> 1490 dataset_module = dataset_module_factory( 1491 path, 1492 revision=revision, 1493 download_config=download_config, 1494 download_mode=download_mode, 1495 data_dir=data_dir, 1496 data_files=data_files, 1497 ) 1499 # Get dataset builder class from the processing script 1500 builder_cls = import_main_class(dataset_module.module_path) File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1238, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1236 raise ConnectionError(f"Couln't reach the Hugging Face Hub for dataset '{path}': {e1}") from None 1237 if isinstance(e1, FileNotFoundError): -> 1238 raise FileNotFoundError( 1239 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1240 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" 1241 ) from None 1242 raise e1 from None 1243 else: FileNotFoundError: Couldn't find a dataset script at /kaggle/working/JourneyDB/JourneyDB/JourneyDB.py or any data file in the same directory. 
Couldn't find 'JourneyDB/JourneyDB' on the Hugging Face Hub either: FileNotFoundError: Unable to find data in dataset repository JourneyDB/JourneyDB with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` ### Steps to reproduce the bug 1) ``` from huggingface_hub import notebook_login notebook_login() ``` 2) ``` !pip install -q datasets from datasets import load_dataset ``` 3) `dataset = load_dataset("JourneyDB/JourneyDB", data_files="data", use_auth_token=True)` ### Expected behavior Load the dataset ### Environment info Notebook
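A sketch of the workaround suggested in the comments: download the repo snapshot and process the archives manually (the archive path below is an assumed placeholder; this also requires being authenticated and having accepted the dataset's terms):

```python
import tarfile
from huggingface_hub import snapshot_download
from datasets import Dataset

local_dir = snapshot_download(repo_id="JourneyDB/JourneyDB", repo_type="dataset")

def gen():
    # "data/train.tgz" is an assumed file name inside the snapshot
    with tarfile.open(f"{local_dir}/data/train.tgz") as tar:
        for member in tar:
            if member.isfile():
                yield {"file_name": member.name}

ds = Dataset.from_generator(gen)
```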
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6261/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6261/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6183/comments
https://api.github.com/repos/huggingface/datasets/issues/6183/events
https://github.com/huggingface/datasets/issues/6183
1,867,743,276
I_kwDODunzps5vU4As
6,183
Load dataset with non-existent file
{ "avatar_url": "https://avatars.githubusercontent.com/u/64750224?v=4", "events_url": "https://api.github.com/users/freQuensy23-coder/events{/privacy}", "followers_url": "https://api.github.com/users/freQuensy23-coder/followers", "following_url": "https://api.github.com/users/freQuensy23-coder/following{/other_user}", "gists_url": "https://api.github.com/users/freQuensy23-coder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/freQuensy23-coder", "id": 64750224, "login": "freQuensy23-coder", "node_id": "MDQ6VXNlcjY0NzUwMjI0", "organizations_url": "https://api.github.com/users/freQuensy23-coder/orgs", "received_events_url": "https://api.github.com/users/freQuensy23-coder/received_events", "repos_url": "https://api.github.com/users/freQuensy23-coder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/freQuensy23-coder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/freQuensy23-coder/subscriptions", "type": "User", "url": "https://api.github.com/users/freQuensy23-coder" }
[]
closed
false
null
[]
null
[ "Same problem", "This was fixed in https://github.com/huggingface/datasets/pull/6155, which will be included in the next release (or you can install `datasets` from source to use it immediately)." ]
"2023-08-25T22:21:22Z"
"2023-08-29T13:26:22Z"
"2023-08-29T13:26:22Z"
NONE
null
null
null
### Describe the bug When load a dataset from datasets and pass a wrong path to json with the data, error message does not contain something abount "wrong path" or "file do not exist" - ```SchemaInferenceError: Please pass `features` or at least one example when writing data``` ### Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('json', data_files='/home/alexey/unreal_file.json') ``` ### Expected behavior Raise os FileNotFound error or custom error with informative message ### Environment info ``` # packages in environment at /home/alexey/.conda/envs/alex_LoRA: # # Name Version Build Channel _libgcc_mutex 0.1 main _openmp_mutex 5.1 1_gnu accelerate 0.21.0 pypi_0 pypi aiohttp 3.8.5 pypi_0 pypi aiosignal 1.3.1 pypi_0 pypi antlr4-python3-runtime 4.9.3 pypi_0 pypi appdirs 1.4.4 pypi_0 pypi asttokens 2.0.5 pyhd3eb1b0_0 async-timeout 4.0.3 pypi_0 pypi attrs 23.1.0 pypi_0 pypi backcall 0.2.0 pyhd3eb1b0_0 bitsandbytes 0.41.1 pypi_0 pypi bzip2 1.0.8 h7b6447c_0 ca-certificates 2023.05.30 h06a4308_0 certifi 2023.7.22 pypi_0 pypi charset-normalizer 3.2.0 pypi_0 pypi click 8.1.6 pypi_0 pypi cmake 3.27.2 pypi_0 pypi comm 0.1.2 py310h06a4308_0 contourpy 1.1.0 pypi_0 pypi cycler 0.11.0 pypi_0 pypi datasets 2.14.4 pypi_0 pypi debugpy 1.6.7 py310h6a678d5_0 decorator 5.1.1 pyhd3eb1b0_0 dill 0.3.7 pypi_0 pypi docker-pycreds 0.4.0 pypi_0 pypi executing 0.8.3 pyhd3eb1b0_0 filelock 3.12.2 pypi_0 pypi fire 0.5.0 pypi_0 pypi fonttools 4.42.0 pypi_0 pypi frozenlist 1.4.0 pypi_0 pypi fsspec 2023.6.0 pypi_0 pypi gitdb 4.0.10 pypi_0 pypi gitpython 3.1.32 pypi_0 pypi huggingface-hub 0.16.4 pypi_0 pypi idna 3.4 pypi_0 pypi ipykernel 6.25.0 py310h2f386ee_0 ipython 8.12.2 py310h06a4308_0 ipython-genutils 0.2.0 pypi_0 pypi ipywidgets 8.0.4 py310h06a4308_0 jedi 0.18.1 py310h06a4308_1 jinja2 3.1.2 pypi_0 pypi jsonschema 4.19.0 pypi_0 pypi jsonschema-specifications 2023.7.1 pypi_0 pypi jupyter_client 8.1.0 py310h06a4308_0 jupyter_core 5.3.0 py310h06a4308_0 jupyterlab_widgets 3.0.5 py310h06a4308_0 kiwisolver 1.4.4 pypi_0 pypi ld_impl_linux-64 2.38 h1181459_1 libffi 3.3 he6710b0_2 libgcc-ng 11.2.0 h1234567_1 libgomp 11.2.0 h1234567_1 libsodium 1.0.18 h7b6447c_0 libstdcxx-ng 11.2.0 h1234567_1 libuuid 1.41.5 h5eee18b_0 lightning-utilities 0.9.0 pypi_0 pypi lit 16.0.6 pypi_0 pypi markupsafe 2.1.3 pypi_0 pypi matplotlib 3.7.2 pypi_0 pypi matplotlib-inline 0.1.6 py310h06a4308_0 mpmath 1.3.0 pypi_0 pypi multidict 6.0.4 pypi_0 pypi multiprocess 0.70.15 pypi_0 pypi nbformat 4.2.0 pypi_0 pypi ncurses 6.4 h6a678d5_0 nest-asyncio 1.5.6 py310h06a4308_0 networkx 3.1 pypi_0 pypi numpy 1.25.2 pypi_0 pypi nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi nvidia-cuda-cupti-cu11 11.7.101 pypi_0 pypi nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi nvidia-curand-cu11 10.2.10.91 pypi_0 pypi nvidia-cusolver-cu11 11.4.0.1 pypi_0 pypi nvidia-cusparse-cu11 11.7.4.91 pypi_0 pypi nvidia-nccl-cu11 2.14.3 pypi_0 pypi nvidia-nvtx-cu11 11.7.91 pypi_0 pypi omegaconf 2.3.0 pypi_0 pypi openssl 1.1.1v h7f8727e_0 packaging 23.0 py310h06a4308_0 pandas 2.0.3 pypi_0 pypi parso 0.8.3 pyhd3eb1b0_0 pathtools 0.1.2 pypi_0 pypi peft 0.4.0 pypi_0 pypi pexpect 4.8.0 pyhd3eb1b0_3 pickleshare 0.7.5 pyhd3eb1b0_1003 pillow 10.0.0 pypi_0 pypi pip 23.2.1 py310h06a4308_0 platformdirs 2.5.2 py310h06a4308_0 plotly 5.16.1 pypi_0 pypi prompt-toolkit 3.0.36 py310h06a4308_0 protobuf 4.24.0 pypi_0 pypi psutil 5.9.0 py310h5eee18b_0 ptyprocess 
0.7.0 pyhd3eb1b0_2 pure_eval 0.2.2 pyhd3eb1b0_0 pyarrow 12.0.1 pypi_0 pypi pygments 2.15.1 py310h06a4308_1 pyparsing 3.0.9 pypi_0 pypi python 3.10.0 h12debd9_5 python-dateutil 2.8.2 pyhd3eb1b0_0 pytorch-lightning 2.0.6 pypi_0 pypi pytz 2023.3 pypi_0 pypi pyyaml 6.0.1 pypi_0 pypi pyzmq 25.1.0 py310h6a678d5_0 readline 8.2 h5eee18b_0 referencing 0.30.2 pypi_0 pypi regex 2023.8.8 pypi_0 pypi requests 2.31.0 pypi_0 pypi rpds-py 0.9.2 pypi_0 pypi safetensors 0.3.2 pypi_0 pypi scipy 1.11.1 pypi_0 pypi sentencepiece 0.1.99 pypi_0 pypi sentry-sdk 1.29.2 pypi_0 pypi setproctitle 1.3.2 pypi_0 pypi setuptools 68.0.0 py310h06a4308_0 six 1.16.0 pyhd3eb1b0_1 smmap 5.0.0 pypi_0 pypi sqlite 3.41.2 h5eee18b_0 stack_data 0.2.0 pyhd3eb1b0_0 sympy 1.12 pypi_0 pypi tenacity 8.2.3 pypi_0 pypi termcolor 2.3.0 pypi_0 pypi tk 8.6.12 h1ccaba5_0 tokenizers 0.13.3 pypi_0 pypi torch 2.0.1 pypi_0 pypi torchmetrics 1.0.3 pypi_0 pypi tornado 6.3.2 py310h5eee18b_0 tqdm 4.66.1 pypi_0 pypi traitlets 5.7.1 py310h06a4308_0 transformers 4.31.0 pypi_0 pypi triton 2.0.0 pypi_0 pypi typing-extensions 4.7.1 pypi_0 pypi tzdata 2023.3 pypi_0 pypi urllib3 2.0.4 pypi_0 pypi wandb 0.15.8 pypi_0 pypi wcwidth 0.2.5 pyhd3eb1b0_0 wheel 0.38.4 py310h06a4308_0 widgetsnbextension 4.0.5 py310h06a4308_0 xxhash 3.3.0 pypi_0 pypi xz 5.4.2 h5eee18b_0 yarl 1.9.2 pypi_0 pypi zeromq 4.3.4 h2531618_0 zlib 1.2.13 h5eee18b_0 active environment : None user config file : /home/alexey/.condarc populated config files : conda version : 23.1.0 conda-build version : 3.22.0 python version : 3.9.13.final.0 virtual packages : __archspec=1=x86_64 __cuda=12.0=0 __glibc=2.35=0 __linux=5.19.0=0 __unix=0=0 base environment : /opt/anaconda/anaconda3 (read only) conda av data dir : /opt/anaconda/anaconda3/etc/conda conda av metadata url : None channel URLs : https://repo.anaconda.com/pkgs/main/linux-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/linux-64 https://repo.anaconda.com/pkgs/r/noarch package cache : /opt/anaconda/anaconda3/pkgs /home/alexey/.conda/pkgs envs directories : /home/alexey/.conda/envs /opt/anaconda/anaconda3/envs platform : linux-64 user-agent : conda/23.1.0 requests/2.31.0 CPython/3.9.13 Linux/5.19.0-46-generic ubuntu/22.04.2 glibc/2.35 UID:GID : 1009:1009 netrc file : /home/alexey/.netrc offline mode : False ```
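For reference, the behavior the report asks for amounts to an upfront existence check (a trivial sketch; the fix that landed in #6155 is the real implementation):

```python
import os

data_file = "/home/alexey/unreal_file.json"  # the path from the report
if not os.path.isfile(data_file):
    raise FileNotFoundError(f"Unable to find '{data_file}'")
```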
{ "+1": 0, "-1": 0, "confused": 1, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6183/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6183/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5785
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5785/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5785/comments
https://api.github.com/repos/huggingface/datasets/issues/5785/events
https://github.com/huggingface/datasets/issues/5785
1,680,956,964
I_kwDODunzps5kMV4k
5,785
Unsupported data files raise TypeError: 'NoneType' object is not iterable
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2023-04-24T10:38:03Z"
"2023-04-27T12:57:30Z"
"2023-04-27T12:57:30Z"
MEMBER
null
null
null
Currently, we raise a TypeError for unsupported data files: ``` TypeError: 'NoneType' object is not iterable ``` See: - https://github.com/huggingface/datasets-server/issues/1073 We should give a more informative error message.
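What "more informative" could mean here, as a hypothetical sketch (not the actual patch): validate the resolved data files before iterating them.

```python
def check_data_files(data_files, repo_id):
    # Hypothetical helper: fail with a clear message instead of iterating None
    if data_files is None:
        raise FileNotFoundError(
            f"No supported data files found in dataset repository '{repo_id}'."
        )
    return data_files
```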
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5785/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5785/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4299/comments
https://api.github.com/repos/huggingface/datasets/issues/4299/events
https://github.com/huggingface/datasets/pull/4299
1,230,236,782
PR_kwDODunzps43h5RP
4,299
Remove manual download from imagenet-1k
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the reviews @apsdehal and @lhoestq! As suggested by @lhoestq, I'll separate the train/val/test splits, apply the validation split fixes and shuffle the images files to simplify the script and make streaming faster.", "@apsdehal I dismissed your review as it's no longer relevant after the data files changes suggested by @lhoestq. " ]
"2022-05-09T20:49:18Z"
"2022-05-25T14:54:59Z"
"2022-05-25T14:46:16Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4299.diff", "html_url": "https://github.com/huggingface/datasets/pull/4299", "merged_at": "2022-05-25T14:46:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/4299.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4299" }
Remove the manual download code from `imagenet-1k` to make it a regular dataset.
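For illustration, loading should then work like any other Hub dataset (a sketch; the dataset is gated, so authenticating with the Hub may still be required):

```python
from datasets import load_dataset

# No manual_dir / manual download step anymore after this change.
ds = load_dataset("imagenet-1k", split="train", streaming=True)
print(next(iter(ds)))
```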
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4299/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4299/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3293/comments
https://api.github.com/repos/huggingface/datasets/issues/3293/events
https://github.com/huggingface/datasets/pull/3293
1,057,004,431
PR_kwDODunzps4uslLN
3,293
Pin version exclusion for Markdown
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-11-18T06:56:01Z"
"2021-11-18T10:28:05Z"
"2021-11-18T10:28:04Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3293.diff", "html_url": "https://github.com/huggingface/datasets/pull/3293", "merged_at": "2021-11-18T10:28:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/3293.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3293" }
Since Markdown version 3.3.5 has a bug, it is safer to exclude it explicitly, in case users already have it installed in their environment. Related to #3289, #3286.
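For reference, a version exclusion pin is typically expressed like this (illustrative only — the exact requirements list and its location in `setup.py` may differ):

```python
# Exclusion pin: accept any Markdown version except the buggy 3.3.5 release.
TESTS_REQUIRE = [
    "Markdown!=3.3.5",  # 3.3.5 has a known bug, so exclude it explicitly
]
```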
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3293/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3293/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6015/comments
https://api.github.com/repos/huggingface/datasets/issues/6015/events
https://github.com/huggingface/datasets/pull/6015
1,798,807,893
PR_kwDODunzps5VMhgB
6,015
Add metadata ui screenshot in docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007633 / 0.011353 (-0.003720) | 0.004666 / 0.011008 (-0.006343) | 0.097768 / 0.038508 (0.059260) | 0.085153 / 0.023109 (0.062044) | 0.400315 / 0.275898 (0.124417) | 0.452903 / 0.323480 (0.129423) | 0.006227 / 0.007986 (-0.001759) | 0.003814 / 0.004328 (-0.000515) | 0.074586 / 0.004250 (0.070336) | 0.064295 / 0.037052 (0.027242) | 0.408082 / 0.258489 (0.149593) | 0.446921 / 0.293841 (0.153080) | 0.034593 / 0.128546 (-0.093953) | 0.009191 / 0.075646 (-0.066456) | 0.337099 / 0.419271 (-0.082173) | 0.075320 / 0.043533 (0.031787) | 0.403488 / 0.255139 (0.148349) | 0.435309 / 0.283200 (0.152109) | 0.035675 / 0.141683 (-0.106008) | 1.732642 / 1.452155 (0.280487) | 1.770238 / 1.492716 (0.277522) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235879 / 0.018006 (0.217873) | 0.500330 / 0.000490 (0.499841) | 0.005221 / 0.000200 (0.005021) | 0.000150 / 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032479 / 0.037411 (-0.004933) | 0.095873 / 0.014526 (0.081348) | 0.107118 / 0.176557 (-0.069438) | 0.173809 / 0.737135 (-0.563326) | 0.109832 / 0.296338 (-0.186507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444342 / 0.215209 (0.229133) | 4.459010 / 2.077655 (2.381355) | 
2.209687 / 1.504120 (0.705567) | 2.007556 / 1.541195 (0.466362) | 2.113683 / 1.468490 (0.645193) | 0.544281 / 4.584777 (-4.040496) | 4.037151 / 3.745712 (0.291439) | 4.852644 / 5.269862 (-0.417217) | 3.134126 / 4.565676 (-1.431550) | 0.066815 / 0.424275 (-0.357460) | 0.008836 / 0.007607 (0.001229) | 0.560904 / 0.226044 (0.334859) | 5.302760 / 2.268929 (3.033832) | 2.750182 / 55.444624 (-52.694442) | 2.322595 / 6.876477 (-4.553882) | 2.547486 / 2.142072 (0.405414) | 0.665766 / 4.805227 (-4.139461) | 0.151613 / 6.500664 (-6.349051) | 0.071155 / 0.075469 (-0.004314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.473717 / 1.841788 (-0.368071) | 22.584179 / 8.074308 (14.509871) | 15.888001 / 10.191392 (5.696609) | 0.181073 / 0.680424 (-0.499351) | 0.021395 / 0.534201 (-0.512806) | 0.452693 / 0.579283 (-0.126590) | 0.447709 / 0.434364 (0.013345) | 0.529599 / 0.540337 (-0.010738) | 0.699241 / 1.386936 (-0.687695) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007917 / 0.011353 (-0.003436) | 0.004544 / 0.011008 (-0.006464) | 0.074566 / 0.038508 (0.036058) | 0.087530 / 0.023109 (0.064421) | 0.419753 / 0.275898 (0.143854) | 0.452352 / 0.323480 (0.128872) | 0.005882 / 0.007986 (-0.002104) | 0.003904 / 0.004328 (-0.000425) | 0.073539 / 0.004250 (0.069289) | 0.071320 / 0.037052 (0.034267) | 0.432899 / 0.258489 (0.174409) | 0.470365 / 0.293841 (0.176524) | 0.036198 / 0.128546 (-0.092348) | 0.009342 / 0.075646 (-0.066304) | 0.080970 / 0.419271 (-0.338301) | 0.058769 / 0.043533 (0.015236) | 0.413397 / 0.255139 (0.158258) | 0.448362 / 0.283200 (0.165162) | 0.034177 / 0.141683 (-0.107506) | 1.706217 / 1.452155 (0.254063) | 1.776743 / 1.492716 (0.284026) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198779 / 0.018006 (0.180773) | 0.499862 / 0.000490 (0.499372) | 0.003891 / 0.000200 (0.003692) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034671 / 0.037411 (-0.002740) | 0.103165 / 0.014526 (0.088639) | 0.115813 / 0.176557 (-0.060744) | 0.177407 / 0.737135 (-0.559728) | 0.117733 / 0.296338 (-0.178606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.476859 / 0.215209 (0.261650) | 4.823063 / 2.077655 (2.745409) | 2.524133 / 1.504120 (1.020013) | 2.374482 / 1.541195 (0.833288) | 2.518047 / 1.468490 (1.049557) | 0.559131 / 4.584777 (-4.025646) | 4.126213 / 3.745712 (0.380501) | 6.488570 / 5.269862 (1.218708) | 3.816540 / 4.565676 (-0.749137) | 0.064742 / 0.424275 (-0.359533) | 0.008476 / 0.007607 (0.000869) | 0.576432 / 0.226044 (0.350387) | 5.835133 / 2.268929 (3.566205) | 3.237833 / 55.444624 (-52.206791) | 2.726596 / 6.876477 (-4.149880) | 2.799212 / 2.142072 (0.657139) | 0.661628 / 4.805227 (-4.143599) | 0.153997 / 6.500664 (-6.346667) | 0.070621 / 0.075469 (-0.004848) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.648505 / 1.841788 (-0.193282) | 22.454019 / 8.074308 (14.379711) | 16.077098 / 10.191392 (5.885706) | 0.217875 / 0.680424 (-0.462549) | 0.021285 / 0.534201 (-0.512916) | 0.459837 / 0.579283 (-0.119446) | 0.476211 / 0.434364 (0.041847) | 0.525903 / 0.540337 (-0.014435) | 0.717224 / 1.386936 (-0.669712) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b767e9c3ef30f9da30d47cfcaccf9a7ac2500c43 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008929 / 0.011353 (-0.002424) | 0.004188 / 0.011008 (-0.006820) | 0.097030 / 0.038508 (0.058522) | 0.071363 / 0.023109 (0.048254) | 0.333116 / 0.275898 (0.057218) | 0.371272 / 0.323480 (0.047792) | 0.006430 / 0.007986 (-0.001555) | 0.003689 / 0.004328 (-0.000639) | 0.068666 / 0.004250 (0.064416) | 0.057562 / 0.037052 (0.020510) | 0.347208 / 0.258489 (0.088719) | 0.390514 / 0.293841 (0.096673) | 0.050560 / 0.128546 (-0.077987) | 0.013372 / 0.075646 (-0.062275) | 0.311345 / 0.419271 (-0.107927) | 0.068990 / 0.043533 (0.025457) | 0.363026 / 0.255139 (0.107887) | 0.379793 / 0.283200 (0.096593) | 0.036891 / 0.141683 (-0.104792) | 1.583481 / 1.452155 (0.131327) | 1.688727 / 1.492716 (0.196011) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209777 / 0.018006 (0.191771) | 0.507267 / 0.000490 (0.506777) | 0.003637 / 0.000200 (0.003438) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029309 / 0.037411 (-0.008102) | 0.088386 / 0.014526 (0.073861) | 0.104974 / 0.176557 (-0.071582) | 0.171999 / 0.737135 (-0.565137) | 0.110797 / 0.296338 (-0.185542) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.543465 / 0.215209 (0.328256) | 5.361491 / 2.077655 (3.283836) | 2.348712 / 1.504120 (0.844592) | 2.012527 / 1.541195 (0.471332) | 2.069776 / 1.468490 (0.601286) | 0.874262 / 4.584777 (-3.710515) | 4.877317 / 3.745712 (1.131605) | 5.327459 / 5.269862 (0.057597) | 3.336823 / 4.565676 (-1.228854) | 0.100456 / 0.424275 (-0.323819) | 0.008503 / 0.007607 (0.000895) | 0.692009 / 0.226044 (0.465965) | 6.912731 / 2.268929 (4.643802) | 3.110548 / 55.444624 (-52.334076) | 2.443665 / 6.876477 (-4.432811) | 2.528713 / 2.142072 (0.386641) | 1.076358 / 4.805227 (-3.728869) | 0.220352 / 6.500664 (-6.280312) | 0.080293 / 0.075469 (0.004824) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.538444 / 1.841788 (-0.303344) | 21.121221 / 8.074308 (13.046913) | 19.810609 / 10.191392 (9.619216) | 0.225406 / 0.680424 (-0.455018) | 0.026652 / 0.534201 (-0.507549) | 0.430372 / 0.579283 (-0.148911) | 0.510722 / 0.434364 (0.076358) | 0.514347 / 0.540337 
(-0.025991) | 0.686050 / 1.386936 (-0.700886) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007675 / 0.011353 (-0.003678) | 0.004542 / 0.011008 (-0.006466) | 0.069655 / 0.038508 (0.031147) | 0.069338 / 0.023109 (0.046229) | 0.436505 / 0.275898 (0.160607) | 0.481806 / 0.323480 (0.158326) | 0.005315 / 0.007986 (-0.002670) | 0.004455 / 0.004328 (0.000127) | 0.072674 / 0.004250 (0.068424) | 0.058088 / 0.037052 (0.021035) | 0.445825 / 0.258489 (0.187336) | 0.501706 / 0.293841 (0.207865) | 0.047123 / 0.128546 (-0.081424) | 0.012943 / 0.075646 (-0.062703) | 0.093491 / 0.419271 (-0.325780) | 0.060169 / 0.043533 (0.016637) | 0.436530 / 0.255139 (0.181391) | 0.466873 / 0.283200 (0.183674) | 0.040453 / 0.141683 (-0.101230) | 1.586438 / 1.452155 (0.134283) | 1.671081 / 1.492716 (0.178365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180607 / 0.018006 (0.162601) | 0.520145 / 0.000490 (0.519655) | 0.004824 / 0.000200 (0.004624) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029308 / 0.037411 (-0.008103) | 0.093652 / 0.014526 (0.079126) | 0.102332 / 0.176557 (-0.074224) | 0.162414 / 0.737135 (-0.574721) | 0.098017 / 0.296338 (-0.198321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.583949 / 0.215209 (0.368740) | 6.035191 / 2.077655 (3.957536) | 2.801274 / 1.504120 (1.297155) | 2.566150 / 1.541195 (1.024955) | 2.437122 / 
1.468490 (0.968632) | 0.865038 / 4.584777 (-3.719739) | 4.841727 / 3.745712 (1.096015) | 4.683919 / 5.269862 (-0.585943) | 2.941240 / 4.565676 (-1.624437) | 0.104888 / 0.424275 (-0.319387) | 0.007747 / 0.007607 (0.000140) | 0.780041 / 0.226044 (0.553997) | 7.771314 / 2.268929 (5.502385) | 3.680814 / 55.444624 (-51.763811) | 2.938472 / 6.876477 (-3.938004) | 2.981740 / 2.142072 (0.839668) | 1.065411 / 4.805227 (-3.739816) | 0.222265 / 6.500664 (-6.278399) | 0.082428 / 0.075469 (0.006959) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.626774 / 1.841788 (-0.215014) | 21.618284 / 8.074308 (13.543976) | 20.596743 / 10.191392 (10.405351) | 0.240969 / 0.680424 (-0.439454) | 0.025630 / 0.534201 (-0.508570) | 0.481981 / 0.579283 (-0.097302) | 0.547914 / 0.434364 (0.113550) | 0.522296 / 0.540337 (-0.018041) | 0.729174 / 1.386936 (-0.657762) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b8067c0262073891180869f700ebef5ac3dc5cce \"CML watermark\")\n" ]
"2023-07-11T12:16:29Z"
"2023-07-11T16:07:28Z"
"2023-07-11T15:56:46Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6015.diff", "html_url": "https://github.com/huggingface/datasets/pull/6015", "merged_at": "2023-07-11T15:56:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/6015.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6015" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6015/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6015/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6104/comments
https://api.github.com/repos/huggingface/datasets/issues/6104/events
https://github.com/huggingface/datasets/issues/6104
1,828,959,107
I_kwDODunzps5tA7OD
6,104
HF Datasets data access is extremely slow even when in memory
{ "avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4", "events_url": "https://api.github.com/users/NightMachinery/events{/privacy}", "followers_url": "https://api.github.com/users/NightMachinery/followers", "following_url": "https://api.github.com/users/NightMachinery/following{/other_user}", "gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NightMachinery", "id": 36224762, "login": "NightMachinery", "node_id": "MDQ6VXNlcjM2MjI0NzYy", "organizations_url": "https://api.github.com/users/NightMachinery/orgs", "received_events_url": "https://api.github.com/users/NightMachinery/received_events", "repos_url": "https://api.github.com/users/NightMachinery/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions", "type": "User", "url": "https://api.github.com/users/NightMachinery" }
[]
open
false
null
[]
null
[ "Possibly related:\r\n- https://github.com/pytorch/pytorch/issues/22462" ]
"2023-07-31T11:12:19Z"
"2023-08-01T11:22:43Z"
null
CONTRIBUTOR
null
null
null
### Describe the bug Doing a simple `some_dataset[:10]` can take more than a minute. Profiling it: <img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab"> `some_dataset` is completely in memory with no disk cache. This is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long? It's faster to produce the dataset from scratch than to access it from HF Datasets! ### Steps to reproduce the bug I have uploaded the dataset that causes this problem [here](https://huggingface.co/datasets/NightMachinery/hf_datasets_bug1). ```python #!/usr/bin/env python3 import sys import time import torch from datasets import load_dataset def main(dataset_name): # Start the timer start_time = time.time() # Load the dataset from Hugging Face Hub dataset = load_dataset(dataset_name) # Set the dataset format as torch dataset.set_format(type="torch") # Perform an identity map dataset = dataset.map(lambda example: example, batched=True, batch_size=20) # End the timer end_time = time.time() # Print the time taken print(f"Time taken: {end_time - start_time:.2f} seconds") if __name__ == "__main__": dataset_name = "NightMachinery/hf_datasets_bug1" print(f"dataset_name: {dataset_name}") main(dataset_name) ``` ### Expected behavior _ ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
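As a diagnostic sketch (using the dataset uploaded above; timings will vary by machine), one way to isolate whether the torch formatting path is the bottleneck is to time the same slice with and without `set_format`:

```python
import time
from datasets import load_dataset

ds = load_dataset("NightMachinery/hf_datasets_bug1", split="train")

# Plain python-object access: no _tensorize involved.
t0 = time.time()
_ = ds[:10]
print(f"no format:    {time.time() - t0:.2f}s")

# Torch-formatted access: goes through the formatting/tensorize path.
ds.set_format(type="torch")
t0 = time.time()
_ = ds[:10]
print(f"torch format: {time.time() - t0:.2f}s")
```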
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6104/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6104/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4594
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4594/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4594/comments
https://api.github.com/repos/huggingface/datasets/issues/4594/events
https://github.com/huggingface/datasets/issues/4594
1,288,070,023
I_kwDODunzps5MxmOH
4,594
load_from_disk suggests incorrect fix when used to load DatasetDict
{ "avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4", "events_url": "https://api.github.com/users/dvsth/events{/privacy}", "followers_url": "https://api.github.com/users/dvsth/followers", "following_url": "https://api.github.com/users/dvsth/following{/other_user}", "gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dvsth", "id": 11157811, "login": "dvsth", "node_id": "MDQ6VXNlcjExMTU3ODEx", "organizations_url": "https://api.github.com/users/dvsth/orgs", "received_events_url": "https://api.github.com/users/dvsth/received_events", "repos_url": "https://api.github.com/users/dvsth/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dvsth/subscriptions", "type": "User", "url": "https://api.github.com/users/dvsth" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
"2022-06-29T01:40:01Z"
"2022-06-29T04:03:44Z"
"2022-06-29T04:03:44Z"
NONE
null
null
null
Edit: Please feel free to close this issue. The problem was not the error message but the fact that `DatasetDict.load_from_disk` does not support loading nested splits, i.e. when one of the splits is itself a `DatasetDict`. If nesting splits is an antipattern, perhaps `load_from_disk` could emit a warning indicating that?
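A minimal sketch of the nesting described above (depending on the `datasets` version, either `save_to_disk` or `load_from_disk` rejects this layout, so this is a repro of the failure rather than working code):

```python
from datasets import Dataset, DatasetDict

inner = DatasetDict({
    "a": Dataset.from_dict({"x": [1]}),
    "b": Dataset.from_dict({"x": [2]}),
})
outer = DatasetDict({"train": inner})  # a "split" that is itself a DatasetDict

outer.save_to_disk("nested_dd")
# DatasetDict.load_from_disk("nested_dd") does not round-trip this layout.
```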
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4594/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4594/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/2214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2214/comments
https://api.github.com/repos/huggingface/datasets/issues/2214/events
https://github.com/huggingface/datasets/issues/2214
856,333,657
MDU6SXNzdWU4NTYzMzM2NTc=
2,214
load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
{ "avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4", "events_url": "https://api.github.com/users/nsaphra/events{/privacy}", "followers_url": "https://api.github.com/users/nsaphra/followers", "following_url": "https://api.github.com/users/nsaphra/following{/other_user}", "gists_url": "https://api.github.com/users/nsaphra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nsaphra", "id": 414788, "login": "nsaphra", "node_id": "MDQ6VXNlcjQxNDc4OA==", "organizations_url": "https://api.github.com/users/nsaphra/orgs", "received_events_url": "https://api.github.com/users/nsaphra/received_events", "repos_url": "https://api.github.com/users/nsaphra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nsaphra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nsaphra/subscriptions", "type": "User", "url": "https://api.github.com/users/nsaphra" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi @nsaphra, thanks for reporting.\r\n\r\nThis issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?\r\n```shell\r\npip install -U datasets\r\n```", "There might be a bug in the conda version of `datasets` 1.2.1 where the datasets/metric scripts are downloaded from `master` instead of the `1.2.1` repo.\r\n\r\nYou can try setting the env var `HF_SCRIPTS_VERSION=\"1.2.1\"` as a workaround. Let me know if that helps.", "I just faced the same issue. I was using 1.2.1 from conda and received the same AttributeError complaining about 'add_start_docstrings'. Uninstalling the conda installed datasets and then installing the latest datasets (version 1.5.0) using pip install solved the issue for me. I don't like mixing up conda and pip installs in the same environments but this will have to do for now, until 1.5.0 is made available through conda.", "Yep, seems to have fixed things! The conda package could really do with an update. Thanks!" ]
"2021-04-12T20:26:01Z"
"2021-04-23T15:20:02Z"
"2021-04-23T15:20:02Z"
NONE
null
null
null
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package. ```python >>> from datasets import load_metric >>> metric = load_metric("glue", "sst2") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 502, in load_metric File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 66, in import_main_class File "/ext3/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ns4008/.cache/huggingface/modules/datasets_modules/metrics/glue/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de/glue.py", line 105, in <module> @datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) AttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' ```
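For reference, the workaround suggested in the thread can be applied from Python before importing `datasets` (a sketch; upgrading `datasets` itself is the proper fix):

```python
import os

# Workaround from the thread: the conda build of datasets 1.2.1 fetched
# metric scripts from `master`; pinning the scripts version avoids that.
os.environ["HF_SCRIPTS_VERSION"] = "1.2.1"

from datasets import load_metric

metric = load_metric("glue", "sst2")
```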
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2214/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2214/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3145
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3145/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3145/comments
https://api.github.com/repos/huggingface/datasets/issues/3145/events
https://github.com/huggingface/datasets/issues/3145
1,033,580,009
I_kwDODunzps49my3p
3,145
[when Image type will exist] provide a way to get the data as binary + filename
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
null
[ "@severo, maybe somehow related to this PR ?\r\n- #3129", "@severo I'll keep that in mind.\r\n\r\nYou can track progress on the Image feature in #3163 (still in the early stage). ", "Hi ! As discussed with @severo offline it looks like the dataset viewer already supports reading PIL images, so maybe the dataset viewer doesn't need to disable decoding after all", "Fixed with https://github.com/huggingface/datasets/pull/3163" ]
"2021-10-22T13:23:49Z"
"2021-12-22T11:05:37Z"
"2021-12-22T11:05:36Z"
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image to disk, with the correct filename, and optionally to know its mimetype, in order to serve it on the web. Note: this issue applies exactly the same to the `Audio` type. **Describe the solution you'd like** If a "cell" has the type `Image`, provide a way to get the binary content of the file, and the filename, e.g. as: ```python filename: str data: bytes ``` **Describe alternatives you've considered** A way to write the cell to disk (passing a local directory), and then return the pathname, filename, and mimetype.
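A hypothetical sketch of the requested return shape — this is not an existing `datasets` API, just an illustration of what a web-serving consumer would need:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ImageCell:
    filename: str
    data: bytes
    mimetype: Optional[str] = None  # optional, but useful for serving on the web

def serve(cell: ImageCell) -> Tuple[bytes, str]:
    # What a web server needs: the raw bytes plus a filename to write or serve.
    return cell.data, cell.filename
```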
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3145/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3145/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5490/comments
https://api.github.com/repos/huggingface/datasets/issues/5490/events
https://github.com/huggingface/datasets/pull/5490
1,565,842,327
PR_kwDODunzps5I_nz-
5,490
Do not add index column by default when exporting to CSV
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008581 / 0.011353 (-0.002772) | 0.004519 / 0.011008 (-0.006490) | 0.099721 / 0.038508 (0.061213) | 0.029217 / 0.023109 (0.006107) | 0.298229 / 0.275898 (0.022331) | 0.332605 / 0.323480 (0.009125) | 0.006880 / 0.007986 (-0.001106) | 0.003324 / 0.004328 (-0.001005) | 0.078143 / 0.004250 (0.073892) | 0.034262 / 0.037052 (-0.002790) | 0.304162 / 0.258489 (0.045673) | 0.342351 / 0.293841 (0.048510) | 0.033387 / 0.128546 (-0.095159) | 0.011397 / 0.075646 (-0.064249) | 0.321527 / 0.419271 (-0.097744) | 0.040886 / 0.043533 (-0.002647) | 0.299968 / 0.255139 (0.044829) | 0.322484 / 0.283200 (0.039285) | 0.083832 / 0.141683 (-0.057851) | 1.482241 / 1.452155 (0.030086) | 1.548438 / 1.492716 (0.055721) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191002 / 0.018006 (0.172996) | 0.403423 / 0.000490 (0.402933) | 0.002493 / 0.000200 (0.002293) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023720 / 0.037411 (-0.013691) | 0.100806 / 0.014526 (0.086281) | 0.105314 / 0.176557 (-0.071242) | 0.141490 / 0.737135 (-0.595645) | 0.108695 / 0.296338 (-0.187644) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412250 / 0.215209 (0.197041) | 4.124830 / 2.077655 (2.047175) | 
1.851948 / 1.504120 (0.347828) | 1.651597 / 1.541195 (0.110403) | 1.712486 / 1.468490 (0.243996) | 0.696634 / 4.584777 (-3.888143) | 3.304220 / 3.745712 (-0.441492) | 1.862776 / 5.269862 (-3.407086) | 1.159452 / 4.565676 (-3.406224) | 0.082930 / 0.424275 (-0.341345) | 0.012586 / 0.007607 (0.004979) | 0.524499 / 0.226044 (0.298455) | 5.249235 / 2.268929 (2.980307) | 2.293187 / 55.444624 (-53.151437) | 1.950101 / 6.876477 (-4.926376) | 2.008274 / 2.142072 (-0.133799) | 0.811641 / 4.805227 (-3.993586) | 0.148785 / 6.500664 (-6.351879) | 0.064461 / 0.075469 (-0.011008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.232227 / 1.841788 (-0.609561) | 13.235896 / 8.074308 (5.161588) | 13.837420 / 10.191392 (3.646028) | 0.135586 / 0.680424 (-0.544838) | 0.028935 / 0.534201 (-0.505266) | 0.397064 / 0.579283 (-0.182220) | 0.393814 / 0.434364 (-0.040549) | 0.480450 / 0.540337 (-0.059887) | 0.561159 / 1.386936 (-0.825777) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006696 / 0.011353 (-0.004657) | 0.004528 / 0.011008 (-0.006480) | 0.077335 / 0.038508 (0.038827) | 0.027181 / 0.023109 (0.004072) | 0.345379 / 0.275898 (0.069481) | 0.372544 / 0.323480 (0.049064) | 0.006808 / 0.007986 (-0.001178) | 0.003284 / 0.004328 (-0.001045) | 0.077379 / 0.004250 (0.073129) | 0.039954 / 0.037052 (0.002901) | 0.348094 / 0.258489 (0.089605) | 0.382315 / 0.293841 (0.088474) | 0.031694 / 0.128546 (-0.096852) | 0.011714 / 0.075646 (-0.063933) | 0.086425 / 0.419271 (-0.332846) | 0.041778 / 0.043533 (-0.001754) | 0.342161 / 0.255139 (0.087022) | 0.363798 / 0.283200 (0.080599) | 0.091315 / 0.141683 (-0.050368) | 1.462066 / 1.452155 (0.009912) | 1.541417 / 1.492716 (0.048700) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235840 / 0.018006 (0.217834) | 0.397096 / 0.000490 (0.396606) | 0.004597 / 0.000200 (0.004397) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024296 / 0.037411 (-0.013115) | 0.099167 / 0.014526 (0.084641) | 0.108257 / 0.176557 (-0.068299) | 0.143434 / 0.737135 (-0.593701) | 0.111933 / 0.296338 (-0.184406) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440306 / 0.215209 (0.225096) | 4.374065 / 2.077655 (2.296410) | 2.072653 / 1.504120 (0.568533) | 1.864829 / 1.541195 (0.323635) | 1.927970 / 1.468490 (0.459479) | 0.710118 / 4.584777 (-3.874659) | 3.391216 / 3.745712 (-0.354496) | 1.888847 / 5.269862 (-3.381015) | 1.178740 / 4.565676 (-3.386936) | 0.083950 / 0.424275 (-0.340325) | 0.012567 / 0.007607 (0.004960) | 0.540557 / 0.226044 (0.314513) | 5.437621 / 2.268929 (3.168692) | 2.531165 / 55.444624 (-52.913460) | 2.181450 / 6.876477 (-4.695027) | 2.209108 / 2.142072 (0.067035) | 0.814236 / 4.805227 (-3.990991) | 0.153000 / 6.500664 (-6.347664) | 0.066769 / 0.075469 (-0.008700) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.301057 / 1.841788 (-0.540731) | 14.066786 / 8.074308 (5.992478) | 13.641455 / 10.191392 (3.450063) | 0.138838 / 0.680424 (-0.541586) | 0.016733 / 0.534201 (-0.517468) | 0.391823 / 0.579283 (-0.187460) | 0.390817 / 0.434364 (-0.043547) | 0.487682 / 0.540337 (-0.052656) | 0.581134 / 1.386936 (-0.805802) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b065547654efa0ec633cf373ac1512884c68b2e1 \"CML watermark\")\n" ]
"2023-02-01T10:20:55Z"
"2023-02-09T09:29:08Z"
"2023-02-09T09:22:23Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5490.diff", "html_url": "https://github.com/huggingface/datasets/pull/5490", "merged_at": "2023-02-09T09:22:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/5490.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5490" }
As pointed out by @merveenoyan, the default behavior of `Dataset.to_csv` adds the index as an additional column without a name. This PR changes the default behavior so that the index column is no longer written. To add the index column, you now need to pass `index=True` and also `index_label=<name of the index column>` to name that column. CC: @merveenoyan
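Usage sketch of the new behavior (file names here are placeholders):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

ds.to_csv("out.csv")  # new default: no extra unnamed index column
ds.to_csv("out_with_index.csv", index=True, index_label="idx")  # opt back in
```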
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5490/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5490/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4614/comments
https://api.github.com/repos/huggingface/datasets/issues/4614/events
https://github.com/huggingface/datasets/pull/4614
1,291,218,020
PR_kwDODunzps46ssfw
4,614
Ensure ConcatenationTable.cast uses target_schema metadata
{ "avatar_url": "https://avatars.githubusercontent.com/u/8114067?v=4", "events_url": "https://api.github.com/users/dtuit/events{/privacy}", "followers_url": "https://api.github.com/users/dtuit/followers", "following_url": "https://api.github.com/users/dtuit/following{/other_user}", "gists_url": "https://api.github.com/users/dtuit/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dtuit", "id": 8114067, "login": "dtuit", "node_id": "MDQ6VXNlcjgxMTQwNjc=", "organizations_url": "https://api.github.com/users/dtuit/orgs", "received_events_url": "https://api.github.com/users/dtuit/received_events", "repos_url": "https://api.github.com/users/dtuit/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dtuit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dtuit/subscriptions", "type": "User", "url": "https://api.github.com/users/dtuit" }
[]
closed
false
null
[]
null
[ "Hi @lhoestq, Thanks for the detailed comment. I've tested the suggested approach and can confirm it works for the testcase outlined above! The PR is updated with the changes.", "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-07-01T10:22:08Z"
"2022-07-19T13:48:45Z"
"2022-07-19T13:36:24Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4614.diff", "html_url": "https://github.com/huggingface/datasets/pull/4614", "merged_at": "2022-07-19T13:36:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/4614.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4614" }
Currently, `ConcatenationTable.cast` does not use target_schema metadata when casting subtables. This causes an issue when using `cast_column` and the underlying table is a `ConcatenationTable`. Code example of where the issue arises: ``` from datasets import Dataset, Image column1 = [0, 1] image_paths = ['/images/image1.jpg', '/images/image2.jpg'] ds = Dataset.from_dict({"column1": column1}) ds = ds.add_column("image", image_paths) ds.cast_column("image", Image()) # Fails here ``` Output ``` ... TypeError: Couldn't cast array of type string to {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)} ```
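For completeness, a sketch of the expected behavior once the fix is in (the image paths are placeholders, so decoding only works with real files on disk):

```python
from datasets import Dataset, Image

column1 = [0, 1]
image_paths = ["/images/image1.jpg", "/images/image2.jpg"]  # placeholder paths
ds = Dataset.from_dict({"column1": column1})
ds = ds.add_column("image", image_paths)

ds = ds.cast_column("image", Image())  # succeeds once the fix is in
print(ds.features["image"])            # Image(decode=True, id=None)
```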
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4614/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4614/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1561/comments
https://api.github.com/repos/huggingface/datasets/issues/1561/events
https://github.com/huggingface/datasets/pull/1561
765,831,436
MDExOlB1bGxSZXF1ZXN0NTM5MTAwNjAy
1,561
Lama
{ "avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4", "events_url": "https://api.github.com/users/ontocord/events{/privacy}", "followers_url": "https://api.github.com/users/ontocord/followers", "following_url": "https://api.github.com/users/ontocord/following{/other_user}", "gists_url": "https://api.github.com/users/ontocord/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ontocord", "id": 8900094, "login": "ontocord", "node_id": "MDQ6VXNlcjg5MDAwOTQ=", "organizations_url": "https://api.github.com/users/ontocord/orgs", "received_events_url": "https://api.github.com/users/ontocord/received_events", "repos_url": "https://api.github.com/users/ontocord/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ontocord/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ontocord/subscriptions", "type": "User", "url": "https://api.github.com/users/ontocord" }
[]
closed
false
null
[]
null
[ "Let me know why the pyarrow test is failing. For one of the config \"trex\", I had to load an initial datafile for a dictionary which is used to augment the rest of the datasets. In the dummy data, the dictionary file was truncated so I had to fudge that. I'm not sure if that is the issue.\r\n", "@ontocord it just needs a rerun and it will be good to go.", "THanks @tanmoyio. How do I do a rerun?", "@ontocord contributor can’t rerun it, the maintainers will rerun it, it may take lil bit of time as there are so many PRs left to be reviewed and merged ", "@lhoestq not sure why it is failing. i've made all modifications. ", "merging since the CI is fixed on master" ]
"2020-12-14T03:27:10Z"
"2020-12-28T09:51:47Z"
"2020-12-28T09:51:47Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1561.diff", "html_url": "https://github.com/huggingface/datasets/pull/1561", "merged_at": "2020-12-28T09:51:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/1561.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1561" }
This is the LAMA dataset for probing facts and common-sense knowledge in language models. See https://github.com/facebookresearch/LAMA for more details.
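A usage sketch (assuming the "trex" config mentioned in the thread; other config names may differ):

```python
from datasets import load_dataset

# "trex" is one of the configs discussed in this PR; the available splits
# and fields depend on the config.
lama = load_dataset("lama", "trex", split="train")
print(lama[0])
```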
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1561/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1561/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2561/comments
https://api.github.com/repos/huggingface/datasets/issues/2561/events
https://github.com/huggingface/datasets/issues/2561
932,321,725
MDU6SXNzdWU5MzIzMjE3MjU=
2,561
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
{ "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/apsdehal", "id": 3616806, "login": "apsdehal", "node_id": "MDQ6VXNlcjM2MTY4MDY=", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "repos_url": "https://api.github.com/users/apsdehal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "type": "User", "url": "https://api.github.com/users/apsdehal" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! I just tried to reproduce what you said:\r\n- create a local builder class\r\n- use `load_dataset`\r\n- update the builder class code\r\n- use `load_dataset` again (with or without `ignore_verifications=True`)\r\nAnd it creates a new cache, as expected.\r\n\r\nWhat modifications did you do to your builder's code ?", "Hi @lhoestq. Thanks for your reply. I just did minor modifications for which it should not regenerate cache (for e.g. Adding a print statement). Overall, regardless of cache miss, there should be an explicit option to allow reuse of existing cache if author knows cache shouldn't be affected.", "The cache is based on the hash of the dataset builder's code, so changing the code makes it recompute the cache.\r\n\r\nYou could still rename the cache directory of your previous computation to the new expected cache directory if you want to avoid having to recompute it and if you're sure that it would generate the exact same result.\r\n\r\nThe verifications are data integrity verifications: it checks the checksums of the downloaded files, as well as the size of the generated splits.", "Hi @apsdehal,\r\n\r\nIf you decide to follow @lhoestq's suggestion to rename the cache directory of your previous computation to the new expected cache directory, you can do the following to get the name of the new expected cache directory once #2500 is merged:\r\n```python\r\nfrom datasets import load_dataset_builder\r\ndataset_builder = load_dataset_builder(\"path/to/your/dataset\")\r\nprint(dataset_builder.cache_dir)\r\n```\r\n\r\nThis way, you don't have to recompute the hash of the dataset script yourself each time you modify the script." ]
"2021-06-29T07:43:03Z"
"2022-08-04T11:58:36Z"
"2022-08-04T11:58:36Z"
CONTRIBUTOR
null
null
null
## Describe the bug If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets. ## Steps to reproduce the bug - Create a local dataset builder class - Load the local builder class file using `load_dataset` and let the cache build - Update the file's content - The cache is rebuilt. ## Expected results With `ignore_verifications=True`, `load_dataset` should pick up the existing cache. ## Actual results A new cache is created. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.7 - PyArrow version: 3.0.0
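A sketch of the workaround suggested in the comments above: rename the cache directory of the previous computation to the newly expected one. This assumes the edited script generates exactly the same data, that the target directory does not exist yet, and that the installed `datasets` version exposes `load_dataset_builder`; all paths here are illustrative.

```python
import shutil

from datasets import load_dataset_builder

builder = load_dataset_builder("path/to/your/dataset")  # hypothetical local script
old_cache_dir = "/path/to/previous/cache"               # hypothetical old cache dir
shutil.move(old_cache_dir, builder.cache_dir)           # reuse instead of recomputing
```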
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2561/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2561/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4386
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4386/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4386/comments
https://api.github.com/repos/huggingface/datasets/issues/4386/events
https://github.com/huggingface/datasets/issues/4386
1,243,965,532
I_kwDODunzps5KJWhc
4,386
Bug for wiki_auto_asset_turk from GEM
{ "avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4", "events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}", "followers_url": "https://api.github.com/users/StevenTang1998/followers", "following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}", "gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/StevenTang1998", "id": 37647985, "login": "StevenTang1998", "node_id": "MDQ6VXNlcjM3NjQ3OTg1", "organizations_url": "https://api.github.com/users/StevenTang1998/orgs", "received_events_url": "https://api.github.com/users/StevenTang1998/received_events", "repos_url": "https://api.github.com/users/StevenTang1998/repos", "site_admin": false, "starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions", "type": "User", "url": "https://api.github.com/users/StevenTang1998" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @StevenTang1998.\r\n\r\nI'm looking into it. ", "Hi @StevenTang1998,\r\n\r\nWe have fixed the issue:\r\n- #4389\r\n\r\nThe fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by installing `datasets` from our GitHub repo:\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```", "Thanks for your reply!!\r\nAnd the totto dataset has the same problem. The url should be change to [https://storage.googleapis.com/totto-public/totto_data.zip](https://storage.googleapis.com/totto-public/totto_data.zip).", "Hi again @StevenTang1998,\r\n\r\nI don't see any problem when loading `totto` dataset:\r\n```python\r\nIn [4]: import datasets\r\n ...: ds = datasets.load_dataset(\"totto\")\r\nDownloading builder script: 5.58kB [00:00, 5.33MB/s] \r\nDownloading metadata: 2.78kB [00:00, 2.96MB/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset totto/default (download: 179.03 MiB, generated: 706.59 MiB, post-processed: Unknown size, total: 885.62 MiB) to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2...\r\nDownloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 188M/188M [00:32<00:00, 5.77MB/s]\r\nDataset totto downloaded and prepared to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 147.95it/s]\r\n\r\nIn [5]: ds\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 120761\r\n })\r\n validation: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n test: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n})\r\n```", "Sorry, I didn't express it clearly. It's the totto dataset from gem.\r\ndatasets.load_dataset('gem', 'totto')\r\n", "@StevenTang1998 fixed in:\r\n- #4396", "Thanks!!" ]
"2022-05-21T12:31:30Z"
"2022-05-24T05:55:52Z"
"2022-05-23T10:29:55Z"
NONE
null
null
null
## Describe the bug The script of wiki_auto_asset_turk for GEM may be out of date. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('gem', 'wiki_auto_asset_turk') ``` ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1731, in load_dataset builder_instance.download_and_prepare( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 640, in download_and_prepare self._download_and_prepare( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 1158, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 707, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/tangtianyi/.cache/huggingface/modules/datasets_modules/datasets/gem/982a54473b12c6a6e40d4356e025fb7172a5bb2065e655e2c1af51f2b3cf4ca1/gem.py", line 538, in _split_generators dl_dir = dl_manager.download_and_extract(_URLs[self.config.name]) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 416, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 294, in download downloaded_path_or_paths = map_nested( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 351, in map_nested mapped = [ File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 352, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 288, in _single_map_nested return function(data_struct) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 320, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 234, in cached_path output_path = get_from_cache( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 579, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4386/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4386/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4103/comments
https://api.github.com/repos/huggingface/datasets/issues/4103/events
https://github.com/huggingface/datasets/pull/4103
1,193,987,104
PR_kwDODunzps41s3T4
4,103
Add the `GSM8K` dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4", "events_url": "https://api.github.com/users/jon-tow/events{/privacy}", "followers_url": "https://api.github.com/users/jon-tow/followers", "following_url": "https://api.github.com/users/jon-tow/following{/other_user}", "gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jon-tow", "id": 41410219, "login": "jon-tow", "node_id": "MDQ6VXNlcjQxNDEwMjE5", "organizations_url": "https://api.github.com/users/jon-tow/orgs", "received_events_url": "https://api.github.com/users/jon-tow/received_events", "repos_url": "https://api.github.com/users/jon-tow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions", "type": "User", "url": "https://api.github.com/users/jon-tow" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The CI is failing because it's outdated, but the task tags are updated on `master`, merging :)" ]
"2022-04-06T04:07:52Z"
"2022-04-12T15:38:28Z"
"2022-04-12T10:21:16Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4103.diff", "html_url": "https://github.com/huggingface/datasets/pull/4103", "merged_at": "2022-04-12T10:21:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/4103.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4103" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4103/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4103/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3913/comments
https://api.github.com/repos/huggingface/datasets/issues/3913/events
https://github.com/huggingface/datasets/pull/3913
1,168,723,950
PR_kwDODunzps40afYJ
3,913
Deterministic split order in DatasetDict.map
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3913). All of your documentation changes will be reflected on that endpoint.", "I'm surprised this is needed because the order of the `dict` keys is deterministic as of Python 3.6 (documented in 3.7). Is there a reproducer for this behavior? I wouldn't make this change unless it's absolutely needed because `sorted` modifies the initial order of the keys.", "Indeed this doesn't fix the issue apparently. Actually this is probably because the tokenizer used to process the second split is in a state that has been modified by the first split.\r\n\r\nTherefore after reloading the first split from the cache, then the second split can't be reloaded since the tokenizer hasn't seen the first split (and therefore is considered a different tokenizer)." ]
"2022-03-14T17:58:37Z"
"2023-09-24T09:55:10Z"
"2022-03-15T10:45:15Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3913.diff", "html_url": "https://github.com/huggingface/datasets/pull/3913", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3913.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3913" }
The order in which the splits are processed by `map` is not deterministic in `DatasetDict.map`. This can cause caching issues when the processing function is stateful and sensitive to the order in which examples are processed. Closes https://github.com/huggingface/datasets/issues/3847
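A minimal sketch of the intended behavior, assuming the determinism would come from iterating the splits in sorted order (note the discussion above suggests the real culprit may be tokenizer state rather than key order, and this PR was closed unmerged):

```python
from datasets import DatasetDict

def map_in_sorted_order(dsets: DatasetDict, function, **kwargs) -> DatasetDict:
    # Process splits in a fixed order so a stateful `function` (e.g. a
    # tokenizer whose state changes as it sees data) behaves reproducibly.
    return DatasetDict(
        {split: dsets[split].map(function, **kwargs) for split in sorted(dsets)}
    )
```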
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3913/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3913/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6176/comments
https://api.github.com/repos/huggingface/datasets/issues/6176/events
https://github.com/huggingface/datasets/issues/6176
1,864,436,408
I_kwDODunzps5vIQq4
6,176
how to limit the size of memory mapped file?
{ "avatar_url": "https://avatars.githubusercontent.com/u/47763855?v=4", "events_url": "https://api.github.com/users/williamium3000/events{/privacy}", "followers_url": "https://api.github.com/users/williamium3000/followers", "following_url": "https://api.github.com/users/williamium3000/following{/other_user}", "gists_url": "https://api.github.com/users/williamium3000/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/williamium3000", "id": 47763855, "login": "williamium3000", "node_id": "MDQ6VXNlcjQ3NzYzODU1", "organizations_url": "https://api.github.com/users/williamium3000/orgs", "received_events_url": "https://api.github.com/users/williamium3000/received_events", "repos_url": "https://api.github.com/users/williamium3000/repos", "site_admin": false, "starred_url": "https://api.github.com/users/williamium3000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/williamium3000/subscriptions", "type": "User", "url": "https://api.github.com/users/williamium3000" }
[]
open
false
null
[]
null
[ "Hi! Can you share the error this reproducer throws in your environment? `streaming=True` streams the dataset as it's iterated over without creating a memory-map file.", "The trace of the error. Streaming works but is slower.\r\n```\r\nRoot Cause (first observed failure):\r\n[0]:\r\n time : 2023-08-24_06:06:01\r\n host : compute-126.cm.cluster\r\n rank : 0 (local_rank: 0)\r\n exitcode : 1 (pid: 48442)\r\n error_file: /tmp/torchelastic_4fqzcuuz/none_rx2470jl/attempt_0/0/error.json\r\n traceback : Traceback (most recent call last):\r\n File \"/users/yli7/.conda/envs/pytorch2.0/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 346, in wrapper\r\n return f(*args, **kwargs)\r\n File \"Pretrain.py\", line 214, in main\r\n pair_dataset, c4_dataset = create_dataset('pretrain', config)\r\n File \"/dcs05/qiao/data/william/project/DaVinci/dataset/__init__.py\", line 109, in create_dataset\r\n c4_dataset = load_dataset(\"c4\", \"en\", split=\"train\").to_iterable_dataset(num_shards=1024).map(pre_caption_huggingface)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/load.py\", line 1810, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1145, in as_dataset\r\n datasets = map_nested(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 436, in map_nested\r\n return function(data_struct)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1175, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1246, in _as_dataset\r\n dataset_kwargs = ArrowReader(cache_dir, self.info).read(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 244, in read\r\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 265, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 200, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 336, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 357, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 1059, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 65, in _memory_mapped_arrow_table_from_file\r\n opened_stream = _memory_mapped_record_batch_reader_from_file(filename)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 50, in _memory_mapped_record_batch_reader_from_file\r\n memory_mapped_stream = pa.memory_map(filename)\r\n File \"pyarrow/io.pxi\", line 1009, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 956, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 144, in 
pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 115, in pyarrow.lib.check_status\r\n OSError: Memory mapping file failed: Cannot allocate memory\r\n```", "This issue has previously been reported here: https://github.com/huggingface/datasets/issues/5710. Reporting it in the Arrow repo makes more sense as they have control over memory mapping.\r\n\r\nPS: this is the API to reduce the size of the generated Arrow file:\r\n```python\r\nfrom datasets import load_dataset\r\nbuilder = load_dataset_builder(\"c4\", \"en\")\r\nbuilder.download_and_prepare(max_shard_size=\"5GB\")\r\ndataset = builder.as_dataset()\r\n```\r\n\r\nIf this resolves the issue, we can consider exposing `max_shard_size` in `load_dataset`.", "Thanks for the response. The problem seems not resolved. The memory I allocated to the environment is 64G and the following error still occurs\r\n`Python 3.8.16 (default, Jun 12 2023, 18:09:05) \r\n[GCC 11.2.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from datasets import load_dataset_builder\r\n>>> builder = load_dataset_builder(\"c4\", \"en\")\r\n>>> builder.download_and_prepare(max_shard_size=\"5GB\")\r\nFound cached dataset c4 (/users/yli7/.cache/huggingface/datasets/c4/en/0.0.0/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01)\r\n>>> dataset = builder.as_dataset()\r\n 0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1145, in as_dataset\r\n datasets = map_nested(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 444, in map_nested\r\n mapped = [\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 445, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 347, in _single_map_nested\r\n return function(data_struct)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1175, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1246, in _as_dataset\r\n dataset_kwargs = ArrowReader(cache_dir, self.info).read(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 244, in read\r\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 265, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 200, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 336, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 357, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 1059, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File 
\"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 65, in _memory_mapped_arrow_table_from_file\r\n opened_stream = _memory_mapped_record_batch_reader_from_file(filename)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 50, in _memory_mapped_record_batch_reader_from_file\r\n memory_mapped_stream = pa.memory_map(filename)\r\n File \"pyarrow/io.pxi\", line 1009, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 956, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 115, in pyarrow.lib.check_status\r\nOSError: Memory mapping file failed: Cannot allocate memory`", "Have you solved the problem?", "Nope. Streaming works but is slower." ]
"2023-08-24T05:33:45Z"
"2023-10-11T06:00:10Z"
null
NONE
null
null
null
### Describe the bug Hugging Face datasets use memory-mapped files to map large datasets into memory for fast access. However, it seems like the library will occupy all the memory for memory-mapped files, which creates a troublesome situation: our cluster allocates only a small portion of memory to me (once it's over the limit, memory cannot be allocated), yet when the dataset checks the total memory, all of the machine's memory is taken into account, which makes the dataset try to allocate more memory than allowed. So is there a way to explicitly limit the size of the memory-mapped file? ### Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> dataset = load_dataset("c4", "en", streaming=True) ``` ### Expected behavior In a normal environment, this does not cause any problem. However, when the system allocates only a portion of the memory to the program and the dataset checks the total memory, all of it is taken into account, which makes the dataset try to allocate more memory than allowed. ### Environment info Linux cluster with SGE (Sun Grid Engine)
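For reference, a sketch of the shard-size workaround proposed in the maintainer comment above; it reduces the size of each generated Arrow file, though per the follow-up comments it did not resolve the reporter's allocation error:

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("c4", "en")
builder.download_and_prepare(max_shard_size="5GB")  # smaller memory-mapped shards
dataset = builder.as_dataset()
```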
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6176/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6176/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3435
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3435/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3435/comments
https://api.github.com/repos/huggingface/datasets/issues/3435/events
https://github.com/huggingface/datasets/pull/3435
1,081,043,756
PR_kwDODunzps4v4_-0
3,435
Improve Wikipedia Loading Script
{ "avatar_url": "https://avatars.githubusercontent.com/u/45494522?v=4", "events_url": "https://api.github.com/users/geohci/events{/privacy}", "followers_url": "https://api.github.com/users/geohci/followers", "following_url": "https://api.github.com/users/geohci/following{/other_user}", "gists_url": "https://api.github.com/users/geohci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/geohci", "id": 45494522, "login": "geohci", "node_id": "MDQ6VXNlcjQ1NDk0NTIy", "organizations_url": "https://api.github.com/users/geohci/orgs", "received_events_url": "https://api.github.com/users/geohci/received_events", "repos_url": "https://api.github.com/users/geohci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/geohci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/geohci/subscriptions", "type": "User", "url": "https://api.github.com/users/geohci" }
[]
closed
false
null
[]
null
[ "I wanted to flag a change from since we discussed this: I initially wrote a function for using the Wikimedia APIs to collect namespace aliases, but decided that adding in more http requests to the script wasn't a great idea so instead used that code to build a static list that I just added directly to the code.\r\n\r\nAlso, an FYI that python library dependencies weren't working on my local end so I wasn't able to directly test the code. I tested a copy with the problematic elements stripped (beam etc.) that worked fine, but someone with a working local copy may want to test just to make sure I didn't accidentally break anything.", "Also, while I would argue more strongly for some of the changes in this code, they are five distinct changes so not so hard to remove one or two if other folks think they aren't worth the overhead etc.", "I also add a comment by @geohci in the Issue page:\r\n> See https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikipedia%20Processing.ipynb for more implementation details / some data around the overhead induced by adding the extra preprocessing steps (stripping link prefixes and magic words)", "Hi ! Thanks a lot, this is very cool ! Note that unfortunately if we change the processing right now, users won't be able to load the \"big\" languages like english anymore, because it requires an Apache Beam runtime to process them. Some Wikipedia dumps have been processed by Hugging Face so that users don't need to run Apache Beam stuff.\r\n\r\nTherefore, we can merge this change after we have processed dumps using this new processing, and host them on the Hugging Face google storage.\r\n\r\nI think we can take care of this and let you know once this is ready ? What do you think @albertvillanova ?\r\n\r\nThis is also an opportunity to have the latest dumps ready, the current ones are from 2020", "Related PR on updating to the latest dates: https://github.com/huggingface/datasets/pull/3612", "@lhoestq if the additional processing steps are validated, we could go on generating the processed datasets for the big languages.\r\n\r\nThe only thing before doing that is that we should also validate other change (so that we include it also in the processed datasets):\r\n- #3398 ", "> @lhoestq if the additional processing steps are validated, we could go on generating the processed datasets for the big languages.\r\n\r\nCool ! Looking forward to it :)\r\n\r\n> The only thing before doing that is that we should also validate other change (so that we include it also in the processed datasets):\r\n> \r\n> https://github.com/huggingface/datasets/issues/3398\r\n\r\nSounds good ! We can definitely add the URL as asked by the Wikipedia to provide credits to the authors.", "@geohci I do not have push rights to this PR. See: [Enabling repository maintainer permissions on existing pull requests](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests).\r\n\r\nI would like to merge the master branch so that all tests pass. Once done, I will be able approve this PR.", "> @geohci I do not have push rights to this PR. 
See: [Enabling repository maintainer permissions on existing pull requests](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests).\r\n> \r\n> I would like to merge the master branch so that all tests pass. Once done, I will be able approve this PR.\r\n\r\n@albertvillanova the `Allow edits by maintainers` box was already checked (what your instructions indicated) and indicates `If checked, users with write access to huggingface/datasets can add new commits to your wikipedia-updates branch. You can always change this setting later.` so you should have permissions already. If there's something else I'm missing or can do, please let me know. If it's not easy to resolve, I am plenty comfortable with you creating a new PR with these changes under your account too." ]
"2021-12-15T13:30:06Z"
"2022-03-04T08:16:00Z"
"2022-03-04T08:16:00Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3435.diff", "html_url": "https://github.com/huggingface/datasets/pull/3435", "merged_at": "2022-03-04T08:16:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/3435.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3435" }
* More structured approach to detecting redirects * Remove redundant template filter code (covered by strip_code) * Add language-specific lists of additional media namespace aliases for filtering * Add language-specific lists of category namespace aliases for the new link-text cleaning step * Remove magic words (parser directives like __TOC__ that occasionally occur in text; see the sketch below) Fix #3400 With support from @albertvillanova CC @yjernite
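A hedged illustration of the magic-word removal step referenced in the list above; the regular expression is an assumption, and the PR's actual implementation may differ:

```python
import re

# MediaWiki "magic words" such as __TOC__ or __NOTOC__ are
# double-underscored uppercase tokens.
MAGIC_WORD_RE = re.compile(r"__[A-Z]+__")

def remove_magic_words(text: str) -> str:
    return MAGIC_WORD_RE.sub("", text)

print(remove_magic_words("Intro __TOC__ Body __NOTOC__"))  # "Intro  Body "
```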
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3435/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3435/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1633/comments
https://api.github.com/repos/huggingface/datasets/issues/1633/events
https://github.com/huggingface/datasets/issues/1633
774,422,603
MDU6SXNzdWU3NzQ0MjI2MDM=
1,633
social_i_qa wrong format of labels
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost" }
[]
closed
false
null
[]
null
[ "@lhoestq, should I raise a PR for this? Just a minor change while reading labels text file", "Sure feel free to open a PR thanks !" ]
"2020-12-24T13:11:54Z"
"2020-12-30T17:18:49Z"
"2020-12-30T17:18:49Z"
NONE
null
null
null
Hi, there is an extra "\n" in the labels of the social_i_qa dataset. No big deal, but I was wondering if you could remove it to make it consistent: the label is currently '1\n', not '1'. Thanks ``` >>> import datasets >>> from datasets import load_dataset >>> dataset = load_dataset( ... 'social_i_qa') cahce dir /julia/cache/datasets Downloading: 4.72kB [00:00, 3.52MB/s] cahce dir /julia/cache/datasets Downloading: 2.19kB [00:00, 1.81MB/s] Using custom data configuration default Reusing dataset social_i_qa (/julia/datasets/social_i_qa/default/0.1.0/4a4190cc2d2482d43416c2167c0c5dccdd769d4482e84893614bd069e5c3ba06) >>> dataset['train'][0] {'answerA': 'like attending', 'answerB': 'like staying home', 'answerC': 'a good friend to have', 'context': 'Cameron decided to have a barbecue and gathered her friends together.', 'label': '1\n', 'question': 'How would Others feel as a result?'} ```
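The fix agreed on in the comments above is presumably a one-line change when reading the labels text file; a sketch under that assumption (the file name is hypothetical):

```python
with open("dev-labels.lst") as f:          # hypothetical labels file
    labels = [line.strip() for line in f]  # drop the trailing "\n"
```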
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1633/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1633/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4439/comments
https://api.github.com/repos/huggingface/datasets/issues/4439/events
https://github.com/huggingface/datasets/issues/4439
1,258,434,111
I_kwDODunzps5LAi4_
4,439
TIMIT won't load after manual download: Errors about files that don't exist
{ "avatar_url": "https://avatars.githubusercontent.com/u/13925685?v=4", "events_url": "https://api.github.com/users/drscotthawley/events{/privacy}", "followers_url": "https://api.github.com/users/drscotthawley/followers", "following_url": "https://api.github.com/users/drscotthawley/following{/other_user}", "gists_url": "https://api.github.com/users/drscotthawley/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/drscotthawley", "id": 13925685, "login": "drscotthawley", "node_id": "MDQ6VXNlcjEzOTI1Njg1", "organizations_url": "https://api.github.com/users/drscotthawley/orgs", "received_events_url": "https://api.github.com/users/drscotthawley/received_events", "repos_url": "https://api.github.com/users/drscotthawley/repos", "site_admin": false, "starred_url": "https://api.github.com/users/drscotthawley/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drscotthawley/subscriptions", "type": "User", "url": "https://api.github.com/users/drscotthawley" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "To have some context, please see:\r\n- #4145\r\n\r\nPlease, also note that we have recently made some fixes to the script, which are in our GitHub master branch but not yet released:\r\n- #4422\r\n- #4425 \r\n- #4436", "Thanks Albert! I'll try pulling `datasets` from the git repo instead of PyPI, and/or just wait for the next release.\r\n", "I'm closing this issue then. Please, feel free to reopen it again if the problem persists." ]
"2022-06-02T16:35:56Z"
"2022-06-03T08:44:17Z"
"2022-06-03T08:44:16Z"
NONE
null
null
null
## Describe the bug I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to the UPenn page for manual download. (UPenn apparently wants $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both cases the HF dataloader fails; it is looking for files that don't exist anywhere in the dataset: it is looking for files with lower-case letters like "**test*" (all the filenames in both my copies are uppercase) and certain file extensions that exclude the .DOC files which are provided in TIMIT: ## Steps to reproduce the bug ```python data = load_dataset('timit_asr', 'clean')['train'] ``` ## Expected results The dataset should load with no errors. ## Actual results This error message: ``` File "/home/ubuntu/envs/data2vec/lib/python3.9/site-packages/datasets/data_files.py", line 201, in resolve_patterns_locally_or_by_urls raise FileNotFoundError(error_msg) FileNotFoundError: Unable to resolve any data file that matches '['**test*', '**eval*']' at /home/ubuntu/datasets/timit with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` But this is a strange sort of error: why is it looking for lower-case file names when all the TIMIT dataset filenames are uppercase? Why does it exclude .DOC files when the only parts of the TIMIT data set with "TEST" in them have ".DOC" extensions? ...I wonder, how was anyone able to get this to work in the first place? The files in the dataset look like the following: ``` │ PHONCODE.DOC │ PROMPTS.TXT │ SPKRINFO.TXT │ SPKRSENT.TXT │ TESTSET.DOC ``` ...so why are these being excluded by the dataset loader? ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.27 - Python version: 3.9.9 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
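A small illustration of the case-sensitivity point raised above. `fnmatch` is used here as a stand-in for the actual pattern resolver in `datasets`, which works differently, but the effect on POSIX systems is the same:

```python
import fnmatch

files = ["TESTSET.DOC", "PROMPTS.TXT", "SPKRINFO.TXT"]
print(fnmatch.filter(files, "*test*"))  # [] -- fnmatch is case-sensitive on POSIX
print(fnmatch.filter(files, "*TEST*"))  # ['TESTSET.DOC']
```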
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4439/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4439/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3382
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3382/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3382/comments
https://api.github.com/repos/huggingface/datasets/issues/3382/events
https://github.com/huggingface/datasets/pull/3382
1,071,293,299
PR_kwDODunzps4vZT2K
3,382
#3337 Add typing overloads to Dataset.__getitem__ for mypy
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" }
[]
closed
false
null
[]
null
[ "Locally the `make quality` passes with the same dependencies. I would suggest upgrading flake8. (I can take care of it in another PR)\r\ncc @lhoestq ", "Thank you for fixing flake8! I think we are ready to merge then. " ]
"2021-12-04T20:54:49Z"
"2021-12-14T10:28:55Z"
"2021-12-14T10:28:55Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3382.diff", "html_url": "https://github.com/huggingface/datasets/pull/3382", "merged_at": "2021-12-14T10:28:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/3382.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3382" }
Add typing overloads to Dataset.__getitem__ for mypy. Fixes #3337 **Iterable** `Iterable` from `collections` cannot be parameterized, so you can't write `Iterable[int]`, for example. `typing` has a generic version that builds upon the one from `collections`. **Flake8** I had to add `# noqa: F811`; this is a bug in Flake8. `datasets` uses flake8==3.7.9, which was released in October 2019. If I update flake8 (to 4.0.1), I no longer get these errors, but I did not want to make the update without your approval. (It also triggers other errors, like no args in f-strings.)
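An illustrative sketch of the overload pattern (and the `# noqa: F811` workaround) described above; the names and signatures are simplified and are not the PR's merged code:

```python
from typing import Any, Dict, List, Union, overload

class Dataset:
    @overload
    def __getitem__(self, key: Union[int, slice, List[int]]) -> Dict[str, Any]: ...  # noqa: F811
    @overload
    def __getitem__(self, key: str) -> List[Any]: ...  # noqa: F811
    def __getitem__(self, key):  # noqa: F811
        # Single runtime implementation; the overloads above only
        # inform type checkers such as mypy.
        raise NotImplementedError
```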
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3382/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3382/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5949/comments
https://api.github.com/repos/huggingface/datasets/issues/5949/events
https://github.com/huggingface/datasets/pull/5949
1,754,843,717
PR_kwDODunzps5S4oPC
5,949
Replace metadata utils with `huggingface_hub`'s RepoCard API
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006635 / 0.011353 (-0.004718) | 0.004439 / 0.011008 (-0.006570) | 0.107831 / 0.038508 (0.069323) | 0.035664 / 0.023109 (0.012555) | 0.393733 / 0.275898 (0.117835) | 0.418336 / 0.323480 (0.094856) | 0.005739 / 0.007986 (-0.002247) | 0.005737 / 0.004328 (0.001408) | 0.079820 / 0.004250 (0.075569) | 0.045402 / 0.037052 (0.008349) | 0.396108 / 0.258489 (0.137619) | 0.422951 / 0.293841 (0.129110) | 0.030506 / 0.128546 (-0.098040) | 0.009785 / 0.075646 (-0.065861) | 0.375302 / 0.419271 (-0.043969) | 0.054355 / 0.043533 (0.010823) | 0.399652 / 0.255139 (0.144513) | 0.410825 / 0.283200 (0.127625) | 0.109238 / 0.141683 (-0.032445) | 1.687532 / 1.452155 (0.235378) | 1.736829 / 1.492716 (0.244113) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226514 / 0.018006 (0.208508) | 0.487010 / 0.000490 (0.486520) | 0.006436 / 0.000200 (0.006236) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029097 / 0.037411 (-0.008315) | 0.122979 / 0.014526 (0.108453) | 0.129454 / 0.176557 (-0.047103) | 0.194006 / 0.737135 (-0.543129) | 0.137968 / 0.296338 (-0.158370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.466425 / 0.215209 (0.251216) | 4.627307 / 2.077655 (2.549652) | 2.108840 
[…the remainder of this automated CML benchmark comment and six further bot comments in the same template (for commits 6a578121…, 6a98ff43…, 2b6cc63b…, 2591cd45…, 1b525c19…, f4a5ea6a… and 6f3f38d0…): each posts the same collapsible new-vs-old timing tables for benchmark_array_xd.json, benchmark_getitem_100B.json, benchmark_indices_mapping.json, benchmark_iterating.json and benchmark_map_filter.json under PyArrow==8.0.0 and PyArrow==latest, and closes with a cml.dev watermark image.] ]
"2023-06-13T13:03:19Z"
"2023-06-27T16:47:51Z"
"2023-06-27T16:38:32Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5949.diff", "html_url": "https://github.com/huggingface/datasets/pull/5949", "merged_at": "2023-06-27T16:38:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/5949.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5949" }
Use `huggingface_hub`'s RepoCard API instead of `DatasetMetadata` for modifying the card's YAML, and deprecate `datasets.utils.metadata` and `datasets.utils.readme`. After removing these modules, we can also delete `datasets.utils.resources` since the moon landing repo now stores its own version of these resources for the metadata UI. PS: this change requires bumping `huggingface_hub` to 0.13.0 (Transformers requires 0.14.0, so should be ok)
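For orientation, the RepoCard workflow the PR body above refers to looks roughly like the sketch below. This is a minimal, hypothetical example, not code from the PR itself: it assumes `huggingface_hub>=0.13.0`, a token with write access, and a placeholder repo id `my-user/my-dataset`.

```python
# Sketch: editing a dataset card's YAML with huggingface_hub's RepoCard API,
# the mechanism this PR adopts in place of the deprecated `datasets.utils.metadata`.
# "my-user/my-dataset" is a placeholder repo id; push_to_hub needs a write token.
from huggingface_hub import DatasetCard

card = DatasetCard.load("my-user/my-dataset")  # fetches and parses the repo's README.md
card.data.license = "mit"                      # card.data exposes the YAML front matter
card.push_to_hub("my-user/my-dataset")         # uploads the updated README.md

```

Delegating the YAML handling to `huggingface_hub` this way is what lets `datasets` drop its own metadata/readme validation modules, consistent with the deprecations listed in the PR body.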
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5949/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5949/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2690/comments
https://api.github.com/repos/huggingface/datasets/issues/2690/events
https://github.com/huggingface/datasets/pull/2690
949,574,500
MDExOlB1bGxSZXF1ZXN0Njk0MjU5MDc1
2,690
Docs details
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
[ "Thanks for all the comments and for the corrections in the docs !\r\n\r\nAbout all the points you mentioned:\r\n\r\n> * the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https://huggingface.co/docs/datasets/installation.html + a one-liner that installs all the requirements / alternatively a requirements.txt file)\r\n\r\nYes good idea\r\n\r\n> * \"If you’d like to play with the examples, you must install it from source.\" in https://huggingface.co/docs/datasets/installation.html: it's not clear to me what this means (what are these \"examples\"?)\r\n\r\nIt refers to examples scripts inside the git repository of the library, see the `examples` folder in the `transformers` repo.\r\nWe don't have examples yet in the git repo of `datasets` as in transformers. So currently there are no examples. Maybe we can just remove this sentence from the docs for now\r\n\r\n> * in https://huggingface.co/docs/datasets/loading_datasets.html: \"or AWS bucket if it’s not already stored in the library\". It's the only place in the doc (aside from the docstring https://huggingface.co/docs/datasets/package_reference/loading_methods.html?highlight=aws bucket#datasets.list_datasets) where the \"AWS bucket\" is mentioned. It's not easy to understand what this means. Maybe explain more, and link to https://s3.amazonaws.com/datasets.huggingface.co and/or https://huggingface.co/docs/datasets/filesystems.html.\r\n\r\nThis is outdated and must be replaced by\r\n```\r\nor from the Hugging Face Hub if it’s not already stored in the library\r\n```\r\n\r\n> * example in https://huggingface.co/docs/datasets/loading_datasets.html#manually-downloading-files is obsoleted by [Enable auto-download for PAN-X / Wikiann domain in XTREME #2326](https://github.com/huggingface/datasets/pull/2326). Also: see [xtreme / pan-x cannot be downloaded #2691](https://github.com/huggingface/datasets/issues/2691) for a bug on this specific dataset.\r\n\r\nWe can replace the `XTREME` `PANX` dataste by `matinf` instead for example\r\n\r\n> * in https://huggingface.co/docs/datasets/loading_datasets.html#manually-downloading-files the doc says \"After you’ve downloaded the files, you can point to the folder hosting them locally with the data_dir argument as follows:\", but the following example does not show how to use `data_dir`\r\n\r\nLet's add `data_dir=\"path/to/your/downloaded/data\"` for example\r\n\r\n> * in https://huggingface.co/docs/datasets/loading_datasets.html#csv-files, it would be nice to have an URL to the csv loader reference (but I'm not sure there is one in the API reference). This comment applies in many places in the doc: I would want the API reference to contain doc for all the code/functions/classes... and I would want a lot more links inside the doc pointing to the API entries.\r\n\r\nCurrently there's no documentation for the CSV loader config. Maybe we can add the docstrings to the `CsvConfig` class to explain the parameters and how it works, and then redirect to the doc of this class in this section of the documentation.\r\n\r\n> * in the API reference (docstrings) I would prefer \"SOURCE\" to link to github instead of a copy of the code inside the docs site (eg. 
https://github.com/huggingface/datasets/blob/master/src/datasets/load.py#L711 instead of https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset)\r\n\r\nThis is the same as in `transformers`, not sure if this is a big issue\r\n\r\n> * it seems like not all the API is exposed in the doc. For example, there is no doc for [`disable_progress_bar`](https://github.com/huggingface/datasets/search?q=disable_progress_bar), see https://huggingface.co/docs/datasets/search.html?q=disable_progress_bar, even if the code contains docstrings. Does it mean that the function is not officially supported? (otherwise, maybe it also deserves a mention in https://huggingface.co/docs/datasets/package_reference/logging_methods.html)\r\n\r\nThe function `disable_progress_bar` should definitely be in the docs, thanks. We can add it to the logging methods\r\n\r\n> * in https://huggingface.co/docs/datasets/loading_datasets.html?highlight=most%20efficient%20format%20have%20json%20files%20consisting%20multiple%20json%20objects#json-files, \"The most efficient format is to have JSON files consisting of multiple JSON objects, one per line, representing individual data rows:\", maybe link to https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON and give it a name (\"line-delimited JSON\"? \"JSON Lines\" as in https://huggingface.co/docs/datasets/processing.html#exporting-a-dataset-to-csv-json-parquet-or-to-python-objects ?)\r\n\r\nYes good idea !\r\n\r\n> * in https://huggingface.co/docs/datasets/loading_datasets.html, for the local files sections, it would be nice to provide sample csv / json / text files to download, so that it's easier for the reader to try to load them (instead: they won't try)\r\n\r\nSure why not. Moreover the csv loader now supports remote files so you could just run the code pass an an URL to the sample csv file.\r\n\r\n> * the doc explains how to shard a dataset, but does not explain why and when a dataset should be sharded (I have no idea... for [parallelizing](https://huggingface.co/docs/datasets/processing.html#multiprocessing)?). It does neither give an idea of the number of shards a dataset typically should have and why.\r\n\r\nThis can be used for distributed processing or just to use a percentage of the data. We can definitely give example of use cases\r\n\r\n> * the code example in https://huggingface.co/docs/datasets/processing.html#mapping-in-a-distributed-setting does not work, because `training_args` has not been defined before in the doc.\r\n\r\n`training_args` comes from `transformers`, it's a practical way to define all your arguments to train a model. Maybe we can just import it from `transformers` and use it with the default values\r\n\r\n" ]
"2021-07-21T10:43:14Z"
"2021-07-27T18:40:54Z"
"2021-07-27T18:40:54Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2690.diff", "html_url": "https://github.com/huggingface/datasets/pull/2690", "merged_at": "2021-07-27T18:40:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/2690.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2690" }
Some comments here: - the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https://huggingface.co/docs/datasets/installation.html + a one-liner that installs all the requirements / alternatively a requirements.txt file) - "If you’d like to play with the examples, you must install it from source." in https://huggingface.co/docs/datasets/installation.html: it's not clear to me what this means (what are these "examples"?) - in https://huggingface.co/docs/datasets/loading_datasets.html: "or AWS bucket if it’s not already stored in the library". It's the only place in the doc (aside from the docstring https://huggingface.co/docs/datasets/package_reference/loading_methods.html?highlight=aws bucket#datasets.list_datasets) where the "AWS bucket" is mentioned. It's not easy to understand what this means. Maybe explain more, and link to https://s3.amazonaws.com/datasets.huggingface.co and/or https://huggingface.co/docs/datasets/filesystems.html. - example in https://huggingface.co/docs/datasets/loading_datasets.html#manually-downloading-files is obsoleted by https://github.com/huggingface/datasets/pull/2326. Also: see https://github.com/huggingface/datasets/issues/2691 for a bug on this specific dataset. - in https://huggingface.co/docs/datasets/loading_datasets.html#manually-downloading-files the doc says "After you’ve downloaded the files, you can point to the folder hosting them locally with the data_dir argument as follows:", but the following example does not show how to use `data_dir` - in https://huggingface.co/docs/datasets/loading_datasets.html#csv-files, it would be nice to have an URL to the csv loader reference (but I'm not sure there is one in the API reference). This comment applies in many places in the doc: I would want the API reference to contain doc for all the code/functions/classes... and I would want a lot more links inside the doc pointing to the API entries. - in the API reference (docstrings) I would prefer "SOURCE" to link to github instead of a copy of the code inside the docs site (eg. https://github.com/huggingface/datasets/blob/master/src/datasets/load.py#L711 instead of https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset) - it seems like not all the API is exposed in the doc. For example, there is no doc for [`disable_progress_bar`](https://github.com/huggingface/datasets/search?q=disable_progress_bar), see https://huggingface.co/docs/datasets/search.html?q=disable_progress_bar, even if the code contains docstrings. Does it mean that the function is not officially supported? (otherwise, maybe it also deserves a mention in https://huggingface.co/docs/datasets/package_reference/logging_methods.html) - in https://huggingface.co/docs/datasets/loading_datasets.html?highlight=most%20efficient%20format%20have%20json%20files%20consisting%20multiple%20json%20objects#json-files, "The most efficient format is to have JSON files consisting of multiple JSON objects, one per line, representing individual data rows:", maybe link to https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON and give it a name ("line-delimited JSON"? "JSON Lines" as in https://huggingface.co/docs/datasets/processing.html#exporting-a-dataset-to-csv-json-parquet-or-to-python-objects ?) 
- in https://huggingface.co/docs/datasets/loading_datasets.html, for the local files sections, it would be nice to provide sample csv / json / text files to download, so that it's easier for the reader to try to load them (instead: they won't try) - the doc explains how to shard a dataset, but does not explain why and when a dataset should be sharded (I have no idea... for [parallelizing](https://huggingface.co/docs/datasets/processing.html#multiprocessing)?). It does neither give an idea of the number of shards a dataset typically should have and why. - the code example in https://huggingface.co/docs/datasets/processing.html#mapping-in-a-distributed-setting does not work, because `training_args` has not been defined before in the doc.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2690/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2690/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4198/comments
https://api.github.com/repos/huggingface/datasets/issues/4198/events
https://github.com/huggingface/datasets/issues/4198
1,211,456,559
I_kwDODunzps5INVwv
4,198
There is no dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1625647?v=4", "events_url": "https://api.github.com/users/wilfoderek/events{/privacy}", "followers_url": "https://api.github.com/users/wilfoderek/followers", "following_url": "https://api.github.com/users/wilfoderek/following{/other_user}", "gists_url": "https://api.github.com/users/wilfoderek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wilfoderek", "id": 1625647, "login": "wilfoderek", "node_id": "MDQ6VXNlcjE2MjU2NDc=", "organizations_url": "https://api.github.com/users/wilfoderek/orgs", "received_events_url": "https://api.github.com/users/wilfoderek/received_events", "repos_url": "https://api.github.com/users/wilfoderek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wilfoderek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wilfoderek/subscriptions", "type": "User", "url": "https://api.github.com/users/wilfoderek" }
[]
closed
false
null
[]
null
[]
"2022-04-21T19:19:26Z"
"2022-05-03T11:29:05Z"
"2022-04-22T06:12:25Z"
NONE
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset? Yes-No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4198/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4198/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3158/comments
https://api.github.com/repos/huggingface/datasets/issues/3158/events
https://github.com/huggingface/datasets/pull/3158
1,035,158,070
PR_kwDODunzps4toGpe
3,158
Fix string encoding for Value type
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "That was fast! \r\n" ]
"2021-10-25T13:44:13Z"
"2021-10-25T14:12:06Z"
"2021-10-25T14:12:05Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3158.diff", "html_url": "https://github.com/huggingface/datasets/pull/3158", "merged_at": "2021-10-25T14:12:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/3158.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3158" }
Some metrics have `string` features, but currently this fails if users pass integers instead. Indeed, the feature encoding that handles the conversion of the user's objects to the right Python type is missing a case for `string`, while it already works as expected for integers, floats and booleans. Here is example code that didn't work previously, but works with this fix: ```python import datasets # Note that 'id' is an integer while the SQuAD metric uses strings predictions = [{'prediction_text': '1976', 'id': 5}] references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': 5}] squad_metric = datasets.load_metric("squad") squad_metric.add_batch(predictions=predictions, references=references) results = squad_metric.compute() # {'exact_match': 100.0, 'f1': 100.0} ``` cc @sgugger @philschmid
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3158/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3158/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3795
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3795/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3795/comments
https://api.github.com/repos/huggingface/datasets/issues/3795/events
https://github.com/huggingface/datasets/issues/3795
1,153,261,281
I_kwDODunzps5EvV7h
3,795
can not flatten natural_questions dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4", "events_url": "https://api.github.com/users/Hannibal046/events{/privacy}", "followers_url": "https://api.github.com/users/Hannibal046/followers", "following_url": "https://api.github.com/users/Hannibal046/following{/other_user}", "gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hannibal046", "id": 38466901, "login": "Hannibal046", "node_id": "MDQ6VXNlcjM4NDY2OTAx", "organizations_url": "https://api.github.com/users/Hannibal046/orgs", "received_events_url": "https://api.github.com/users/Hannibal046/received_events", "repos_url": "https://api.github.com/users/Hannibal046/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions", "type": "User", "url": "https://api.github.com/users/Hannibal046" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "same issue. downgrade it to a lower version.", "Thanks for reporting, I'll take a look tomorrow :)" ]
"2022-02-27T13:57:40Z"
"2022-03-21T14:36:12Z"
"2022-03-21T14:36:12Z"
NONE
null
null
null
## Describe the bug After downloading the natural_questions dataset, the dataset cannot be flattened, given that `annotations` contains both `long_answer` and `short_answer`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('natural_questions',cache_dir = 'data/dataset_cache_dir') dataset['train'].flatten() ``` ## Expected results a dataset with `long_answer` among its features ## Actual results Traceback (most recent call last): File "temp.py", line 5, in <module> dataset['train'].flatten() File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/fingerprint.py", line 413, in wrapper out = func(self, *args, **kwargs) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1296, in flatten dataset._data = update_metadata_with_features(dataset._data, dataset.features) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in update_metadata_with_features features = Features({col_name: features[col_name] for col_name in table.column_names}) File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in <dictcomp> features = Features({col_name: features[col_name] for col_name in table.column_names}) KeyError: 'annotations.long_answer' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.13 - Platform: MBP - Python version: 3.8 - PyArrow version: 6.0.1
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3795/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3795/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4465/comments
https://api.github.com/repos/huggingface/datasets/issues/4465/events
https://github.com/huggingface/datasets/pull/4465
1,265,754,479
PR_kwDODunzps45X0XY
4,465
Fix bigbench config names
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-06-09T08:06:19Z"
"2022-06-09T14:38:36Z"
"2022-06-09T14:29:19Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4465.diff", "html_url": "https://github.com/huggingface/datasets/pull/4465", "merged_at": "2022-06-09T14:29:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/4465.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4465" }
Fix https://github.com/huggingface/datasets/issues/4462 in the case of bigbench
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4465/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4465/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3983/comments
https://api.github.com/repos/huggingface/datasets/issues/3983/events
https://github.com/huggingface/datasets/issues/3983
1,175,759,412
I_kwDODunzps5GFKo0
3,983
Infinitely attempting lock
{ "avatar_url": "https://avatars.githubusercontent.com/u/11869652?v=4", "events_url": "https://api.github.com/users/jyrr/events{/privacy}", "followers_url": "https://api.github.com/users/jyrr/followers", "following_url": "https://api.github.com/users/jyrr/following{/other_user}", "gists_url": "https://api.github.com/users/jyrr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jyrr", "id": 11869652, "login": "jyrr", "node_id": "MDQ6VXNlcjExODY5NjUy", "organizations_url": "https://api.github.com/users/jyrr/orgs", "received_events_url": "https://api.github.com/users/jyrr/received_events", "repos_url": "https://api.github.com/users/jyrr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jyrr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jyrr/subscriptions", "type": "User", "url": "https://api.github.com/users/jyrr" }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting. We're using `py-filelock` as our locking mechanism.\r\n\r\nCan you try deleting the .lock file mentioned in the logs and try again ? Make sure that no other process is generating the `cnn_dailymail` dataset.\r\n\r\nIf it doesn't work, could you try to set up a lock using the latest version of `py-filelock` and see if it works ?\r\n\r\n```\r\npip install filelock\r\n```\r\nhere is a code example from the `py-filelock` documentation that you can try:\r\n```python\r\nfrom filelock import Timeout, FileLock\r\n\r\nlock = FileLock(\"high_ground.txt.lock\")\r\nwith lock:\r\n with open(\"high_ground.txt\", \"a\") as f:\r\n f.write(\"You were the chosen one.\")\r\n```" ]
"2022-03-21T18:11:57Z"
"2022-05-06T16:12:18Z"
"2022-05-06T16:12:18Z"
NONE
null
null
null
I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`. Important to note is that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS). ``` %sh python /dbfs/transformers/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /dbfs/transformers/tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --log_level debug \ --cache_dir /dbfs/transformers/cache ``` All goes well until acquiring a lock -- ``` 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... ``` and so on. I imagine this has to do with DBFS -- is there a way to tackle this?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3983/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3983/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4988/comments
https://api.github.com/repos/huggingface/datasets/issues/4988/events
https://github.com/huggingface/datasets/issues/4988
1,376,096,584
I_kwDODunzps5SBZFI
4,988
Add `IterableDataset.from_generator` to the API
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4", "events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}", "followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers", "following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}", "gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hamid-vakilzadeh", "id": 56002455, "login": "hamid-vakilzadeh", "node_id": "MDQ6VXNlcjU2MDAyNDU1", "organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs", "received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events", "repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions", "type": "User", "url": "https://api.github.com/users/hamid-vakilzadeh" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4", "events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}", "followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers", "following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}", "gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hamid-vakilzadeh", "id": 56002455, "login": "hamid-vakilzadeh", "node_id": "MDQ6VXNlcjU2MDAyNDU1", "organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs", "received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events", "repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions", "type": "User", "url": "https://api.github.com/users/hamid-vakilzadeh" } ]
null
[ "#take", "Thanks @hamid-vakilzadeh ! Let us know if you have some questions or if we can help", "Thank you! I certainly will reach out if I need any help." ]
"2022-09-16T15:19:41Z"
"2022-10-05T12:10:49Z"
"2022-10-05T12:10:49Z"
CONTRIBUTOR
null
null
null
We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator. cc @lhoestq
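A hypothetical usage sketch mirroring the existing `Dataset.from_generator`; since this issue only proposes the API, the exact signature below is an assumption:

```python
# Hypothetical: mirrors `Dataset.from_generator`, but yields examples lazily
# instead of materializing an Arrow table. The signature is assumed, not final.
from datasets import IterableDataset

def gen():
    for i in range(3):
        yield {"text": f"example {i}"}

ds = IterableDataset.from_generator(gen)
for example in ds:  # examples are produced on the fly
    print(example)
```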
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4988/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4988/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5904/comments
https://api.github.com/repos/huggingface/datasets/issues/5904/events
https://github.com/huggingface/datasets/pull/5904
1,727,415,626
PR_kwDODunzps5Rbfks
5,904
Validate name parameter in make_file_instructions
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007401 / 0.011353 (-0.003952) | 0.005198 / 0.011008 (-0.005810) | 0.112317 / 0.038508 (0.073809) | 0.038406 / 0.023109 (0.015297) | 0.358008 / 0.275898 (0.082110) | 0.395350 / 0.323480 (0.071870) | 0.006201 / 0.007986 (-0.001785) | 0.004368 / 0.004328 (0.000039) | 0.087718 / 0.004250 (0.083467) | 0.055299 / 0.037052 (0.018247) | 0.350481 / 0.258489 (0.091992) | 0.419876 / 0.293841 (0.126035) | 0.032459 / 0.128546 (-0.096087) | 0.010635 / 0.075646 (-0.065011) | 0.383282 / 0.419271 (-0.035989) | 0.059241 / 0.043533 (0.015708) | 0.365101 / 0.255139 (0.109962) | 0.378144 / 0.283200 (0.094944) | 0.114287 / 0.141683 (-0.027396) | 1.680870 / 1.452155 (0.228715) | 1.788183 / 1.492716 (0.295467) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242919 / 0.018006 (0.224913) | 0.489850 / 0.000490 (0.489360) | 0.011408 / 0.000200 (0.011208) | 0.000444 / 0.000054 (0.000389) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030742 / 0.037411 (-0.006669) | 0.123092 / 0.014526 (0.108566) | 0.138246 / 0.176557 (-0.038311) | 0.207299 / 0.737135 (-0.529836) | 0.142647 / 0.296338 (-0.153691) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472553 / 0.215209 (0.257344) | 4.671763 / 2.077655 (2.594108) | 2.119986 
/ 1.504120 (0.615866) | 1.891851 / 1.541195 (0.350656) | 1.979094 / 1.468490 (0.510604) | 0.617956 / 4.584777 (-3.966821) | 4.969418 / 3.745712 (1.223706) | 4.672083 / 5.269862 (-0.597779) | 2.119049 / 4.565676 (-2.446627) | 0.077466 / 0.424275 (-0.346809) | 0.014434 / 0.007607 (0.006827) | 0.580746 / 0.226044 (0.354701) | 5.805458 / 2.268929 (3.536530) | 2.622498 / 55.444624 (-52.822126) | 2.259499 / 6.876477 (-4.616978) | 2.362078 / 2.142072 (0.220006) | 0.719911 / 4.805227 (-4.085317) | 0.164939 / 6.500664 (-6.335725) | 0.074762 / 0.075469 (-0.000707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.496709 / 1.841788 (-0.345079) | 18.247499 / 8.074308 (10.173191) | 15.397075 / 10.191392 (5.205683) | 0.181163 / 0.680424 (-0.499261) | 0.022604 / 0.534201 (-0.511597) | 0.462791 / 0.579283 (-0.116492) | 0.504473 / 0.434364 (0.070109) | 0.582254 / 0.540337 (0.041917) | 0.673849 / 1.386936 (-0.713087) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007633 / 0.011353 (-0.003720) | 0.004859 / 0.011008 (-0.006149) | 0.091194 / 0.038508 (0.052686) | 0.038255 / 0.023109 (0.015146) | 0.460972 / 0.275898 (0.185074) | 0.470441 / 0.323480 (0.146961) | 0.006482 / 0.007986 (-0.001504) | 0.004500 / 0.004328 (0.000172) | 0.089998 / 0.004250 (0.085748) | 0.055470 / 0.037052 (0.018418) | 0.459188 / 0.258489 (0.200699) | 0.491255 / 0.293841 (0.197414) | 0.032200 / 0.128546 (-0.096346) | 0.010372 / 0.075646 (-0.065274) | 0.097429 / 0.419271 (-0.321843) | 0.052469 / 0.043533 (0.008936) | 0.452492 / 0.255139 (0.197353) | 0.475210 / 0.283200 (0.192010) | 0.116976 / 0.141683 (-0.024707) | 1.752742 / 1.452155 (0.300587) | 1.849535 / 1.492716 (0.356819) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229822 / 0.018006 (0.211816) | 0.472259 / 0.000490 (0.471770) | 0.000455 / 0.000200 (0.000255) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033796 / 0.037411 (-0.003615) | 0.136151 / 0.014526 (0.121625) | 0.144015 / 0.176557 (-0.032542) | 0.199337 / 0.737135 (-0.537798) | 0.150024 / 0.296338 (-0.146315) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522737 / 0.215209 (0.307528) | 5.165223 / 2.077655 (3.087568) | 2.630334 / 1.504120 (1.126214) | 2.392383 / 1.541195 (0.851188) | 2.488966 / 1.468490 (1.020476) | 0.608981 / 4.584777 (-3.975796) | 4.711545 / 3.745712 (0.965833) | 2.121537 / 5.269862 (-3.148325) | 1.205477 / 4.565676 (-3.360199) | 0.078277 / 0.424275 (-0.345998) | 0.014175 / 0.007607 (0.006568) | 0.640720 / 0.226044 (0.414675) | 6.391173 / 2.268929 (4.122245) | 3.265131 / 55.444624 (-52.179493) | 2.939188 / 6.876477 (-3.937289) | 2.919217 / 2.142072 (0.777145) | 0.745095 / 4.805227 (-4.060132) | 0.164065 / 6.500664 (-6.336599) | 0.076993 / 0.075469 (0.001524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.539971 / 1.841788 (-0.301817) | 18.597296 / 8.074308 (10.522988) | 16.899330 / 10.191392 (6.707938) | 0.169005 / 0.680424 (-0.511419) | 0.020447 / 0.534201 (-0.513754) | 0.465862 / 0.579283 (-0.113421) | 0.522819 / 0.434364 (0.088455) | 0.547111 / 0.540337 (0.006773) | 0.657777 / 1.386936 (-0.729159) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#56aff9ecb4e565eb95faad525558914648cc22f1 \"CML watermark\")\n" ]
"2023-05-26T11:12:46Z"
"2023-05-31T07:43:32Z"
"2023-05-31T07:34:57Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5904.diff", "html_url": "https://github.com/huggingface/datasets/pull/5904", "merged_at": "2023-05-31T07:34:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/5904.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5904" }
Validate `name` parameter in `make_file_instructions`. This way users get more informative error messages, instead of: ```stacktrace .../huggingface/datasets/src/datasets/arrow_reader.py in make_file_instructions(name, split_infos, instruction, filetype_suffix, prefix_path) 110 name2len = {info.name: info.num_examples for info in split_infos} 111 name2shard_lengths = {info.name: info.shard_lengths for info in split_infos} --> 112 name2filenames = { 113 info.name: filenames_for_dataset_split( 114 path=prefix_path, .../huggingface/datasets/src/datasets/arrow_reader.py in <dictcomp>(.0) 111 name2shard_lengths = {info.name: info.shard_lengths for info in split_infos} 112 name2filenames = { --> 113 info.name: filenames_for_dataset_split( 114 path=prefix_path, 115 dataset_name=name, .../huggingface/datasets/src/datasets/naming.py in filenames_for_dataset_split(path, dataset_name, split, filetype_suffix, shard_lengths) 68 69 def filenames_for_dataset_split(path, dataset_name, split, filetype_suffix=None, shard_lengths=None): ---> 70 prefix = filename_prefix_for_split(dataset_name, split) 71 prefix = os.path.join(path, prefix) 72 .../huggingface/datasets/src/datasets/naming.py in filename_prefix_for_split(name, split) 52 53 def filename_prefix_for_split(name, split): ---> 54 if os.path.basename(name) != name: 55 raise ValueError(f"Should be a dataset name, not a path: {name}") 56 if not re.match(_split_re, split): .../lib/python3.9/posixpath.py in basename(p) 140 def basename(p): 141 """Returns the final component of a pathname""" --> 142 p = os.fspath(p) 143 sep = _get_sep(p) 144 i = p.rfind(sep) + 1 TypeError: expected str, bytes or os.PathLike object, not NoneType ``` Related to #5895.
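A sketch of the kind of early guard this PR describes, assuming the goal is simply to fail fast with a readable message; the exact check and wording in the merged diff may differ:

```python
# Sketch of an early type check at the top of `make_file_instructions`,
# so a `name=None` fails with a clear message instead of the deep
# `os.path.basename` TypeError shown in the traceback above.
def make_file_instructions(name, split_infos, instruction, filetype_suffix=None, prefix_path=None):
    if not isinstance(name, str):
        raise TypeError(f"Expected str 'name', but got: {name!r}")
    # ... original file-instruction building logic continues here ...
```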
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5904/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5904/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3465/comments
https://api.github.com/repos/huggingface/datasets/issues/3465/events
https://github.com/huggingface/datasets/issues/3465
1,085,400,432
I_kwDODunzps5AseVw
3,465
Unable to load 'cnn_dailymail' dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42352729?v=4", "events_url": "https://api.github.com/users/talha1503/events{/privacy}", "followers_url": "https://api.github.com/users/talha1503/followers", "following_url": "https://api.github.com/users/talha1503/following{/other_user}", "gists_url": "https://api.github.com/users/talha1503/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/talha1503", "id": 42352729, "login": "talha1503", "node_id": "MDQ6VXNlcjQyMzUyNzI5", "organizations_url": "https://api.github.com/users/talha1503/orgs", "received_events_url": "https://api.github.com/users/talha1503/received_events", "repos_url": "https://api.github.com/users/talha1503/repos", "site_admin": false, "starred_url": "https://api.github.com/users/talha1503/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/talha1503/subscriptions", "type": "User", "url": "https://api.github.com/users/talha1503" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "Hi @talha1503, thanks for reporting.\r\n\r\nIt seems there is an issue with one of the data files hosted at Google Drive:\r\n```\r\nGoogle Drive - Quota exceeded\r\n\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nAs you probably know, Hugging Face does not host the data, and in this case the data owner decided to host their data at Google Drive, which has quota limits.\r\n\r\nIs there anything we could do, @lhoestq @mariosasko?", "This looks related to https://github.com/huggingface/datasets/issues/996", "It seems that [this](https://huggingface.co/datasets/ccdv/cnn_dailymail) copy of the dataset has fixed the problem" ]
"2021-12-21T03:32:21Z"
"2022-02-17T14:13:57Z"
"2022-02-17T14:13:57Z"
NONE
null
null
null
## Describe the bug I wanted to load cnn_dailymail dataset from huggingface datasets on Google Colab, but I am getting an error while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications = True) ``` ## Expected results Expecting to load 'cnn_dailymail' dataset. ## Actual results `NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3465/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3465/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6293/comments
https://api.github.com/repos/huggingface/datasets/issues/6293/events
https://github.com/huggingface/datasets/issues/6293
1,937,238,047
I_kwDODunzps5zd-gf
6,293
Choose columns to stream parquet data in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
"2023-10-11T08:59:36Z"
"2023-10-11T16:21:38Z"
"2023-10-11T16:21:38Z"
MEMBER
null
null
null
Currently, passing `columns=` to `load_dataset` in streaming mode fails: ``` Tried to load parquet data with columns '['link']' with mismatching features '{'caption': Value(dtype='string', id=None), 'image': {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='null', id=None)}, 'link': Value(dtype='string', id=None), 'message_id': Value(dtype='string', id=None), 'timestamp': Value(dtype='string', id=None)}' ``` Similar to https://github.com/huggingface/datasets/issues/6039; reported at https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/3#65259a09617407d4520f4ad9
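A minimal repro sketch, assuming the plain `load_dataset(..., streaming=True, columns=[...])` call from the linked discussion is what triggers the error above:

```python
# Repro sketch, assumed from the linked discussion: selecting a column
# subset while streaming Parquet raises the feature-mismatch error above.
from datasets import load_dataset

ds = load_dataset(
    "laion/dalle-3-dataset",
    split="train",
    streaming=True,
    columns=["link"],  # fails: features of the subset don't match the full schema
)
```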
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6293/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6293/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5011
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5011/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5011/comments
https://api.github.com/repos/huggingface/datasets/issues/5011/events
https://github.com/huggingface/datasets/issues/5011
1,382,609,587
I_kwDODunzps5SaPKz
5,011
Audio: `encode_example` fails with IndexError
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Sorry bug on my part 😅 Closing " ]
"2022-09-22T15:07:27Z"
"2022-09-23T09:05:18Z"
"2022-09-23T09:05:18Z"
CONTRIBUTOR
null
null
null
## Describe the bug Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an `IndexError`. I created this dataset locally and then pushed it to the Hub at the specified URL. Thus, I expect the dataset should work out-of-the-box! Indeed, the dataset viewer functions correctly, and there were no issues when I had the dataset locally. I don't think it's a soundfile bug, as the version matches what worked previously. Update: the bug appeared for me on a GPU; mysteriously, on a TPU I can't reproduce it and the dataset downloads correctly... ## Steps to reproduce the bug ```python from datasets import load_dataset earnings22 = load_dataset("sanchit-gandhi/earnings22_split") ``` ## Expected results ``` >>> earnings22 DatasetDict({ validation: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 2650 }) train: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 52006 }) test: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 2735 }) }) ``` ## Actual results ``` Traceback (most recent call last): File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2764, in _map_single writer.write(example) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 451, in write self.write_examples_on_file() File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 409, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 508, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 231, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 197, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1683, in wrapper return func(array, *args, **kwargs) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1795, in cast_array_to_feature return feature.cast_storage(array) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in cast_storage storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()]) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in <listcomp> storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()]) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 92, in encode_example sf.write(buffer, value["array"], value["sampling_rate"], format="wav") File "/opt/conda/envs/hf/lib/python3.8/site-packages/soundfile.py", line 313, in write channels = data.shape[1] IndexError: tuple index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.3 Plus: - SoundFile version: 0.10.3.post1 cc @lhoestq @polinaeterna
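For reference, one way to trigger the same `soundfile` error in isolation (a sketch of a plausible mechanism — e.g. a scalar slipping in where a waveform array was expected — not confirmed by this report):

```python
import io

import numpy as np
import soundfile as sf

buffer = io.BytesIO()
# A 0-dimensional array has shape (), so soundfile's channel lookup
# `channels = data.shape[1]` raises IndexError: tuple index out of range.
sf.write(buffer, np.array(0.0), 16000, format="wav")
```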
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5011/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5011/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5273/comments
https://api.github.com/repos/huggingface/datasets/issues/5273/events
https://github.com/huggingface/datasets/issues/5273
1,458,018,050
I_kwDODunzps5W55cC
5,273
download_mode="force_redownload" does not refresh cached dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/28439912?v=4", "events_url": "https://api.github.com/users/nomisto/events{/privacy}", "followers_url": "https://api.github.com/users/nomisto/followers", "following_url": "https://api.github.com/users/nomisto/following{/other_user}", "gists_url": "https://api.github.com/users/nomisto/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nomisto", "id": 28439912, "login": "nomisto", "node_id": "MDQ6VXNlcjI4NDM5OTEy", "organizations_url": "https://api.github.com/users/nomisto/orgs", "received_events_url": "https://api.github.com/users/nomisto/received_events", "repos_url": "https://api.github.com/users/nomisto/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nomisto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nomisto/subscriptions", "type": "User", "url": "https://api.github.com/users/nomisto" }
[]
open
false
null
[]
null
[]
"2022-11-21T14:12:43Z"
"2022-11-21T14:13:03Z"
null
NONE
null
null
null
### Describe the bug `load_dataset` does not refresh the dataset when features are imported from an external file, even with `download_mode="force_redownload"`. The bug is not limited to nested fields; however, it is more likely to occur with nested fields. ### Steps to reproduce the bug To reproduce the bug 3 files are needed: `dataset.py` (contains the dataset loading script), `schema.py` (contains the features of the dataset) and `main.py` (to run `load_dataset`) `dataset.py` ```python import datasets from schema import features class NewDataset(datasets.GeneratorBasedBuilder): def _info(self): return datasets.DatasetInfo( features=features ) def _split_generators(self, dl_manager): return [ datasets.SplitGenerator( name=datasets.Split.TRAIN ) ] def _generate_examples(self): data = [ {"id": 0, "nested": []}, {"id": 1, "nested": []} ] for key, example in enumerate(data): yield key, example ``` `schema.py` ```python import datasets features = datasets.Features( { "id": datasets.Value("int32"), "nested": [ {"text": datasets.Value("string")} ] } ) ``` `main.py` ```python import datasets a = datasets.load_dataset("dataset.py") print(a["train"].info.features) ``` Now if `main.py` is run it prints the following correct output: `{'id': Value(dtype='int32', id=None), 'nested': [{'text': Value(dtype='string', id=None)}]}`. However, if, e.g., the name of the "text" feature is changed to something else, e.g. to `schema.py` ```python import datasets features = datasets.Features( { "id": datasets.Value("int32"), "nested": [ {"textfoo": datasets.Value("string")} ] } ) ``` `main.py` still prints `{'id': Value(dtype='int32', id=None), 'nested': [{'text': Value(dtype='string', id=None)}]}`, even if run with `download_mode="force_redownload"`. The only fix is to delete the folder in the cache. ### Expected behavior The cached dataset is deleted and refreshed when using `load_dataset` with `download_mode="force_redownload"`. ### Environment info - `datasets` version: 2.7.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.9 - PyArrow version: 10.0.0 - Pandas version: 1.3.5
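A sketch of the manual cache-deletion workaround mentioned above (assuming the default cache layout; the paths involved are illustrative):

```python
import shutil
from pathlib import Path

import datasets

ds = datasets.load_dataset("dataset.py")
# Remove the cached arrow files so the next load re-runs the loading
# script against the current schema.py instead of reusing the stale cache.
for cache_file in ds["train"].cache_files:
    shutil.rmtree(Path(cache_file["filename"]).parent, ignore_errors=True)
```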
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5273/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5273/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5726/comments
https://api.github.com/repos/huggingface/datasets/issues/5726/events
https://github.com/huggingface/datasets/issues/5726
1,660,944,807
I_kwDODunzps5jAAGn
5,726
Fallback JSON Dataset loading does not load all values when features specified manually
{ "avatar_url": "https://avatars.githubusercontent.com/u/3610788?v=4", "events_url": "https://api.github.com/users/myluki2000/events{/privacy}", "followers_url": "https://api.github.com/users/myluki2000/followers", "following_url": "https://api.github.com/users/myluki2000/following{/other_user}", "gists_url": "https://api.github.com/users/myluki2000/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/myluki2000", "id": 3610788, "login": "myluki2000", "node_id": "MDQ6VXNlcjM2MTA3ODg=", "organizations_url": "https://api.github.com/users/myluki2000/orgs", "received_events_url": "https://api.github.com/users/myluki2000/received_events", "repos_url": "https://api.github.com/users/myluki2000/repos", "site_admin": false, "starred_url": "https://api.github.com/users/myluki2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/myluki2000/subscriptions", "type": "User", "url": "https://api.github.com/users/myluki2000" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @myluki2000.\r\n\r\nI am working on a fix." ]
"2023-04-10T15:22:14Z"
"2023-04-21T06:35:28Z"
"2023-04-21T06:35:28Z"
NONE
null
null
null
### Describe the bug The fallback JSON dataset loader located here: https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L130-L153 does not load the values of features correctly when features are specified manually and not all features have a value in the first entry of the dataset. I'm pretty sure this is not expected behavior. To fix this you'd have to change this line: https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L140 to pass a schema to pyarrow which has the same structure as the features argument passed to the load_dataset() method. ### Steps to reproduce the bug Consider a dataset JSON like this: ```json [ { "instruction": "Do stuff", "output": "Answer stuff" }, { "instruction": "Do stuff2", "input": "Additional Input2", "output": "Answer stuff2" } ] ``` Using this code to load the dataset: ```python from datasets import load_dataset, Features, Value features = { "instruction": Value("string"), "input": Value("string"), "output": Value("string") } features = Features(features) ds = load_dataset("json", data_files="./ds.json", features=features) for row in ds["train"]: print(row) ``` we get a dataset that looks like this: | **Instruction** | **Input** | **Output** | |-----------------|--------------------|-----------------| | "Do stuff" | None | "Answer Stuff" | | "Do stuff2" | None | "Answer Stuff2" | ### Expected behavior The input column should contain values other than None for dataset entries that have the "input" attribute set: | **Instruction** | **Input** | **Output** | |-----------------|--------------------|-----------------| | "Do stuff" | None | "Answer Stuff" | | "Do stuff2" | "Additional Input2" | "Answer Stuff2" | ### Environment info Python 3.10.10 Datasets 2.11.0 Windows 10
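A user-space sketch of the proposed fix (an illustration of the idea, not the actual patch): build the table with an explicit arrow schema derived from the features, so pyarrow does not infer the schema from the first record alone:

```python
import pandas as pd
import pyarrow as pa
from datasets import Features, Value

features = Features({
    "instruction": Value("string"),
    "input": Value("string"),
    "output": Value("string"),
})

df = pd.read_json("./ds.json")
# Passing features.arrow_schema keeps the "input" column as a nullable
# string even though the first record has no "input" key.
table = pa.Table.from_pandas(df, schema=features.arrow_schema, preserve_index=False)
print(table.column("input").to_pylist())  # [None, 'Additional Input2']
```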
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5726/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5726/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2694/comments
https://api.github.com/repos/huggingface/datasets/issues/2694/events
https://github.com/huggingface/datasets/pull/2694
949,844,722
MDExOlB1bGxSZXF1ZXN0Njk0NDg0NTcy
2,694
fix: 🐛 change string format to allow copy/paste to work in bash
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
[]
"2021-07-21T15:30:40Z"
"2021-07-22T10:41:47Z"
"2021-07-22T10:41:47Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2694.diff", "html_url": "https://github.com/huggingface/datasets/pull/2694", "merged_at": "2021-07-22T10:41:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/2694.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2694" }
Before: copy/paste resulted in an error because the square bracket characters `[]` are special characters in bash
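For illustration (assuming the string in question was a pip extras spec — the PR diff isn't shown here), unquoted brackets can be rejected or glob-expanded by the shell:

```bash
# zsh fails with "no matches found"; bash may glob-match files in the directory.
pip install datasets[audio]

# Quoting makes the command safe to copy/paste in any shell:
pip install "datasets[audio]"
```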
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2694/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2694/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1628/comments
https://api.github.com/repos/huggingface/datasets/issues/1628/events
https://github.com/huggingface/datasets/pull/1628
774,091,411
MDExOlB1bGxSZXF1ZXN0NTQ1MDY5NTAy
1,628
made suggested changes to hate-speech-and-offensive-language
{ "avatar_url": "https://avatars.githubusercontent.com/u/15351802?v=4", "events_url": "https://api.github.com/users/MisbahKhan789/events{/privacy}", "followers_url": "https://api.github.com/users/MisbahKhan789/followers", "following_url": "https://api.github.com/users/MisbahKhan789/following{/other_user}", "gists_url": "https://api.github.com/users/MisbahKhan789/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MisbahKhan789", "id": 15351802, "login": "MisbahKhan789", "node_id": "MDQ6VXNlcjE1MzUxODAy", "organizations_url": "https://api.github.com/users/MisbahKhan789/orgs", "received_events_url": "https://api.github.com/users/MisbahKhan789/received_events", "repos_url": "https://api.github.com/users/MisbahKhan789/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MisbahKhan789/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MisbahKhan789/subscriptions", "type": "User", "url": "https://api.github.com/users/MisbahKhan789" }
[]
closed
false
null
[]
null
[]
"2020-12-23T23:25:32Z"
"2020-12-28T10:11:20Z"
"2020-12-28T10:11:20Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1628.diff", "html_url": "https://github.com/huggingface/datasets/pull/1628", "merged_at": "2020-12-28T10:11:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/1628.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1628" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1628/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1628/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4944/comments
https://api.github.com/repos/huggingface/datasets/issues/4944/events
https://github.com/huggingface/datasets/issues/4944
1,364,313,569
I_kwDODunzps5RUcXh
4,944
larger dataset, larger GPU memory in the training phase? Is that correct?
{ "avatar_url": "https://avatars.githubusercontent.com/u/38886373?v=4", "events_url": "https://api.github.com/users/debby1103/events{/privacy}", "followers_url": "https://api.github.com/users/debby1103/followers", "following_url": "https://api.github.com/users/debby1103/following{/other_user}", "gists_url": "https://api.github.com/users/debby1103/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/debby1103", "id": 38886373, "login": "debby1103", "node_id": "MDQ6VXNlcjM4ODg2Mzcz", "organizations_url": "https://api.github.com/users/debby1103/orgs", "received_events_url": "https://api.github.com/users/debby1103/received_events", "repos_url": "https://api.github.com/users/debby1103/repos", "site_admin": false, "starred_url": "https://api.github.com/users/debby1103/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/debby1103/subscriptions", "type": "User", "url": "https://api.github.com/users/debby1103" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "does the trainer save it in GPU? sooo curious... how to fix it", "It's my bad. didn't limit the input length" ]
"2022-09-07T08:46:30Z"
"2022-09-07T12:34:58Z"
"2022-09-07T12:34:58Z"
NONE
null
null
null
```python from datasets import set_caching_enabled set_caching_enabled(False) for ds_name in ["squad", "newsqa", "nqopen", "narrativeqa"]: train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name)) break train_ds = concatenate_datasets([train_ds, train_ds, train_ds, train_ds])  # operation 1 trainer = QuestionAnsweringTrainer(  # huggingface trainer model=model, args=training_args, train_dataset=train_ds, eval_dataset=None, eval_examples=None, answer_column_name=answer_column, dataset_name="squad", tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics if training_args.predict_with_generate else None, ) ``` With operation 1, the GPU memory usage increases from 16 GB to 23 GB.
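Per the follow-up comment, the growth came from unbounded input lengths rather than from dataset size (datasets are memory-mapped, so duplicating one via `concatenate_datasets` should not by itself grow GPU memory). A sketch of the kind of fix implied there (the column name and max length are assumptions):

```python
# Cap the tokenized sequence length so batch tensors keep a fixed size
# regardless of how long individual examples are.
train_ds = train_ds.map(
    lambda batch: tokenizer(batch["question"], truncation=True, max_length=512),
    batched=True,
)
```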
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4944/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4944/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1574
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1574/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1574/comments
https://api.github.com/repos/huggingface/datasets/issues/1574/events
https://github.com/huggingface/datasets/pull/1574
767,015,317
MDExOlB1bGxSZXF1ZXN0NTM5ODY1Mzcy
1,574
Diplomacy detection 3
{ "avatar_url": "https://avatars.githubusercontent.com/u/15351802?v=4", "events_url": "https://api.github.com/users/MisbahKhan789/events{/privacy}", "followers_url": "https://api.github.com/users/MisbahKhan789/followers", "following_url": "https://api.github.com/users/MisbahKhan789/following{/other_user}", "gists_url": "https://api.github.com/users/MisbahKhan789/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MisbahKhan789", "id": 15351802, "login": "MisbahKhan789", "node_id": "MDQ6VXNlcjE1MzUxODAy", "organizations_url": "https://api.github.com/users/MisbahKhan789/orgs", "received_events_url": "https://api.github.com/users/MisbahKhan789/received_events", "repos_url": "https://api.github.com/users/MisbahKhan789/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MisbahKhan789/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MisbahKhan789/subscriptions", "type": "User", "url": "https://api.github.com/users/MisbahKhan789" }
[]
closed
false
null
[]
null
[]
"2020-12-14T23:28:51Z"
"2020-12-14T23:29:32Z"
"2020-12-14T23:29:32Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1574.diff", "html_url": "https://github.com/huggingface/datasets/pull/1574", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1574.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1574" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1574/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1574/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3175/comments
https://api.github.com/repos/huggingface/datasets/issues/3175/events
https://github.com/huggingface/datasets/pull/3175
1,038,945,271
PR_kwDODunzps4t0bXw
3,175
Add docs for `to_tf_dataset`
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "This looks great, thank you!", "Thanks !\r\n\r\nFor some reason the new GIF is 6MB, which is a bit heavy for an image on a website. The previous one was around 200KB though which is perfect. For a good experience we usually expect images to be less than 500KB - otherwise for users with poor connection it takes too long to load. Could you try to reduce its size ? Than I think we can merge :)" ]
"2021-10-28T20:55:22Z"
"2021-11-03T15:39:36Z"
"2021-11-03T10:07:23Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3175.diff", "html_url": "https://github.com/huggingface/datasets/pull/3175", "merged_at": "2021-11-03T10:07:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3175.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3175" }
This PR adds some documentation for new features released in v1.13.0, with the main addition being `to_tf_dataset`: - Show how to use `to_tf_dataset` in the tutorial, and move `set_format(type='tensorflow'...)` to the Process section (let me know if I'm missing anything @Rocketknight1 😅). - Add an example for loading dataset from multiple zipped CSV files to the Load section. - Add an example for removing columns for an `IterableDataset`. - Add graphic for visualizing streaming.
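For context, a rough usage sketch of `to_tf_dataset` (a minimal illustration; exact argument names and defaults have varied across releases, so treat this as an approximation):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]], "y": [0, 1]})
# Produces a batched tf.data.Dataset ready to pass to Keras model.fit()
tf_ds = ds.to_tf_dataset(columns=["x"], label_cols=["y"], batch_size=2, shuffle=True)
for batch in tf_ds:
    print(batch)
```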
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3175/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3175/timeline
null
null
true