url (string, len 61) | repository_url (string, 1 class) | labels_url (string, len 75) | comments_url (string, len 70) | events_url (string, len 68) | html_url (string, len 49-51) | id (int64, 1.21B-1.85B) | node_id (string, len 18-19) | number (int64, 4.19k-6.15k) | title (string, len 1-290) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string, 3 classes) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (string, len 2-33.9k, nullable) | reactions (dict) | timeline_url (string, len 70) | performed_via_github_app (null) | state_reason (string, 3 classes) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5938/comments | https://api.github.com/repos/huggingface/datasets/issues/5938/events | https://github.com/huggingface/datasets/pull/5938 | 1,749,462,851 | PR_kwDODunzps5SmbkI | 5,938 | Make get_from_cache use custom temp filename that is locked | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007241 / 0.011353 (-0.004112) | 0.004574 / 0.011008 (-0.006434) | 0.120481 / 0.038508 (0.081973) | 0.040492 / 0.023109 (0.017383) | 0.391399 / 0.275898 (0.115501) | 0.422844 / 0.323480 (0.099365) | 0.004441 / 0.007986 (-0.003545) | 0.004544 / 0.004328 (0.000216) | 0.089482 / 0.004250 (0.085231) | 0.052939 / 0.037052 (0.015887) | 0.393649 / 0.258489 (0.135160) | 0.433852 / 0.293841 (0.140011) | 0.035882 / 0.128546 (-0.092664) | 0.010172 / 0.075646 (-0.065474) | 0.410331 / 0.419271 (-0.008940) | 0.061481 / 0.043533 (0.017948) | 0.405066 / 0.255139 (0.149927) | 0.417732 / 0.283200 (0.134532) | 0.121647 / 0.141683 (-0.020035) | 1.790624 / 1.452155 (0.338469) | 1.863398 / 1.492716 (0.370681) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250650 / 0.018006 (0.232644) | 0.489044 / 0.000490 (0.488554) | 0.010421 / 0.000200 (0.010222) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030340 / 0.037411 (-0.007071) | 0.128318 / 0.014526 (0.113792) | 0.140463 / 0.176557 (-0.036093) | 0.205762 / 0.737135 (-0.531373) | 0.147996 / 0.296338 (-0.148342) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.493158 / 0.215209 (0.277949) | 4.858346 / 2.077655 (2.780691) | 2.242942 / 1.504120 (0.738822) | 2.010092 / 1.541195 (0.468897) | 2.076765 / 1.468490 
(0.608275) | 0.636669 / 4.584777 (-3.948108) | 4.478027 / 3.745712 (0.732314) | 2.157843 / 5.269862 (-3.112019) | 1.305133 / 4.565676 (-3.260543) | 0.079220 / 0.424275 (-0.345055) | 0.013858 / 0.007607 (0.006251) | 0.604501 / 0.226044 (0.378457) | 5.950071 / 2.268929 (3.681143) | 2.738373 / 55.444624 (-52.706251) | 2.380275 / 6.876477 (-4.496201) | 2.517108 / 2.142072 (0.375035) | 0.772249 / 4.805227 (-4.032979) | 0.169874 / 6.500664 (-6.330790) | 0.078026 / 0.075469 (0.002557) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.450200 / 1.841788 (-0.391588) | 17.810965 / 8.074308 (9.736657) | 15.518998 / 10.191392 (5.327606) | 0.200469 / 0.680424 (-0.479954) | 0.020777 / 0.534201 (-0.513424) | 0.504556 / 0.579283 (-0.074727) | 0.518493 / 0.434364 (0.084129) | 0.615335 / 0.540337 (0.074998) | 0.754065 / 1.386936 (-0.632871) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007224 / 0.011353 (-0.004129) | 0.004663 / 0.011008 (-0.006345) | 0.092151 / 0.038508 (0.053643) | 0.038359 / 0.023109 (0.015250) | 0.486413 / 0.275898 (0.210515) | 0.521596 / 0.323480 (0.198116) | 0.004207 / 0.007986 (-0.003778) | 0.003745 / 0.004328 (-0.000583) | 0.089840 / 0.004250 (0.085589) | 0.050996 / 0.037052 (0.013943) | 0.498090 / 0.258489 (0.239601) | 0.533647 / 0.293841 (0.239806) | 0.035151 / 0.128546 (-0.093395) | 0.010293 / 0.075646 (-0.065354) | 0.099056 / 0.419271 (-0.320215) | 0.057365 / 0.043533 (0.013833) | 0.470652 / 0.255139 (0.215513) | 0.509801 / 0.283200 (0.226602) | 0.115650 / 0.141683 (-0.026033) | 1.810860 / 1.452155 (0.358705) | 1.896775 / 1.492716 (0.404059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261887 / 0.018006 (0.243880) | 0.489919 / 0.000490 (0.489430) | 0.006117 / 0.000200 (0.005917) | 0.000134 / 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035033 / 0.037411 (-0.002378) | 0.141093 / 0.014526 (0.126567) | 0.152613 / 0.176557 (-0.023943) | 0.218351 / 0.737135 (-0.518785) | 0.158366 / 0.296338 (-0.137972) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.542219 / 0.215209 (0.327010) | 5.479358 / 2.077655 (3.401703) | 2.749586 / 1.504120 (1.245466) | 2.537686 / 1.541195 (0.996491) | 2.582351 / 1.468490 (1.113861) | 0.636750 / 4.584777 (-3.948027) | 4.537501 / 3.745712 (0.791789) | 2.141392 / 5.269862 (-3.128469) | 1.279711 / 4.565676 (-3.285965) | 0.079227 / 0.424275 (-0.345048) | 0.014141 / 0.007607 (0.006534) | 0.662070 / 0.226044 (0.436025) | 6.572144 / 2.268929 (4.303215) | 3.321349 / 55.444624 (-52.123275) | 2.928219 / 6.876477 (-3.948258) | 3.002732 / 2.142072 (0.860659) | 0.773808 / 4.805227 (-4.031419) | 0.166017 / 6.500664 (-6.334647) | 0.076424 / 0.075469 (0.000955) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584325 / 1.841788 (-0.257463) | 18.359247 / 8.074308 (10.284938) | 16.977875 / 10.191392 (6.786483) | 0.195381 / 0.680424 (-0.485043) | 0.021048 / 0.534201 (-0.513153) | 0.512237 / 0.579283 (-0.067047) | 0.511435 / 0.434364 (0.077071) | 0.592856 / 0.540337 (0.052518) | 0.711905 / 1.386936 (-0.675031) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d536e37b21a6dd5c122b6d8113994ec50846c5b5 \"CML watermark\")\n"
] | 2023-06-09T09:01:13 | 2023-06-14T13:35:38 | 2023-06-14T13:27:24 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5938",
"html_url": "https://github.com/huggingface/datasets/pull/5938",
"diff_url": "https://github.com/huggingface/datasets/pull/5938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5938.patch",
"merged_at": "2023-06-14T13:27:24"
} | This PR ensures that the temporary filename created while writing to the cache is the same as the one that is locked.
This PR stops using `tempfile` to generate the temporary filename.
Additionally, the behavior is now aligned for both `resume_download=True` and `resume_download=False`.
Refactor `temp_file_manager` so that it uses the filename that is locked:
- Use `cache_path + ".incomplete"`, given that the locked file is `cache_path + ".lock"`
Previously, a `tempfile`-generated name inside `cache_dir` was used, and that file was not locked: although a name collision is very improbable (8 random characters), it is not impossible when a huge number of processes run in parallel.
Maybe related to "Stale file handle" issues caused by `tempfile`:
- [ ] https://huggingface.co/datasets/tapaco/discussions/4
- [ ] https://huggingface.co/datasets/xcsr/discussions/1
- [ ] https://huggingface.co/datasets/covost2/discussions/3
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 116] Stale file handle
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 61, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1219, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1188, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 907, in get_module
dataset_readme_path = self.download_dataset_readme_file()
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 896, in download_dataset_readme_file
return cached_path(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 611, in get_from_cache
http_get(
File "/usr/local/lib/python3.9/tempfile.py", line 496, in __exit__
result = self.file.__exit__(exc, value, tb)
OSError: [Errno 116] Stale file handle
```
- the stale file handle error can be raised when `tempfile` tries to close (when exiting its context manager) a file that has already been closed by another process
- note that `tempfile` filenames are randomly generated but not locked in our code
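A minimal sketch of the scheme, not the actual implementation: `download_to_cache` and `http_get` are stand-in names, and `filelock.FileLock` is assumed for the lock.
```python
import os

from filelock import FileLock


def download_to_cache(url, cache_path):
    # Derive both the lock and the temp filename from cache_path, so the
    # file being written is exactly the one the lock protects.
    lock_path = cache_path + ".lock"
    temp_path = cache_path + ".incomplete"
    with FileLock(lock_path):
        if os.path.exists(cache_path):
            # Another process finished the download while we waited.
            return cache_path
        # Append so an interrupted download can be resumed from the partial file.
        with open(temp_path, "ab") as temp_file:
            http_get(url, temp_file)  # stand-in for the real download helper
        os.rename(temp_path, cache_path)
    return cache_path
```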
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5938/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5938/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5937/comments | https://api.github.com/repos/huggingface/datasets/issues/5937/events | https://github.com/huggingface/datasets/pull/5937 | 1,749,388,597 | PR_kwDODunzps5SmLIs | 5,937 | Avoid parallel redownload in cache | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006157 / 0.011353 (-0.005196) | 0.003790 / 0.011008 (-0.007219) | 0.097889 / 0.038508 (0.059381) | 0.029038 / 0.023109 (0.005929) | 0.306918 / 0.275898 (0.031020) | 0.339637 / 0.323480 (0.016157) | 0.003526 / 0.007986 (-0.004460) | 0.003102 / 0.004328 (-0.001227) | 0.076908 / 0.004250 (0.072658) | 0.039254 / 0.037052 (0.002201) | 0.309197 / 0.258489 (0.050708) | 0.345635 / 0.293841 (0.051794) | 0.027954 / 0.128546 (-0.100593) | 0.008510 / 0.075646 (-0.067136) | 0.314674 / 0.419271 (-0.104598) | 0.057102 / 0.043533 (0.013569) | 0.307495 / 0.255139 (0.052356) | 0.329501 / 0.283200 (0.046302) | 0.098450 / 0.141683 (-0.043233) | 1.480102 / 1.452155 (0.027948) | 1.550554 / 1.492716 (0.057838) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207440 / 0.018006 (0.189434) | 0.426560 / 0.000490 (0.426071) | 0.003250 / 0.000200 (0.003050) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023777 / 0.037411 (-0.013634) | 0.103905 / 0.014526 (0.089379) | 0.108324 / 0.176557 (-0.068233) | 0.167223 / 0.737135 (-0.569913) | 0.113529 / 0.296338 (-0.182810) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426770 / 0.215209 (0.211561) | 4.251806 / 2.077655 (2.174151) | 2.010426 / 1.504120 (0.506306) | 1.858630 / 1.541195 (0.317435) | 1.941318 / 1.468490 
(0.472828) | 0.558056 / 4.584777 (-4.026721) | 3.399107 / 3.745712 (-0.346606) | 1.758386 / 5.269862 (-3.511476) | 1.036305 / 4.565676 (-3.529372) | 0.067094 / 0.424275 (-0.357182) | 0.011167 / 0.007607 (0.003560) | 0.526705 / 0.226044 (0.300661) | 5.250319 / 2.268929 (2.981390) | 2.496723 / 55.444624 (-52.947902) | 2.154013 / 6.876477 (-4.722464) | 2.394724 / 2.142072 (0.252652) | 0.669723 / 4.805227 (-4.135504) | 0.136367 / 6.500664 (-6.364297) | 0.067080 / 0.075469 (-0.008389) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269700 / 1.841788 (-0.572088) | 14.099775 / 8.074308 (6.025467) | 14.422936 / 10.191392 (4.231544) | 0.132344 / 0.680424 (-0.548080) | 0.016744 / 0.534201 (-0.517457) | 0.378286 / 0.579283 (-0.200997) | 0.392282 / 0.434364 (-0.042082) | 0.437648 / 0.540337 (-0.102689) | 0.528554 / 1.386936 (-0.858382) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006086 / 0.011353 (-0.005267) | 0.003769 / 0.011008 (-0.007239) | 0.077414 / 0.038508 (0.038906) | 0.027806 / 0.023109 (0.004697) | 0.360333 / 0.275898 (0.084434) | 0.404725 / 0.323480 (0.081245) | 0.003443 / 0.007986 (-0.004543) | 0.004434 / 0.004328 (0.000106) | 0.077309 / 0.004250 (0.073059) | 0.040441 / 0.037052 (0.003388) | 0.358627 / 0.258489 (0.100138) | 0.415246 / 0.293841 (0.121405) | 0.027718 / 0.128546 (-0.100829) | 0.008495 / 0.075646 (-0.067151) | 0.082874 / 0.419271 (-0.336397) | 0.042323 / 0.043533 (-0.001210) | 0.354895 / 0.255139 (0.099756) | 0.390032 / 0.283200 (0.106832) | 0.092377 / 0.141683 (-0.049306) | 1.492817 / 1.452155 (0.040662) | 1.551859 / 1.492716 (0.059143) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198921 / 0.018006 (0.180915) | 0.417699 / 0.000490 (0.417209) | 0.001349 / 0.000200 (0.001149) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026349 / 0.037411 (-0.011062) | 0.105712 / 0.014526 (0.091186) | 0.111792 / 0.176557 (-0.064765) | 0.163677 / 0.737135 (-0.573459) | 0.116864 / 0.296338 (-0.179474) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.447532 / 0.215209 (0.232323) | 4.468770 / 2.077655 (2.391116) | 2.403820 / 1.504120 (0.899700) | 2.273640 / 1.541195 (0.732445) | 2.337505 / 1.468490 (0.869015) | 0.560729 / 4.584777 (-4.024048) | 3.389165 / 3.745712 (-0.356547) | 2.697614 / 5.269862 (-2.572247) | 1.351909 / 4.565676 (-3.213768) | 0.068089 / 0.424275 (-0.356186) | 0.011639 / 0.007607 (0.004032) | 0.555277 / 0.226044 (0.329233) | 5.559291 / 2.268929 (3.290363) | 2.657609 / 55.444624 (-52.787015) | 2.346667 / 6.876477 (-4.529809) | 2.615823 / 2.142072 (0.473751) | 0.668662 / 4.805227 (-4.136566) | 0.136593 / 6.500664 (-6.364071) | 0.068384 / 0.075469 (-0.007085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.312089 / 1.841788 (-0.529699) | 14.477510 / 8.074308 (6.403202) | 14.231432 / 10.191392 (4.040040) | 0.132015 / 0.680424 (-0.548409) | 0.016908 / 0.534201 (-0.517293) | 0.368315 / 0.579283 (-0.210968) | 0.397964 / 0.434364 (-0.036400) | 0.432446 / 0.540337 (-0.107891) | 0.526349 / 1.386936 (-0.860587) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#78b4d55c3cfc60e309eb033d3ed0aba5e796b6ce \"CML watermark\")\n"
] | 2023-06-09T08:18:36 | 2023-06-14T12:30:59 | 2023-06-14T12:23:57 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5937",
"html_url": "https://github.com/huggingface/datasets/pull/5937",
"diff_url": "https://github.com/huggingface/datasets/pull/5937.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5937.patch",
"merged_at": "2023-06-14T12:23:57"
} | Avoid parallel redownload in cache by retrying inside the lock if path exists. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5937/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5936/comments | https://api.github.com/repos/huggingface/datasets/issues/5936/events | https://github.com/huggingface/datasets/issues/5936 | 1,748,424,388 | I_kwDODunzps5oNtbE | 5,936 | Sequence of array not supported for most dtype | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Related, `float16` is the only dtype not supported by `Array2D` (probably by every `ArrayND`):\r\n\r\n```python\r\nfrom datasets import Array2D, Features, Dataset\r\n\r\nimport numpy as np\r\n\r\nfor dtype in [\r\n \"bool\", # ok\r\n \"int8\", # ok\r\n \"int16\", # ok\r\n \"int32\", # ok\r\n \"int64\", # ok\r\n \"uint8\", # ok\r\n \"uint16\", # ok\r\n \"uint32\", # ok\r\n \"uint64\", # ok\r\n \"float16\", # failed\r\n \"float32\", # ok\r\n \"float64\", # ok\r\n]:\r\n features = Features({\"foo\": Array2D(dtype=dtype, shape=(3, 4))})\r\n array = np.zeros((3, 4), dtype=dtype)\r\n try:\r\n dataset = Dataset.from_dict({\"foo\": [array]}, features=features)\r\n except Exception as e:\r\n print(f\"Failed for dtype={dtype}\")\r\n```",
"Here's something I can't explain:\r\n\r\nWhen an array is encoded in the `from_dict` method, the numpy array is converted to a list (thus losing the original dtype, which is transfromed to the nearest builtin Python type)\r\n\r\nhttps://github.com/huggingface/datasets/blob/6ee61e6e695b1df9f232d47faf3a5e2b30b33737/src/datasets/features/features.py#L524-L525\r\n\r\nHowever, later on, this same data is written to memory, and it seems authorized that the data is an array (or in this case, a list of arrays). \r\n\r\nhttps://github.com/huggingface/datasets/blob/6ee61e6e695b1df9f232d47faf3a5e2b30b33737/src/datasets/arrow_writer.py#L185-L186\r\n\r\nSo the question is: why convert it to a Python list? This seems to be quite expensive both in terms of write time (all data is copied) and memory (e.g., an int8 is converted to an int64).\r\n\r\nFinally, if I try to remove this step, it solves all the previous problems, and it seems to me that it doesn't break anything (the CI passes without problem).",
"Arrow only support 1d numpy arrays, so we convert multidim arrays to lists of 1s arrays (and keep the dtype).\r\n\r\nThough you noticed that it's concerting to lists and lose the dtype. If it's the case then it's a bug.",
"Ok the conversion to list shouldn't be there indeed ! Could you open a PR to remove it ?"
] | 2023-06-08T18:18:07 | 2023-06-14T15:03:34 | 2023-06-14T15:03:34 | CONTRIBUTOR | null | null | null | ### Describe the bug
Creating a dataset composed of a sequence of arrays fails for most dtypes (see the code below).
### Steps to reproduce the bug
```python
from datasets import Sequence, Array2D, Features, Dataset
import numpy as np
for dtype in [
"bool", # ok
"int8", # failed
"int16", # failed
"int32", # failed
"int64", # ok
"uint8", # failed
"uint16", # failed
"uint32", # failed
"uint64", # failed
"float16", # failed
"float32", # failed
"float64", # ok
]:
features = Features({"foo": Sequence(Array2D(dtype=dtype, shape=(2, 2)))})
sequence = [
[[1.0, 2.0], [3.0, 4.0]],
[[5.0, 6.0], [7.0, 8.0]],
]
array = np.array(sequence, dtype=dtype)
try:
dataset = Dataset.from_dict({"foo": [array]}, features=features)
except Exception as e:
print(f"Failed for dtype={dtype}")
```
Traceback for `dtype="int8"`:
```
Traceback (most recent call last):
File "/home/qgallouedec/datasets/a.py", line 29, in <module>
raise e
File "/home/qgallouedec/datasets/a.py", line 26, in <module>
dataset = Dataset.from_dict({"foo": [array]}, features=features)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 899, in from_dict
pa_table = InMemoryTable.from_pydict(mapping=mapping)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 799, in from_pydict
return cls(pa.Table.from_pydict(*args, **kwargs))
File "pyarrow/table.pxi", line 3725, in pyarrow.lib.Table.from_pydict
File "pyarrow/table.pxi", line 5254, in pyarrow.lib._from_pydict
File "pyarrow/array.pxi", line 350, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 236, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 204, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2091, in cast_array_to_feature
casted_values = _c(array.values, feature.feature)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 2139, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/qgallouedec/env/lib/python3.10/site-packages/datasets/table.py", line 1967, in array_cast
return pa_type.wrap_array(array)
File "pyarrow/types.pxi", line 879, in pyarrow.lib.BaseExtensionType.wrap_array
TypeError: Incompatible storage type for extension<arrow.py_extension_type<Array2DExtensionType>>: expected list<item: list<item: int8>>, got list<item: list<item: int64>>
```
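The last line of the traceback shows the inner values arriving as `int64` where `int8` was expected. A minimal illustration of where the dtype can get lost, independent of `datasets` (plain `numpy`/`pyarrow`, assuming both are installed):
```python
import numpy as np
import pyarrow as pa

arr = np.zeros(4, dtype="int8")
print(pa.array(arr).type)           # int8: the numpy dtype is preserved
print(pa.array(arr.tolist()).type)  # int64: re-inferred from Python ints
```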
### Expected behavior
Not to fail.
### Environment info
- Python 3.10.6
- datasets: master branch
- Numpy: 1.23.4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5936/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5935/comments | https://api.github.com/repos/huggingface/datasets/issues/5935/events | https://github.com/huggingface/datasets/pull/5935 | 1,748,090,220 | PR_kwDODunzps5Sh9Mg | 5,935 | Better row group size in push_to_hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007489 / 0.011353 (-0.003864) | 0.004914 / 0.011008 (-0.006095) | 0.111626 / 0.038508 (0.073117) | 0.037920 / 0.023109 (0.014811) | 0.350571 / 0.275898 (0.074673) | 0.389667 / 0.323480 (0.066187) | 0.006309 / 0.007986 (-0.001676) | 0.005488 / 0.004328 (0.001160) | 0.083962 / 0.004250 (0.079712) | 0.050728 / 0.037052 (0.013675) | 0.360997 / 0.258489 (0.102508) | 0.392736 / 0.293841 (0.098895) | 0.031975 / 0.128546 (-0.096571) | 0.009941 / 0.075646 (-0.065705) | 0.379840 / 0.419271 (-0.039432) | 0.056522 / 0.043533 (0.012989) | 0.359379 / 0.255139 (0.104240) | 0.384487 / 0.283200 (0.101287) | 0.117523 / 0.141683 (-0.024160) | 1.683639 / 1.452155 (0.231485) | 1.791645 / 1.492716 (0.298929) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236862 / 0.018006 (0.218856) | 0.481208 / 0.000490 (0.480719) | 0.007455 / 0.000200 (0.007255) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030854 / 0.037411 (-0.006557) | 0.126892 / 0.014526 (0.112367) | 0.139207 / 0.176557 (-0.037350) | 0.206447 / 0.737135 (-0.530689) | 0.143095 / 0.296338 (-0.153244) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474677 / 0.215209 (0.259468) | 4.699534 / 2.077655 (2.621879) | 2.152102 / 1.504120 (0.647983) | 1.934815 / 1.541195 (0.393620) | 1.986448 / 1.468490 
(0.517958) | 0.607184 / 4.584777 (-3.977593) | 4.480385 / 3.745712 (0.734673) | 2.074729 / 5.269862 (-3.195132) | 1.182383 / 4.565676 (-3.383294) | 0.075624 / 0.424275 (-0.348651) | 0.014046 / 0.007607 (0.006439) | 0.598859 / 0.226044 (0.372814) | 5.959551 / 2.268929 (3.690622) | 2.700851 / 55.444624 (-52.743773) | 2.303775 / 6.876477 (-4.572702) | 2.456441 / 2.142072 (0.314369) | 0.747185 / 4.805227 (-4.058042) | 0.165787 / 6.500664 (-6.334878) | 0.075817 / 0.075469 (0.000348) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.411859 / 1.841788 (-0.429928) | 17.375495 / 8.074308 (9.301187) | 15.187098 / 10.191392 (4.995706) | 0.169953 / 0.680424 (-0.510471) | 0.020204 / 0.534201 (-0.513997) | 0.461424 / 0.579283 (-0.117859) | 0.494443 / 0.434364 (0.060080) | 0.544583 / 0.540337 (0.004246) | 0.648231 / 1.386936 (-0.738705) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007785 / 0.011353 (-0.003568) | 0.005314 / 0.011008 (-0.005694) | 0.087273 / 0.038508 (0.048765) | 0.037810 / 0.023109 (0.014701) | 0.425473 / 0.275898 (0.149575) | 0.459976 / 0.323480 (0.136497) | 0.007270 / 0.007986 (-0.000716) | 0.004631 / 0.004328 (0.000303) | 0.087063 / 0.004250 (0.082812) | 0.052630 / 0.037052 (0.015578) | 0.432384 / 0.258489 (0.173895) | 0.500291 / 0.293841 (0.206450) | 0.033144 / 0.128546 (-0.095402) | 0.010101 / 0.075646 (-0.065545) | 0.096068 / 0.419271 (-0.323204) | 0.062750 / 0.043533 (0.019217) | 0.419308 / 0.255139 (0.164169) | 0.437099 / 0.283200 (0.153900) | 0.122289 / 0.141683 (-0.019394) | 1.737829 / 1.452155 (0.285674) | 1.851481 / 1.492716 (0.358765) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014277 / 0.018006 (-0.003729) | 0.489835 / 0.000490 (0.489345) | 0.008423 / 0.000200 (0.008223) | 0.000188 / 0.000054 (0.000134) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032966 / 0.037411 (-0.004445) | 0.130069 / 0.014526 (0.115544) | 0.144372 / 0.176557 (-0.032185) | 0.200400 / 0.737135 (-0.536735) | 0.149384 / 0.296338 (-0.146954) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.511542 / 0.215209 (0.296333) | 5.093879 / 2.077655 (3.016225) | 2.572088 / 1.504120 (1.067968) | 2.339118 / 1.541195 (0.797923) | 2.441637 / 1.468490 (0.973147) | 0.614818 / 4.584777 (-3.969959) | 4.724441 / 3.745712 (0.978729) | 5.431978 / 5.269862 (0.162116) | 2.257794 / 4.565676 (-2.307883) | 0.078109 / 0.424275 (-0.346166) | 0.013821 / 0.007607 (0.006214) | 0.639232 / 0.226044 (0.413188) | 6.424623 / 2.268929 (4.155694) | 3.163018 / 55.444624 (-52.281606) | 2.756786 / 6.876477 (-4.119690) | 2.808655 / 2.142072 (0.666583) | 0.745843 / 4.805227 (-4.059385) | 0.165562 / 6.500664 (-6.335102) | 0.076610 / 0.075469 (0.001141) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.738630 / 1.841788 (-0.103158) | 18.073573 / 8.074308 (9.999265) | 16.482820 / 10.191392 (6.291428) | 0.213233 / 0.680424 (-0.467191) | 0.022839 / 0.534201 (-0.511362) | 0.487043 / 0.579283 (-0.092240) | 0.512518 / 0.434364 (0.078154) | 0.549365 / 0.540337 (0.009028) | 0.656612 / 1.386936 (-0.730324) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#288e92b03bd4ec91c10c8a529b32631cfaba9fb7 \"CML watermark\")\n",
"Good idea!\r\n\r\nI was wondering: if we want to optimize the balance between the size of downloading a row group, and the number of rows in the group, would it make sense to compute the row group size by checking the average size of the rows?\r\n\r\neg. 32x32 images could have a larger row group size than full HD images, no? Relying on the size would even remove the need to check the column types.\r\n\r\n(in this proposal, we could use the computed row group size, eg 837, or use the nearest row group size in a list of values: 10, 100, 1000, 10000)",
"Probably, but I would go for a simpler solution first :p",
"Sure! I wanted to understand if the idea made sense or not, but it's not for this PR.",
"I think it will be more useful for people who use the viewer and won't impact sequential io that much.",
"DuckDB [paragraph](https://duckdb.org/docs/data/parquet/tips.html#selecting-a-row_group_size) that explains how to choose the `row_group_size`. Our default shard size is 500 MB in `push_to_hub`, so, ideally, we should aim for 64 MB row groups (and make this part configurable for power users 🙂).\r\n\r\nSo, before merging this PR, let's add a TODO or open an issue as a reminder that this can be improved.",
"I moved the config values, improved the features check and mentioned the improvements we could do in the docstring :)",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006211 / 0.011353 (-0.005141) | 0.004244 / 0.011008 (-0.006764) | 0.097941 / 0.038508 (0.059433) | 0.028564 / 0.023109 (0.005455) | 0.299651 / 0.275898 (0.023753) | 0.340694 / 0.323480 (0.017214) | 0.005161 / 0.007986 (-0.002824) | 0.004764 / 0.004328 (0.000435) | 0.075505 / 0.004250 (0.071255) | 0.039656 / 0.037052 (0.002603) | 0.309242 / 0.258489 (0.050753) | 0.350783 / 0.293841 (0.056942) | 0.025145 / 0.128546 (-0.103401) | 0.008498 / 0.075646 (-0.067148) | 0.317657 / 0.419271 (-0.101615) | 0.043926 / 0.043533 (0.000394) | 0.305915 / 0.255139 (0.050776) | 0.331630 / 0.283200 (0.048430) | 0.088564 / 0.141683 (-0.053119) | 1.533175 / 1.452155 (0.081021) | 1.581017 / 1.492716 (0.088301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206032 / 0.018006 (0.188025) | 0.433446 / 0.000490 (0.432956) | 0.003955 / 0.000200 (0.003755) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023468 / 0.037411 (-0.013943) | 0.103292 / 0.014526 (0.088766) | 0.107234 / 0.176557 (-0.069322) | 0.168525 / 0.737135 (-0.568610) | 0.113218 / 0.296338 (-0.183120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431085 / 0.215209 (0.215875) | 4.302082 / 2.077655 (2.224427) | 2.068290 / 1.504120 (0.564171) | 1.850718 / 1.541195 (0.309523) | 1.964261 / 1.468490 
(0.495771) | 0.547562 / 4.584777 (-4.037215) | 3.410739 / 3.745712 (-0.334974) | 1.779640 / 5.269862 (-3.490221) | 1.005466 / 4.565676 (-3.560210) | 0.066250 / 0.424275 (-0.358025) | 0.011877 / 0.007607 (0.004270) | 0.525185 / 0.226044 (0.299141) | 5.234786 / 2.268929 (2.965857) | 2.398045 / 55.444624 (-53.046580) | 2.073020 / 6.876477 (-4.803457) | 2.210753 / 2.142072 (0.068680) | 0.654897 / 4.805227 (-4.150331) | 0.134639 / 6.500664 (-6.366025) | 0.067050 / 0.075469 (-0.008419) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180210 / 1.841788 (-0.661577) | 13.613091 / 8.074308 (5.538783) | 13.441837 / 10.191392 (3.250445) | 0.146048 / 0.680424 (-0.534376) | 0.016505 / 0.534201 (-0.517696) | 0.363210 / 0.579283 (-0.216073) | 0.405484 / 0.434364 (-0.028880) | 0.428712 / 0.540337 (-0.111625) | 0.522300 / 1.386936 (-0.864636) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006147 / 0.011353 (-0.005206) | 0.004161 / 0.011008 (-0.006847) | 0.075861 / 0.038508 (0.037353) | 0.027948 / 0.023109 (0.004839) | 0.362466 / 0.275898 (0.086568) | 0.398227 / 0.323480 (0.074747) | 0.005014 / 0.007986 (-0.002972) | 0.004772 / 0.004328 (0.000444) | 0.075674 / 0.004250 (0.071423) | 0.039158 / 0.037052 (0.002106) | 0.363567 / 0.258489 (0.105078) | 0.410378 / 0.293841 (0.116537) | 0.025510 / 0.128546 (-0.103036) | 0.008528 / 0.075646 (-0.067118) | 0.081803 / 0.419271 (-0.337468) | 0.040954 / 0.043533 (-0.002579) | 0.358492 / 0.255139 (0.103353) | 0.381345 / 0.283200 (0.098145) | 0.092347 / 0.141683 (-0.049336) | 1.567695 / 1.452155 (0.115540) | 1.668412 / 1.492716 (0.175696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203367 / 0.018006 (0.185360) | 0.424642 / 0.000490 (0.424152) | 0.002451 / 0.000200 (0.002251) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026129 / 0.037411 (-0.011282) | 0.102564 / 0.014526 (0.088039) | 0.110583 / 0.176557 (-0.065973) | 0.164332 / 0.737135 (-0.572804) | 0.115706 / 0.296338 (-0.180632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468925 / 0.215209 (0.253716) | 4.657266 / 2.077655 (2.579612) | 2.423280 / 1.504120 (0.919160) | 2.236284 / 1.541195 (0.695089) | 2.323019 / 1.468490 (0.854529) | 0.548120 / 4.584777 (-4.036657) | 3.455602 / 3.745712 (-0.290110) | 1.730421 / 5.269862 (-3.539441) | 1.006089 / 4.565676 (-3.559588) | 0.067478 / 0.424275 (-0.356797) | 0.011465 / 0.007607 (0.003857) | 0.574235 / 0.226044 (0.348190) | 5.744404 / 2.268929 (3.475475) | 2.882225 / 55.444624 (-52.562400) | 2.618246 / 6.876477 (-4.258231) | 2.642920 / 2.142072 (0.500847) | 0.661441 / 4.805227 (-4.143787) | 0.137358 / 6.500664 (-6.363306) | 0.070372 / 0.075469 (-0.005097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333815 / 1.841788 (-0.507973) | 14.689667 / 8.074308 (6.615359) | 14.362294 / 10.191392 (4.170902) | 0.152011 / 0.680424 (-0.528413) | 0.016869 / 0.534201 (-0.517332) | 0.370433 / 0.579283 (-0.208851) | 0.399642 / 0.434364 (-0.034722) | 0.433759 / 0.540337 (-0.106578) | 0.525443 / 1.386936 (-0.861493) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09e9f9a88edd9055b5c540e3d83b5a11d48f8ba8 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006564 / 0.011353 (-0.004789) | 0.004350 / 0.011008 (-0.006658) | 0.096277 / 0.038508 (0.057769) | 0.032956 / 0.023109 (0.009847) | 0.303675 / 0.275898 (0.027777) | 0.336384 / 0.323480 (0.012904) | 0.005789 / 0.007986 (-0.002197) | 0.003957 / 0.004328 (-0.000371) | 0.073990 / 0.004250 (0.069740) | 0.050974 / 0.037052 (0.013922) | 0.321754 / 0.258489 (0.063265) | 0.349489 / 0.293841 (0.055648) | 0.031138 / 0.128546 (-0.097409) | 0.009000 / 0.075646 (-0.066646) | 0.325445 / 0.419271 (-0.093826) | 0.070173 / 0.043533 (0.026640) | 0.304706 / 0.255139 (0.049567) | 0.321803 / 0.283200 (0.038603) | 0.109405 / 0.141683 (-0.032278) | 1.489812 / 1.452155 (0.037657) | 1.577729 / 1.492716 (0.085013) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287187 / 0.018006 (0.269181) | 0.527625 / 0.000490 (0.527135) | 0.006533 / 0.000200 (0.006333) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026659 / 0.037411 (-0.010752) | 0.106236 / 0.014526 (0.091710) | 0.118615 / 0.176557 (-0.057941) | 0.173156 / 0.737135 (-0.563979) | 0.122883 / 0.296338 (-0.173456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407189 / 0.215209 (0.191980) | 4.055732 / 2.077655 (1.978078) | 1.865594 / 1.504120 (0.361474) | 1.664325 / 1.541195 (0.123130) | 1.668961 / 1.468490 
(0.200471) | 0.521207 / 4.584777 (-4.063570) | 3.740424 / 3.745712 (-0.005288) | 3.431973 / 5.269862 (-1.837889) | 1.636669 / 4.565676 (-2.929008) | 0.065271 / 0.424275 (-0.359005) | 0.012151 / 0.007607 (0.004544) | 0.514233 / 0.226044 (0.288189) | 5.110150 / 2.268929 (2.841222) | 2.264340 / 55.444624 (-53.180284) | 1.940428 / 6.876477 (-4.936049) | 2.042286 / 2.142072 (-0.099787) | 0.639200 / 4.805227 (-4.166028) | 0.139537 / 6.500664 (-6.361127) | 0.063195 / 0.075469 (-0.012274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.179501 / 1.841788 (-0.662286) | 14.600133 / 8.074308 (6.525825) | 14.902137 / 10.191392 (4.710745) | 0.144509 / 0.680424 (-0.535915) | 0.017449 / 0.534201 (-0.516752) | 0.393135 / 0.579283 (-0.186148) | 0.413103 / 0.434364 (-0.021261) | 0.459897 / 0.540337 (-0.080440) | 0.552602 / 1.386936 (-0.834334) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006891 / 0.011353 (-0.004462) | 0.004633 / 0.011008 (-0.006375) | 0.073093 / 0.038508 (0.034585) | 0.032509 / 0.023109 (0.009399) | 0.348332 / 0.275898 (0.072434) | 0.381920 / 0.323480 (0.058440) | 0.005978 / 0.007986 (-0.002007) | 0.005360 / 0.004328 (0.001032) | 0.074307 / 0.004250 (0.070056) | 0.049668 / 0.037052 (0.012615) | 0.354713 / 0.258489 (0.096224) | 0.398521 / 0.293841 (0.104681) | 0.032013 / 0.128546 (-0.096534) | 0.008890 / 0.075646 (-0.066756) | 0.080013 / 0.419271 (-0.339259) | 0.051820 / 0.043533 (0.008288) | 0.349730 / 0.255139 (0.094591) | 0.369267 / 0.283200 (0.086067) | 0.103874 / 0.141683 (-0.037809) | 1.484148 / 1.452155 (0.031993) | 1.573927 / 1.492716 (0.081211) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009699 / 0.018006 (-0.008307) | 0.511176 / 0.000490 (0.510686) | 0.002938 / 0.000200 (0.002738) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027847 / 0.037411 (-0.009564) | 0.111565 / 0.014526 (0.097039) | 0.120625 / 0.176557 (-0.055932) | 0.172130 / 0.737135 (-0.565006) | 0.125949 / 0.296338 (-0.170389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430634 / 0.215209 (0.215424) | 4.315377 / 2.077655 (2.237722) | 2.070764 / 1.504120 (0.566644) | 1.881962 / 1.541195 (0.340767) | 1.904053 / 1.468490 (0.435563) | 0.524973 / 4.584777 (-4.059804) | 3.718359 / 3.745712 (-0.027353) | 3.415344 / 5.269862 (-1.854518) | 1.224568 / 4.565676 (-3.341108) | 0.065593 / 0.424275 (-0.358682) | 0.011643 / 0.007607 (0.004036) | 0.537050 / 0.226044 (0.311006) | 5.352155 / 2.268929 (3.083226) | 2.557361 / 55.444624 (-52.887263) | 2.217770 / 6.876477 (-4.658707) | 2.194975 / 2.142072 (0.052902) | 0.635142 / 4.805227 (-4.170085) | 0.140642 / 6.500664 (-6.360022) | 0.064690 / 0.075469 (-0.010779) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266125 / 1.841788 (-0.575663) | 14.836413 / 8.074308 (6.762105) | 14.446870 / 10.191392 (4.255478) | 0.191545 / 0.680424 (-0.488878) | 0.017433 / 0.534201 (-0.516768) | 0.392296 / 0.579283 (-0.186987) | 0.420698 / 0.434364 (-0.013666) | 0.463225 / 0.540337 (-0.077112) | 0.556127 / 1.386936 (-0.830809) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7fcbe5b1575c8d162b65b9397b3dfda995a4e048 \"CML watermark\")\n"
] | 2023-06-08T15:01:15 | 2023-06-09T17:47:37 | 2023-06-09T17:40:09 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5935",
"html_url": "https://github.com/huggingface/datasets/pull/5935",
"diff_url": "https://github.com/huggingface/datasets/pull/5935.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5935.patch",
"merged_at": "2023-06-09T17:40:09"
} | This is a very simple change that makes `to_parquet` use a more reasonable row group size for image and audio datasets.
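For intuition, the row group size is the knob `pyarrow` exposes when writing Parquet; a standalone sketch (illustrative values, not the code from this PR):
```
import pyarrow as pa
import pyarrow.parquet as pq

# a toy table standing in for a decoded image/audio dataset
table = pa.table({"idx": list(range(1000)), "payload": [b"x" * 1024] * 1000})

# smaller row groups let readers fetch a page of rows without
# downloading one huge chunk of the file
pq.write_table(table, "out.parquet", row_group_size=100)
```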
This is especially useful for `push_to_hub` and will provide a better experience with the dataset viewer on HF. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5935/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5934/comments | https://api.github.com/repos/huggingface/datasets/issues/5934/events | https://github.com/huggingface/datasets/pull/5934 | 1,747,904,840 | PR_kwDODunzps5ShUxQ | 5,934 | Modify levels of some logging messages | {
"login": "Laurent2916",
"id": 21087104,
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laurent2916",
"html_url": "https://github.com/Laurent2916",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I've addressed this as part of #6019, so feel free to close this PR. ",
"Thanks !"
] | 2023-06-08T13:31:44 | 2023-07-12T18:21:03 | 2023-07-12T18:21:02 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5934",
"html_url": "https://github.com/huggingface/datasets/pull/5934",
"diff_url": "https://github.com/huggingface/datasets/pull/5934.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5934.patch",
"merged_at": null
} | Some warning messages didn't quite sound like warnings, so I changed their logging levels to `info`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5934/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5933/comments | https://api.github.com/repos/huggingface/datasets/issues/5933/events | https://github.com/huggingface/datasets/pull/5933 | 1,747,382,500 | PR_kwDODunzps5Sfi5J | 5,933 | Fix `to_numpy` when None values in the sequence | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just added the same test with dynamic shape",
"_The documentation is not available anymore as the PR was closed or merged._",
"Awesome ! I'm merging now if you don't mind :)\r\nWe should probably give you permissions to merge your own PRs when you have an approval",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009980 / 0.011353 (-0.001373) | 0.005709 / 0.011008 (-0.005300) | 0.132185 / 0.038508 (0.093677) | 0.039299 / 0.023109 (0.016190) | 0.400168 / 0.275898 (0.124270) | 0.470582 / 0.323480 (0.147102) | 0.007753 / 0.007986 (-0.000233) | 0.005196 / 0.004328 (0.000868) | 0.093698 / 0.004250 (0.089448) | 0.052631 / 0.037052 (0.015579) | 0.430347 / 0.258489 (0.171858) | 0.460162 / 0.293841 (0.166321) | 0.057511 / 0.128546 (-0.071035) | 0.013944 / 0.075646 (-0.061702) | 0.459008 / 0.419271 (0.039737) | 0.075532 / 0.043533 (0.031999) | 0.405165 / 0.255139 (0.150026) | 0.456142 / 0.283200 (0.172942) | 0.117309 / 0.141683 (-0.024374) | 1.945787 / 1.452155 (0.493633) | 2.067162 / 1.492716 (0.574446) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285755 / 0.018006 (0.267749) | 0.619965 / 0.000490 (0.619476) | 0.005071 / 0.000200 (0.004871) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031112 / 0.037411 (-0.006299) | 0.128514 / 0.014526 (0.113988) | 0.137161 / 0.176557 (-0.039396) | 0.211363 / 0.737135 (-0.525772) | 0.151045 / 0.296338 (-0.145293) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.609361 / 0.215209 (0.394152) | 6.124844 / 2.077655 (4.047189) | 2.440757 / 1.504120 (0.936637) | 2.034495 / 1.541195 (0.493300) | 2.047192 / 1.468490 
(0.578702) | 0.883171 / 4.584777 (-3.701606) | 5.470552 / 3.745712 (1.724840) | 4.401696 / 5.269862 (-0.868165) | 2.378674 / 4.565676 (-2.187003) | 0.108065 / 0.424275 (-0.316210) | 0.013239 / 0.007607 (0.005632) | 0.830957 / 0.226044 (0.604913) | 8.090659 / 2.268929 (5.821731) | 3.289203 / 55.444624 (-52.155422) | 2.500777 / 6.876477 (-4.375700) | 2.561440 / 2.142072 (0.419367) | 1.064893 / 4.805227 (-3.740334) | 0.220486 / 6.500664 (-6.280178) | 0.079507 / 0.075469 (0.004038) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.544334 / 1.841788 (-0.297454) | 17.878997 / 8.074308 (9.804689) | 18.952191 / 10.191392 (8.760799) | 0.245166 / 0.680424 (-0.435258) | 0.028022 / 0.534201 (-0.506179) | 0.517828 / 0.579283 (-0.061455) | 0.618988 / 0.434364 (0.184624) | 0.589742 / 0.540337 (0.049405) | 0.670902 / 1.386936 (-0.716034) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009616 / 0.011353 (-0.001737) | 0.006098 / 0.011008 (-0.004911) | 0.100301 / 0.038508 (0.061793) | 0.037792 / 0.023109 (0.014683) | 0.484667 / 0.275898 (0.208769) | 0.519286 / 0.323480 (0.195806) | 0.007427 / 0.007986 (-0.000558) | 0.007172 / 0.004328 (0.002844) | 0.104429 / 0.004250 (0.100179) | 0.056567 / 0.037052 (0.019515) | 0.502641 / 0.258489 (0.244152) | 0.549629 / 0.293841 (0.255788) | 0.049574 / 0.128546 (-0.078972) | 0.015223 / 0.075646 (-0.060424) | 0.113947 / 0.419271 (-0.305324) | 0.064585 / 0.043533 (0.021053) | 0.512962 / 0.255139 (0.257823) | 0.507218 / 0.283200 (0.224019) | 0.122194 / 0.141683 (-0.019488) | 1.927821 / 1.452155 (0.475667) | 2.051161 / 1.492716 (0.558445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291350 / 0.018006 (0.273344) | 0.588099 / 0.000490 (0.587610) | 0.001368 / 0.000200 (0.001168) | 0.000153 / 0.000054 (0.000099) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030604 / 0.037411 (-0.006807) | 0.126810 / 0.014526 (0.112285) | 0.139309 / 0.176557 (-0.037248) | 0.208030 / 0.737135 (-0.529105) | 0.138985 / 0.296338 (-0.157353) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.681254 / 0.215209 (0.466045) | 6.753856 / 2.077655 (4.676201) | 2.780704 / 1.504120 (1.276585) | 2.475205 / 1.541195 (0.934010) | 2.486784 / 1.468490 (1.018294) | 0.879223 / 4.584777 (-3.705554) | 5.662294 / 3.745712 (1.916582) | 2.698705 / 5.269862 (-2.571156) | 1.660620 / 4.565676 (-2.905057) | 0.112218 / 0.424275 (-0.312057) | 0.014211 / 0.007607 (0.006604) | 0.796957 / 0.226044 (0.570913) | 8.180897 / 2.268929 (5.911969) | 3.540419 / 55.444624 (-51.904205) | 2.899467 / 6.876477 (-3.977010) | 2.870306 / 2.142072 (0.728233) | 1.069537 / 4.805227 (-3.735690) | 0.211281 / 6.500664 (-6.289383) | 0.078898 / 0.075469 (0.003429) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.666790 / 1.841788 (-0.174998) | 18.302127 / 8.074308 (10.227819) | 21.317546 / 10.191392 (11.126153) | 0.242795 / 0.680424 (-0.437629) | 0.026754 / 0.534201 (-0.507447) | 0.493375 / 0.579283 (-0.085908) | 0.605400 / 0.434364 (0.171036) | 0.586888 / 0.540337 (0.046550) | 0.722809 / 1.386936 (-0.664127) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ce2328e7b1d62998b22510492530af55d4493b73 \"CML watermark\")\n"
] | 2023-06-08T08:38:56 | 2023-06-09T13:49:41 | 2023-06-09T13:23:48 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5933",
"html_url": "https://github.com/huggingface/datasets/pull/5933",
"diff_url": "https://github.com/huggingface/datasets/pull/5933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5933.patch",
"merged_at": "2023-06-09T13:23:48"
} | Closes #5927
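For context, a minimal sketch of the kind of input that used to fail (my reconstruction, not code from this PR):
```
from datasets import Dataset

# a sequence column whose rows contain more than one None value
ds = Dataset.from_dict({"a": [[1, None, None], [2, None, None]]}).with_format("numpy")

# previously this raised; with the fix it should come back as float
# arrays containing np.nan
print(ds["a"])
```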
I've realized that the error was overlooked during testing because the test sequence contained only a single None value, which happened to be the only case the function handled correctly.
When a sequence contained more than one None value, the function failed. Consequently, I've updated the tests to cover sequences with multiple None values. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5933/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5932/comments | https://api.github.com/repos/huggingface/datasets/issues/5932/events | https://github.com/huggingface/datasets/pull/5932 | 1,746,249,161 | PR_kwDODunzps5Sbrzo | 5,932 | [doc build] Use secrets | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008499 / 0.011353 (-0.002854) | 0.006155 / 0.011008 (-0.004853) | 0.124032 / 0.038508 (0.085524) | 0.037337 / 0.023109 (0.014228) | 0.389274 / 0.275898 (0.113376) | 0.427736 / 0.323480 (0.104257) | 0.006929 / 0.007986 (-0.001057) | 0.005017 / 0.004328 (0.000689) | 0.096356 / 0.004250 (0.092105) | 0.055694 / 0.037052 (0.018642) | 0.391417 / 0.258489 (0.132928) | 0.448098 / 0.293841 (0.154257) | 0.042442 / 0.128546 (-0.086105) | 0.013456 / 0.075646 (-0.062190) | 0.423502 / 0.419271 (0.004230) | 0.062919 / 0.043533 (0.019386) | 0.384317 / 0.255139 (0.129178) | 0.410851 / 0.283200 (0.127652) | 0.112807 / 0.141683 (-0.028875) | 1.746050 / 1.452155 (0.293895) | 1.977974 / 1.492716 (0.485257) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306382 / 0.018006 (0.288375) | 0.620310 / 0.000490 (0.619820) | 0.009309 / 0.000200 (0.009109) | 0.000106 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026900 / 0.037411 (-0.010511) | 0.140125 / 0.014526 (0.125599) | 0.136295 / 0.176557 (-0.040261) | 0.207721 / 0.737135 (-0.529414) | 0.146328 / 0.296338 (-0.150011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616712 / 0.215209 (0.401503) | 6.237820 / 2.077655 (4.160166) | 2.503809 / 1.504120 (0.999689) | 2.129739 / 1.541195 (0.588544) | 2.160768 / 1.468490 
(0.692277) | 0.971273 / 4.584777 (-3.613504) | 5.687161 / 3.745712 (1.941449) | 2.738148 / 5.269862 (-2.531713) | 1.692695 / 4.565676 (-2.872981) | 0.113701 / 0.424275 (-0.310574) | 0.014809 / 0.007607 (0.007202) | 0.774795 / 0.226044 (0.548750) | 7.660012 / 2.268929 (5.391083) | 3.253036 / 55.444624 (-52.191588) | 2.607498 / 6.876477 (-4.268979) | 2.681678 / 2.142072 (0.539606) | 1.095275 / 4.805227 (-3.709952) | 0.239078 / 6.500664 (-6.261586) | 0.081034 / 0.075469 (0.005565) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574547 / 1.841788 (-0.267240) | 18.323566 / 8.074308 (10.249258) | 19.274482 / 10.191392 (9.083090) | 0.210275 / 0.680424 (-0.470149) | 0.031843 / 0.534201 (-0.502358) | 0.514843 / 0.579283 (-0.064440) | 0.633782 / 0.434364 (0.199418) | 0.588569 / 0.540337 (0.048232) | 0.721401 / 1.386936 (-0.665535) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008866 / 0.011353 (-0.002487) | 0.006460 / 0.011008 (-0.004548) | 0.121337 / 0.038508 (0.082829) | 0.033896 / 0.023109 (0.010786) | 0.455702 / 0.275898 (0.179804) | 0.509685 / 0.323480 (0.186205) | 0.007650 / 0.007986 (-0.000336) | 0.005578 / 0.004328 (0.001250) | 0.098505 / 0.004250 (0.094255) | 0.056122 / 0.037052 (0.019069) | 0.478483 / 0.258489 (0.219994) | 0.560008 / 0.293841 (0.266167) | 0.044926 / 0.128546 (-0.083620) | 0.014562 / 0.075646 (-0.061085) | 0.115027 / 0.419271 (-0.304244) | 0.066494 / 0.043533 (0.022961) | 0.463434 / 0.255139 (0.208296) | 0.513856 / 0.283200 (0.230656) | 0.126436 / 0.141683 (-0.015247) | 1.874729 / 1.452155 (0.422575) | 1.925080 / 1.492716 (0.432364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012672 / 0.018006 (-0.005334) | 0.615797 / 0.000490 (0.615307) | 0.001606 / 0.000200 (0.001406) | 0.000118 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031104 / 0.037411 (-0.006307) | 0.130107 / 0.014526 (0.115581) | 0.140587 / 0.176557 (-0.035970) | 0.205081 / 0.737135 (-0.532054) | 0.144068 / 0.296338 (-0.152270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.646549 / 0.215209 (0.431340) | 6.403962 / 2.077655 (4.326307) | 2.812594 / 1.504120 (1.308474) | 2.478480 / 1.541195 (0.937285) | 2.552385 / 1.468490 (1.083895) | 0.991987 / 4.584777 (-3.592790) | 5.777917 / 3.745712 (2.032205) | 5.697830 / 5.269862 (0.427969) | 2.370583 / 4.565676 (-2.195094) | 0.109905 / 0.424275 (-0.314370) | 0.013801 / 0.007607 (0.006193) | 0.799932 / 0.226044 (0.573888) | 8.155672 / 2.268929 (5.886743) | 3.711662 / 55.444624 (-51.732963) | 3.042164 / 6.876477 (-3.834312) | 3.073549 / 2.142072 (0.931477) | 1.137515 / 4.805227 (-3.667712) | 0.231266 / 6.500664 (-6.269398) | 0.080893 / 0.075469 (0.005424) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.669210 / 1.841788 (-0.172577) | 18.747144 / 8.074308 (10.672836) | 21.084589 / 10.191392 (10.893197) | 0.241379 / 0.680424 (-0.439045) | 0.029473 / 0.534201 (-0.504728) | 0.524605 / 0.579283 (-0.054678) | 0.622852 / 0.434364 (0.188488) | 0.604941 / 0.540337 (0.064604) | 0.715978 / 1.386936 (-0.670958) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#142484a60b1330359d7713e906fc9e5e30aa9f64 \"CML watermark\")\n",
"Cool ! what about `.github/workflows/build_pr_documentation.yml` and `.github/workflows/delete_doc_comment.yml` ?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005973 / 0.011353 (-0.005380) | 0.004389 / 0.011008 (-0.006620) | 0.096076 / 0.038508 (0.057568) | 0.031569 / 0.023109 (0.008460) | 0.328300 / 0.275898 (0.052402) | 0.359356 / 0.323480 (0.035876) | 0.005378 / 0.007986 (-0.002607) | 0.003703 / 0.004328 (-0.000625) | 0.075251 / 0.004250 (0.071000) | 0.042340 / 0.037052 (0.005287) | 0.346103 / 0.258489 (0.087614) | 0.379896 / 0.293841 (0.086055) | 0.027493 / 0.128546 (-0.101053) | 0.009033 / 0.075646 (-0.066613) | 0.327829 / 0.419271 (-0.091442) | 0.064074 / 0.043533 (0.020541) | 0.337703 / 0.255139 (0.082564) | 0.355335 / 0.283200 (0.072136) | 0.101179 / 0.141683 (-0.040504) | 1.471738 / 1.452155 (0.019584) | 1.539031 / 1.492716 (0.046315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194097 / 0.018006 (0.176091) | 0.434190 / 0.000490 (0.433701) | 0.005730 / 0.000200 (0.005530) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025634 / 0.037411 (-0.011778) | 0.105080 / 0.014526 (0.090555) | 0.116508 / 0.176557 (-0.060049) | 0.173867 / 0.737135 (-0.563269) | 0.117749 / 0.296338 (-0.178590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401566 / 0.215209 (0.186357) | 4.003558 / 2.077655 (1.925903) | 1.802756 / 1.504120 (0.298636) | 1.604222 / 1.541195 (0.063027) | 1.656617 / 1.468490 
(0.188127) | 0.523385 / 4.584777 (-4.061392) | 3.744292 / 3.745712 (-0.001420) | 1.794295 / 5.269862 (-3.475567) | 1.044690 / 4.565676 (-3.520987) | 0.064992 / 0.424275 (-0.359284) | 0.011542 / 0.007607 (0.003935) | 0.507830 / 0.226044 (0.281785) | 5.061574 / 2.268929 (2.792645) | 2.252896 / 55.444624 (-53.191729) | 1.912551 / 6.876477 (-4.963926) | 2.073510 / 2.142072 (-0.068562) | 0.642148 / 4.805227 (-4.163079) | 0.140151 / 6.500664 (-6.360513) | 0.062623 / 0.075469 (-0.012846) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180367 / 1.841788 (-0.661421) | 14.263475 / 8.074308 (6.189167) | 12.917251 / 10.191392 (2.725859) | 0.143815 / 0.680424 (-0.536608) | 0.017286 / 0.534201 (-0.516915) | 0.388411 / 0.579283 (-0.190872) | 0.430512 / 0.434364 (-0.003851) | 0.466595 / 0.540337 (-0.073742) | 0.564545 / 1.386936 (-0.822391) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006059 / 0.011353 (-0.005294) | 0.004419 / 0.011008 (-0.006590) | 0.074206 / 0.038508 (0.035697) | 0.031180 / 0.023109 (0.008071) | 0.380031 / 0.275898 (0.104133) | 0.410373 / 0.323480 (0.086893) | 0.005397 / 0.007986 (-0.002589) | 0.003952 / 0.004328 (-0.000376) | 0.074426 / 0.004250 (0.070176) | 0.046256 / 0.037052 (0.009203) | 0.385543 / 0.258489 (0.127054) | 0.430724 / 0.293841 (0.136883) | 0.028052 / 0.128546 (-0.100494) | 0.008810 / 0.075646 (-0.066836) | 0.080749 / 0.419271 (-0.338522) | 0.046746 / 0.043533 (0.003214) | 0.380325 / 0.255139 (0.125186) | 0.398901 / 0.283200 (0.115701) | 0.099607 / 0.141683 (-0.042076) | 1.433343 / 1.452155 (-0.018812) | 1.520447 / 1.492716 (0.027730) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202232 / 0.018006 (0.184225) | 0.431342 / 0.000490 (0.430852) | 0.001020 / 0.000200 (0.000820) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028762 / 0.037411 (-0.008649) | 0.111777 / 0.014526 (0.097251) | 0.119283 / 0.176557 (-0.057273) | 0.168151 / 0.737135 (-0.568985) | 0.126093 / 0.296338 (-0.170245) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442689 / 0.215209 (0.227480) | 4.369202 / 2.077655 (2.291547) | 2.167703 / 1.504120 (0.663583) | 1.960580 / 1.541195 (0.419385) | 2.001459 / 1.468490 (0.532969) | 0.527169 / 4.584777 (-4.057608) | 3.738987 / 3.745712 (-0.006726) | 1.819002 / 5.269862 (-3.450860) | 1.082786 / 4.565676 (-3.482891) | 0.066209 / 0.424275 (-0.358066) | 0.011549 / 0.007607 (0.003942) | 0.545959 / 0.226044 (0.319915) | 5.466655 / 2.268929 (3.197727) | 2.671448 / 55.444624 (-52.773176) | 2.340968 / 6.876477 (-4.535509) | 2.358805 / 2.142072 (0.216733) | 0.649456 / 4.805227 (-4.155771) | 0.142009 / 6.500664 (-6.358655) | 0.064199 / 0.075469 (-0.011270) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259819 / 1.841788 (-0.581969) | 14.456988 / 8.074308 (6.382680) | 14.478982 / 10.191392 (4.287590) | 0.163156 / 0.680424 (-0.517268) | 0.017090 / 0.534201 (-0.517111) | 0.391339 / 0.579283 (-0.187944) | 0.422021 / 0.434364 (-0.012343) | 0.465340 / 0.540337 (-0.074997) | 0.564517 / 1.386936 (-0.822419) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#97358c88f996a65f49923ec215358044e4146a95 \"CML watermark\")\n",
"> .github/workflows/delete_doc_comment.yml \r\n\r\nis already updated https://github.com/huggingface/datasets/pull/5932/files\r\n\r\n> .github/workflows/build_pr_documentation.yml\r\n\r\nindeed no changes are needed"
] | 2023-06-07T16:09:39 | 2023-06-09T10:16:58 | 2023-06-09T09:53:16 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5932",
"html_url": "https://github.com/huggingface/datasets/pull/5932",
"diff_url": "https://github.com/huggingface/datasets/pull/5932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5932.patch",
"merged_at": "2023-06-09T09:53:16"
} | Companion PR to https://github.com/huggingface/doc-builder/pull/379 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5932/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5932/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5931/comments | https://api.github.com/repos/huggingface/datasets/issues/5931/events | https://github.com/huggingface/datasets/issues/5931 | 1,745,408,784 | I_kwDODunzps5oCNMQ | 5,931 | `datasets.map` not reusing cached copy by default | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This can happen when a map transform cannot be hashed deterministically (e.g., an object referenced by the transform changes its state after the first call - an issue with fast tokenizers). The solution is to provide `cache_file_name` in the `map` call to check this file for the cached result instead of relying on the default caching mechanism."
] | 2023-06-07T09:03:33 | 2023-06-21T16:15:40 | 2023-06-21T16:15:40 | CONTRIBUTOR | null | null | null | ### Describe the bug
When I load the dataset from a local directory, its cached copy is picked up after the first time. However, the `map` operation is applied again on every run and its cached copy is not picked up. Is there any way to reuse the cached copy instead of processing the dataset again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. Are there any other solutions?
One more thing: my dataset occupies 6 GB of storage after I use `map`. Is there any way I can reduce that usage?
### Steps to reproduce the bug
```
# NOTE: these are methods of my training wrapper class (snippet, not self-contained)

# make sure that dataset decodes audio with correct sampling rate
dataset_sampling_rate = next(iter(self.raw_datasets.values())).features["audio"].sampling_rate
if dataset_sampling_rate != self.feature_extractor.sampling_rate:
    self.raw_datasets = self.raw_datasets.cast_column(
        "audio", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate)
    )

vectorized_datasets = self.raw_datasets.map(
    self.prepare_dataset,
    remove_columns=next(iter(self.raw_datasets.values())).column_names,
    num_proc=self.num_workers,
    desc="preprocess datasets",
)

# filter data that is longer than max_input_length
self.vectorized_datasets = vectorized_datasets.filter(
    self.is_audio_in_length_range,
    num_proc=self.num_workers,
    input_columns=["input_length"],
)

def prepare_dataset(self, batch):
    # load audio and extract features
    sample = batch["audio"]
    inputs = self.feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
    batch["input_values"] = inputs.input_values[0]
    batch["input_length"] = len(batch["input_values"])
    batch["labels"] = self.tokenizer(batch["target_text"]).input_ids
    return batch
```
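For reference, a minimal standalone sketch (hypothetical path) of pinning the cache file so `map` reuses it on later runs, even when the transform itself cannot be hashed deterministically:
```
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})

# pin the cache file (hypothetical path); later runs look this exact file up
# instead of relying on hashing the transform
ds2 = ds.map(lambda ex: {"y": ex["x"] + 1}, cache_file_name="/tmp/mapped_x.arrow")
```
With a `DatasetDict`, the equivalent argument is `cache_file_names`, a dict mapping each split name to a file path.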
### Expected behavior
`map` should reuse the cached copy and, if possible, there should be a technique to reduce the storage usage left behind after `map`.
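On the storage side, a sketch of the cleanup I would hope for (assuming `dataset` holds the result of the final transform; path hypothetical):
```
dataset.save_to_disk("/data/final")      # keep only the final, transformed copy
removed = dataset.cleanup_cache_files()  # delete the intermediate map caches
print(f"removed {removed} cache files")
```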
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5931/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5930/comments | https://api.github.com/repos/huggingface/datasets/issues/5930/events | https://github.com/huggingface/datasets/issues/5930 | 1,745,184,395 | I_kwDODunzps5oBWaL | 5,930 | loading private custom dataset script - authentication error | {
"login": "flckv",
"id": 103381497,
"node_id": "U_kgDOBil5-Q",
"avatar_url": "https://avatars.githubusercontent.com/u/103381497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flckv",
"html_url": "https://github.com/flckv",
"followers_url": "https://api.github.com/users/flckv/followers",
"following_url": "https://api.github.com/users/flckv/following{/other_user}",
"gists_url": "https://api.github.com/users/flckv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flckv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flckv/subscriptions",
"organizations_url": "https://api.github.com/users/flckv/orgs",
"repos_url": "https://api.github.com/users/flckv/repos",
"events_url": "https://api.github.com/users/flckv/events{/privacy}",
"received_events_url": "https://api.github.com/users/flckv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This issue seems to have been resolved, so I'm closing it."
] | 2023-06-07T06:58:23 | 2023-06-15T14:49:21 | 2023-06-15T14:49:20 | NONE | null | null | null | ### Describe the bug
Training a model with my custom dataset, which is stored on the Hugging Face Hub and loaded with a loading script, requires authentication, but I am not sure how to provide it.
I am logged in both in the terminal and in the browser, yet I receive this error:
/python3.8/site-packages/datasets/utils/file_utils.py", line 566, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels (ConnectionError('Unauthorized for URL https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels. Please use the parameter `use_auth_token=True` after logging in with `huggingface-cli login`'))
When I added `use_auth_token=True` and logged in via the terminal, I received the same error in a different format:
raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://huggingface.co/datasets/fkov/s/blob/main/data/s/train/labels (error 401)
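A quick way to verify that the token is actually visible to the libraries (assuming `huggingface_hub` is installed):
```
from huggingface_hub import whoami

# prints the account info if `huggingface-cli login` stored a valid token
print(whoami())
```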
### Steps to reproduce the bug
1. cloned the transformers library locally:
https://huggingface.co/docs/transformers/v4.15.0/examples:
> git clone https://github.com/huggingface/transformers
> cd transformers
> pip install .
> cd /transformers/examples/pytorch/audio-classification
> pip install -r requirements.txt
2. created a **loading script** and added it next to the dataset:
> https://huggingface.co/docs/datasets/dataset_script
3. uploaded **private custom dataset** with loading script to HuggingFace
> https://huggingface.co/docs/datasets/dataset_script
4. added the dataset loading script to the **local directory** of the cloned transformers library:
> cd /transformers/examples/pytorch/audio-classification
5. logged in to HuggingFace on local terminal with :
> **huggingface-cli login**
6. ran the model with the custom dataset stored on the Hugging Face Hub, using the code from https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md
cd /transformers/examples/pytorch/audio-classification
> python run_audio_classification.py \
> --model_name_or_path facebook/wav2vec2-base \
> --output_dir l/users/flck/outputs/wav2vec2-base-s \
> --overwrite_output_dir \
> --dataset_name s \
> --dataset_config_name s \
> --remove_unused_columns False \
> --do_train \
> --do_eval \
> --fp16 \
> --learning_rate 3e-5 \
> --max_length_seconds 1 \
> --attention_mask False \
> --warmup_ratio 0.1 \
> --num_train_epochs 5 \
> --per_device_train_batch_size 32 \
> --gradient_accumulation_steps 4 \
> --per_device_eval_batch_size 32 \
> --dataloader_num_workers 4 \
> --logging_strategy steps \
> --logging_steps 10 \
> --evaluation_strategy epoch \
> --save_strategy epoch \
> --load_best_model_at_end True \
> --metric_for_best_model accuracy \
> --save_total_limit 3 \
> --seed 0 \
> --push_to_hub \
> **--use_auth_token=True**
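For completeness, the plain-Python equivalent of the failing load (repo id taken from the error message above):
```
from datasets import load_dataset

# requires a prior `huggingface-cli login` or an explicit token
ds = load_dataset("fkov/s", "s", use_auth_token=True)
```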
### Expected behavior
Be able to train a model with https://github.com/huggingface/transformers/blob/main/examples/pytorch/audio-classification/run_audio_classification.py on a private custom dataset stored on the Hugging Face Hub.
### Environment info
- datasets version: 2.12.0
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.4.204-ql-generic-12.0-19-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1+cu117 (True)
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[conda] numpy 1.24.3 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5930/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5929/comments | https://api.github.com/repos/huggingface/datasets/issues/5929/events | https://github.com/huggingface/datasets/issues/5929 | 1,744,478,456 | I_kwDODunzps5n-qD4 | 5,929 | Importing PyTorch reduces multiprocessing performance for map | {
"login": "Maxscha",
"id": 12814709,
"node_id": "MDQ6VXNlcjEyODE0NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/12814709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Maxscha",
"html_url": "https://github.com/Maxscha",
"followers_url": "https://api.github.com/users/Maxscha/followers",
"following_url": "https://api.github.com/users/Maxscha/following{/other_user}",
"gists_url": "https://api.github.com/users/Maxscha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Maxscha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Maxscha/subscriptions",
"organizations_url": "https://api.github.com/users/Maxscha/orgs",
"repos_url": "https://api.github.com/users/Maxscha/repos",
"events_url": "https://api.github.com/users/Maxscha/events{/privacy}",
"received_events_url": "https://api.github.com/users/Maxscha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! The times match when I run this code locally or on Colab.\r\n\r\nAlso, we use `multiprocess`, not `multiprocessing`, for parallelization, and torch's `__init__.py` (executed on `import torch` ) slightly modifies the latter.",
"Hey Mariosasko,\r\n\r\nThanks for looking into it. We further did some investigations after your comment and figured out it's only affecting some hardware/software configurations with the `pytorch` installation of `conda-forge`. Based on this we found the following issue in PyTorch: https://github.com/pytorch/pytorch/issues/102269 with a quick fix for now.\r\n\r\nSince it seems to be a deeper issue with forking processes, the difference between`multiprocess` and `multiprocessing` didn't make a difference.\r\n\r\nClosing this, since the issue comes from `pytorch` not `dataset`. \r\n"
] | 2023-06-06T19:42:25 | 2023-06-16T13:09:12 | 2023-06-16T13:09:12 | NONE | null | null | null | ### Describe the bug
I noticed that the performance of my dataset preprocessing with `map(..., num_proc=32)` decreases when PyTorch is imported.
### Steps to reproduce the bug
I created two example scripts to reproduce this behavior:
```
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
PROC=32
if __name__ == "__main__":
dataset = [True] * 10000000
dataset = Dataset.from_dict({'train': dataset})
start = time.time()
dataset.map(lambda x: x, num_proc=PROC)
end = time.time()
print(end - start)
```
Takes around 4 seconds on my machine.
The same code, but with an `import torch` added:
```
import datasets
datasets.disable_caching()
from datasets import Dataset
import time
import torch
PROC=32
if __name__ == "__main__":
dataset = [True] * 10000000
dataset = Dataset.from_dict({'train': dataset})
start = time.time()
dataset.map(lambda x: x, num_proc=PROC)
end = time.time()
print(end - start)
```
takes around 22 seconds.
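One diagnostic worth trying (an assumption on my part, not a confirmed fix) is capping the thread pools that `import torch` may configure before the worker processes fork:
```
import os
os.environ["OMP_NUM_THREADS"] = "1"  # must be set before torch is imported

import torch
torch.set_num_threads(1)  # limit intra-op threads in the parent process
```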
### Expected behavior
I would expect the import of torch not to have such a significant effect on the performance of `map` with multiprocessing.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
- torch: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5929/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5928/comments | https://api.github.com/repos/huggingface/datasets/issues/5928/events | https://github.com/huggingface/datasets/pull/5928 | 1,744,098,371 | PR_kwDODunzps5SUXPC | 5,928 | Fix link to quickstart docs in README.md | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006693 / 0.011353 (-0.004660) | 0.004331 / 0.011008 (-0.006677) | 0.098022 / 0.038508 (0.059514) | 0.032764 / 0.023109 (0.009654) | 0.295812 / 0.275898 (0.019914) | 0.325029 / 0.323480 (0.001550) | 0.005779 / 0.007986 (-0.002206) | 0.005381 / 0.004328 (0.001052) | 0.075785 / 0.004250 (0.071535) | 0.048759 / 0.037052 (0.011707) | 0.308986 / 0.258489 (0.050497) | 0.348000 / 0.293841 (0.054159) | 0.027686 / 0.128546 (-0.100860) | 0.008839 / 0.075646 (-0.066807) | 0.328389 / 0.419271 (-0.090883) | 0.062173 / 0.043533 (0.018640) | 0.312257 / 0.255139 (0.057119) | 0.325024 / 0.283200 (0.041824) | 0.103886 / 0.141683 (-0.037797) | 1.440215 / 1.452155 (-0.011940) | 1.528665 / 1.492716 (0.035948) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210082 / 0.018006 (0.192076) | 0.442480 / 0.000490 (0.441990) | 0.006559 / 0.000200 (0.006359) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026774 / 0.037411 (-0.010637) | 0.108362 / 0.014526 (0.093837) | 0.117631 / 0.176557 (-0.058926) | 0.176657 / 0.737135 (-0.560478) | 0.124154 / 0.296338 (-0.172184) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428136 / 0.215209 (0.212927) | 4.270287 / 2.077655 (2.192632) | 2.014728 / 1.504120 (0.510608) | 1.806772 / 1.541195 (0.265577) | 1.946284 / 1.468490 
(0.477794) | 0.525542 / 4.584777 (-4.059235) | 3.667025 / 3.745712 (-0.078687) | 1.878751 / 5.269862 (-3.391111) | 1.048321 / 4.565676 (-3.517356) | 0.065550 / 0.424275 (-0.358725) | 0.011881 / 0.007607 (0.004274) | 0.529873 / 0.226044 (0.303829) | 5.289641 / 2.268929 (3.020712) | 2.489403 / 55.444624 (-52.955221) | 2.141037 / 6.876477 (-4.735440) | 2.230735 / 2.142072 (0.088662) | 0.639781 / 4.805227 (-4.165447) | 0.141410 / 6.500664 (-6.359254) | 0.064374 / 0.075469 (-0.011095) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.159462 / 1.841788 (-0.682325) | 14.524730 / 8.074308 (6.450422) | 13.578070 / 10.191392 (3.386678) | 0.152138 / 0.680424 (-0.528286) | 0.017255 / 0.534201 (-0.516946) | 0.387607 / 0.579283 (-0.191676) | 0.413652 / 0.434364 (-0.020712) | 0.453644 / 0.540337 (-0.086693) | 0.550051 / 1.386936 (-0.836885) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006668 / 0.011353 (-0.004685) | 0.004677 / 0.011008 (-0.006331) | 0.075950 / 0.038508 (0.037442) | 0.032439 / 0.023109 (0.009329) | 0.381839 / 0.275898 (0.105941) | 0.419411 / 0.323480 (0.095931) | 0.005813 / 0.007986 (-0.002172) | 0.004090 / 0.004328 (-0.000238) | 0.075052 / 0.004250 (0.070802) | 0.048453 / 0.037052 (0.011401) | 0.388076 / 0.258489 (0.129587) | 0.431793 / 0.293841 (0.137952) | 0.028408 / 0.128546 (-0.100138) | 0.009028 / 0.075646 (-0.066618) | 0.082569 / 0.419271 (-0.336702) | 0.046772 / 0.043533 (0.003239) | 0.380182 / 0.255139 (0.125043) | 0.401828 / 0.283200 (0.118629) | 0.105388 / 0.141683 (-0.036294) | 1.453356 / 1.452155 (0.001201) | 1.561483 / 1.492716 (0.068767) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.008922 / 0.018006 (-0.009084) | 0.444112 / 0.000490 (0.443623) | 0.002756 / 0.000200 (0.002556) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030408 / 0.037411 (-0.007003) | 0.112924 / 0.014526 (0.098399) | 0.124625 / 0.176557 (-0.051932) | 0.176915 / 0.737135 (-0.560220) | 0.129141 / 0.296338 (-0.167198) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448197 / 0.215209 (0.232987) | 4.476548 / 2.077655 (2.398893) | 2.243977 / 1.504120 (0.739857) | 2.054060 / 1.541195 (0.512865) | 2.130680 / 1.468490 (0.662190) | 0.526815 / 4.584777 (-4.057962) | 3.759312 / 3.745712 (0.013600) | 3.333618 / 5.269862 (-1.936244) | 1.579611 / 4.565676 (-2.986065) | 0.065714 / 0.424275 (-0.358561) | 0.011939 / 0.007607 (0.004332) | 0.550313 / 0.226044 (0.324269) | 5.476946 / 2.268929 (3.208018) | 2.726521 / 55.444624 (-52.718104) | 2.364977 / 6.876477 (-4.511499) | 2.450624 / 2.142072 (0.308551) | 0.647174 / 4.805227 (-4.158053) | 0.141265 / 6.500664 (-6.359399) | 0.065493 / 0.075469 (-0.009976) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249702 / 1.841788 (-0.592085) | 15.205647 / 8.074308 (7.131338) | 14.678310 / 10.191392 (4.486918) | 0.141539 / 0.680424 (-0.538884) | 0.017323 / 0.534201 (-0.516878) | 0.387602 / 0.579283 (-0.191681) | 0.415106 / 0.434364 (-0.019258) | 0.458146 / 0.540337 (-0.082192) | 0.553318 / 1.386936 (-0.833618) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#55127d7bf399fd2f3a8713db9822e8cb47cdbbed \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008567 / 0.011353 (-0.002786) | 0.005245 / 0.011008 (-0.005763) | 0.115074 / 0.038508 (0.076566) | 0.032567 / 0.023109 (0.009458) | 0.352297 / 0.275898 (0.076399) | 0.393403 / 0.323480 (0.069923) | 0.006402 / 0.007986 (-0.001583) | 0.004353 / 0.004328 (0.000025) | 0.087903 / 0.004250 (0.083653) | 0.048424 / 0.037052 (0.011372) | 0.370078 / 0.258489 (0.111588) | 0.410192 / 0.293841 (0.116351) | 0.042396 / 0.128546 (-0.086150) | 0.014426 / 0.075646 (-0.061220) | 0.411358 / 0.419271 (-0.007914) | 0.059546 / 0.043533 (0.016013) | 0.364721 / 0.255139 (0.109582) | 0.385100 / 0.283200 (0.101901) | 0.100572 / 0.141683 (-0.041111) | 1.741457 / 1.452155 (0.289302) | 1.933134 / 1.492716 (0.440418) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217177 / 0.018006 (0.199171) | 0.510399 / 0.000490 (0.509909) | 0.005542 / 0.000200 (0.005342) | 0.000120 / 0.000054 (0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026852 / 0.037411 (-0.010559) | 0.125580 / 0.014526 (0.111054) | 0.132164 / 0.176557 (-0.044392) | 0.189073 / 0.737135 (-0.548063) | 0.135980 / 0.296338 (-0.160358) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.601924 / 0.215209 (0.386715) | 5.891397 / 2.077655 (3.813743) | 2.389494 / 1.504120 (0.885375) | 2.044013 / 1.541195 (0.502818) | 2.019367 / 1.468490 
(0.550877) | 0.883807 / 4.584777 (-3.700970) | 5.141349 / 3.745712 (1.395636) | 2.607415 / 5.269862 (-2.662446) | 1.567268 / 4.565676 (-2.998409) | 0.102738 / 0.424275 (-0.321537) | 0.013480 / 0.007607 (0.005873) | 0.744979 / 0.226044 (0.518934) | 7.404182 / 2.268929 (5.135254) | 2.983406 / 55.444624 (-52.461219) | 2.331847 / 6.876477 (-4.544630) | 2.465119 / 2.142072 (0.323047) | 1.106725 / 4.805227 (-3.698502) | 0.205779 / 6.500664 (-6.294885) | 0.081019 / 0.075469 (0.005550) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.527840 / 1.841788 (-0.313947) | 16.989487 / 8.074308 (8.915179) | 18.016123 / 10.191392 (7.824731) | 0.216157 / 0.680424 (-0.464266) | 0.025393 / 0.534201 (-0.508808) | 0.496743 / 0.579283 (-0.082540) | 0.575365 / 0.434364 (0.141002) | 0.559978 / 0.540337 (0.019641) | 0.677474 / 1.386936 (-0.709462) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008913 / 0.011353 (-0.002440) | 0.005540 / 0.011008 (-0.005469) | 0.100001 / 0.038508 (0.061493) | 0.034432 / 0.023109 (0.011323) | 0.419824 / 0.275898 (0.143926) | 0.443566 / 0.323480 (0.120086) | 0.006372 / 0.007986 (-0.001614) | 0.004405 / 0.004328 (0.000077) | 0.094927 / 0.004250 (0.090677) | 0.050300 / 0.037052 (0.013248) | 0.424806 / 0.258489 (0.166317) | 0.480793 / 0.293841 (0.186952) | 0.050869 / 0.128546 (-0.077677) | 0.015899 / 0.075646 (-0.059747) | 0.111413 / 0.419271 (-0.307859) | 0.058093 / 0.043533 (0.014560) | 0.430575 / 0.255139 (0.175436) | 0.483786 / 0.283200 (0.200586) | 0.106878 / 0.141683 (-0.034805) | 1.763576 / 1.452155 (0.311422) | 1.837750 / 1.492716 (0.345033) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011565 / 0.018006 (-0.006441) | 0.484411 / 0.000490 (0.483922) | 0.004869 / 0.000200 (0.004669) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030706 / 0.037411 (-0.006706) | 0.126901 / 0.014526 (0.112375) | 0.130367 / 0.176557 (-0.046190) | 0.206568 / 0.737135 (-0.530567) | 0.146505 / 0.296338 (-0.149834) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627266 / 0.215209 (0.412057) | 6.314049 / 2.077655 (4.236394) | 2.582920 / 1.504120 (1.078800) | 2.249401 / 1.541195 (0.708206) | 2.244960 / 1.468490 (0.776470) | 0.907770 / 4.584777 (-3.677007) | 5.349622 / 3.745712 (1.603910) | 4.591244 / 5.269862 (-0.678618) | 2.301612 / 4.565676 (-2.264064) | 0.108813 / 0.424275 (-0.315462) | 0.013187 / 0.007607 (0.005580) | 0.806071 / 0.226044 (0.580027) | 7.843903 / 2.268929 (5.574974) | 3.405968 / 55.444624 (-52.038656) | 2.564301 / 6.876477 (-4.312176) | 2.652208 / 2.142072 (0.510135) | 1.168142 / 4.805227 (-3.637086) | 0.218551 / 6.500664 (-6.282113) | 0.078120 / 0.075469 (0.002651) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.562517 / 1.841788 (-0.279271) | 17.519325 / 8.074308 (9.445017) | 20.727083 / 10.191392 (10.535691) | 0.207135 / 0.680424 (-0.473288) | 0.028208 / 0.534201 (-0.505993) | 0.496157 / 0.579283 (-0.083126) | 0.569239 / 0.434364 (0.134875) | 0.566137 / 0.540337 (0.025799) | 0.704208 / 1.386936 (-0.682728) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8eb3f34d876da98e722d866be90d7f26135ea9e3 \"CML watermark\")\n"
] | 2023-06-06T15:23:01 | 2023-06-06T15:52:34 | 2023-06-06T15:43:53 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5928",
"html_url": "https://github.com/huggingface/datasets/pull/5928",
"diff_url": "https://github.com/huggingface/datasets/pull/5928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5928.patch",
"merged_at": "2023-06-06T15:43:53"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5928/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5927/comments | https://api.github.com/repos/huggingface/datasets/issues/5927/events | https://github.com/huggingface/datasets/issues/5927 | 1,744,009,032 | I_kwDODunzps5n83dI | 5,927 | `IndexError` when indexing `Sequence` of `Array2D` with `None` values | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Easy fix would be to add:\r\n\r\n```python\r\nnull_indices -= np.arange(len(null_indices))\r\n```\r\n\r\nbefore L279, but I'm not sure it's the most intuitive way to fix it.",
"Same issue here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7fcbe5b1575c8d162b65b9397b3dfda995a4e048/src/datasets/features/features.py#L1398\r\n\r\nFixed in #5948 "
] | 2023-06-06T14:36:22 | 2023-06-13T12:39:39 | 2023-06-09T13:23:50 | CONTRIBUTOR | null | null | null | ### Describe the bug
Indexing a `Sequence` of `ArrayND` that contains `None` values raises an `IndexError`.
### Steps to reproduce the bug
```python
from datasets import Array2D, Dataset, Features, Sequence
data = [
[
[[0]],
None,
None,
]
]
feature = Sequence(Array2D((1, 1), dtype="int64"))
dataset = Dataset.from_dict({"a": data}, features=Features({"a": feature}))
dataset[0] # error raised only when indexing
```
```
Traceback (most recent call last):
File "/Users/quentingallouedec/gia/c.py", line 13, in <module>
dataset[0] # error raised only when indexing
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2658, in __getitem__
return self._getitem(key)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2643, in _getitem
formatted_output = format_table(
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 634, in format_table
return formatter(pa_table, query_type=query_type)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 406, in __call__
return self.format_row(pa_table)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 441, in format_row
row = self.python_arrow_extractor().extract_row(pa_table)
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 144, in extract_row
return _unnest(pa_table.to_pydict())
File "pyarrow/table.pxi", line 4146, in pyarrow.lib.Table.to_pydict
File "pyarrow/table.pxi", line 1312, in pyarrow.lib.ChunkedArray.to_pylist
File "pyarrow/array.pxi", line 1521, in pyarrow.lib.Array.to_pylist
File "pyarrow/scalar.pxi", line 675, in pyarrow.lib.ListScalar.as_py
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 760, in to_pylist
return self.to_numpy(zero_copy_only=zero_copy_only).tolist()
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/datasets/features/features.py", line 725, in to_numpy
numpy_arr = np.insert(numpy_arr.astype(np.float64), null_indices, np.nan, axis=0)
File "<__array_function__ internals>", line 200, in insert
File "/Users/quentingallouedec/gia/env/lib/python3.10/site-packages/numpy/lib/function_base.py", line 5426, in insert
old_mask[indices] = False
IndexError: index 3 is out of bounds for axis 0 with size 3
```
AFAIK, the problem only occurs when you use a `Sequence` of `ArrayND`.
I strongly suspect that the problem comes from this line, or that `np.insert` is misused:
https://github.com/huggingface/datasets/blob/02ee418831aba68d0be93227bce8b3f42ef8980f/src/datasets/features/features.py#L729
To put it simply, you want the following to output the padded array, but it raises an error instead:
```python
import numpy as np
numpy_arr = np.zeros((1, 1, 1))
null_indices = np.array([1, 2])
np.insert(numpy_arr, null_indices, np.nan, axis=0)
# raises an IndexError, instead of outputting
# array([[[ 0.]],
# [[nan]],
# [[nan]]])
```
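A minimal sketch of the kind of adjustment that would make this work (an assumption about the intended semantics, not the library's actual fix): `np.insert` interprets indices relative to the array *before* insertion, so the null positions, which are expressed relative to the final array, need to be shifted back by the number of nulls preceding them:
```python
# Sketch: shift each null index back by the number of nulls before it,
# so the positions are valid for the compact array (nulls removed).
import numpy as np

numpy_arr = np.zeros((1, 1, 1))
null_indices = np.array([1, 2])
null_indices = null_indices - np.arange(len(null_indices))  # [1, 2] -> [1, 1]

out = np.insert(numpy_arr.astype(np.float64), null_indices, np.nan, axis=0)
# out == array([[[ 0.]], [[nan]], [[nan]]]) and no IndexError is raised
```
This mirrors the `null_indices -= np.arange(len(null_indices))` fix suggested in the comments above.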
### Expected behavior
The previous code should not raise an error.
### Environment info
- Python 3.10.11
- datasets 2.10.0
- pyarrow 12.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5927/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5926/comments | https://api.github.com/repos/huggingface/datasets/issues/5926/events | https://github.com/huggingface/datasets/issues/5926 | 1,743,922,028 | I_kwDODunzps5n8iNs | 5,926 | Uncaught exception when generating the splits from a dataset that miss data | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @severo.\r\n\r\nThis is a known issue with `fsspec`:\r\n- #5862\r\n- https://github.com/fsspec/filesystem_spec/issues/1265"
] | 2023-06-06T13:51:01 | 2023-06-07T07:53:16 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
Dataset https://huggingface.co/datasets/blog_authorship_corpus has an issue with its hosting platform, since https://drive.google.com/u/0/uc?id=1cGy4RNDV87ZHEXbiozABr9gsSrZpPaPz&export=download returns a 404 error.
But when trying to generate the split names, we get an exception which is not correctly caught.
Seen originally in https://github.com/huggingface/datasets-server/blob/adbdcd6710ffed4e2eb2e4cd905b5e0dff530a15/services/worker/src/worker/job_runners/config/parquet_and_info.py#L435
### Steps to reproduce the bug
```python
>>> from datasets import StreamingDownloadManager, load_dataset_builder
>>> builder = load_dataset_builder(path="blog_authorship_corpus")
Downloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.60k/5.60k [00:00<00:00, 23.1MB/s]
Downloading metadata: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.81k/2.81k [00:00<00:00, 14.7MB/s]
Downloading readme: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.30k/7.30k [00:00<00:00, 30.8MB/s]
>>> dl_manager = StreamingDownloadManager(base_path=builder.base_path)
>>> builder._split_generators(dl_manager)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/blog_authorship_corpus/6f5d78241afd8313111956f877a57db7a0e9fc6718255dc85df0928197feb683/blog_authorship_corpus.py", line 79, in _split_generators
data = dl_manager.download_and_extract(_DATA_URL)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1087, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1039, in extract
urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 435, in map_nested
return function(data_struct)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1044, in _extract
protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 433, in _get_extraction_protocol
with fsspec.open(urlpath, **kwargs) as f:
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 439, in open
return open_files(
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 194, in __getitem__
out = super().__getitem__(item)
IndexError: list index out of range
```
### Expected behavior
We should have an Exception raised by the datasets library.
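A minimal sketch of the kind of guard meant here (a hypothetical helper, not the actual `datasets` code): wrap the `fsspec.open` call so an unreachable URL surfaces as a clear, catchable error rather than an `IndexError` from `fsspec` internals.
```python
# Hypothetical sketch: translate fsspec's IndexError on a dead URL
# into an explicit, documented exception type.
import fsspec

def safe_open(urlpath, **kwargs):
    try:
        return fsspec.open(urlpath, **kwargs)
    except IndexError as err:
        raise FileNotFoundError(f"Could not open {urlpath!r}") from err
```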
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.19.0-1026-aws-x86_64-with-glibc2.35
- Python version: 3.9.15
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5926/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5925/comments | https://api.github.com/repos/huggingface/datasets/issues/5925/events | https://github.com/huggingface/datasets/issues/5925 | 1,741,941,436 | I_kwDODunzps5n0-q8 | 5,925 | Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets | {
"login": "mtkinit",
"id": 78868366,
"node_id": "MDQ6VXNlcjc4ODY4MzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/78868366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mtkinit",
"html_url": "https://github.com/mtkinit",
"followers_url": "https://api.github.com/users/mtkinit/followers",
"following_url": "https://api.github.com/users/mtkinit/following{/other_user}",
"gists_url": "https://api.github.com/users/mtkinit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mtkinit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtkinit/subscriptions",
"organizations_url": "https://api.github.com/users/mtkinit/orgs",
"repos_url": "https://api.github.com/users/mtkinit/repos",
"events_url": "https://api.github.com/users/mtkinit/events{/privacy}",
"received_events_url": "https://api.github.com/users/mtkinit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-06-05T14:46:04 | 2023-06-19T17:22:43 | 2023-06-19T17:22:43 | NONE | null | null | null | ### Describe the bug
Hi all,
after an update of the `datasets` library, we observed crashes in our code. We relied on `datasets.list_datasets` returning a `list`. After the API of `HfApi.list_datasets` was changed to return an `Iterable` instead of a `list`, `datasets.list_datasets` now sometimes returns a `list` and sometimes an `Iterable`.
It would be helpful if the return type annotation of the `datasets.list_datasets` function reflected this.
Thanks,
Martin
### Steps to reproduce the bug
Here, the code crashed after we updated the `datasets` library:
```python
# list_datasets no longer returns a list, which leads to an error when one tries to slice it
for dataset in datasets.list_datasets(with_details=True)[:limit]:
...
```
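A defensive pattern that sidesteps the ambiguity is to materialize the result before slicing. This is only a sketch, assuming the return type can be either a `list` or a lazy `Iterable` depending on the installed versions; `limit` here stands in for the variable from the original snippet:
```python
# Sketch: force the result into a list so slicing works regardless of
# whether list_datasets returns a list or an iterator.
import datasets

limit = 10  # hypothetical value, stands in for the original `limit`
for dataset in list(datasets.list_datasets(with_details=True))[:limit]:
    ...
```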
### Expected behavior
It would be helpful if the return type annotation of the `datasets.list_datasets` function reflected this.
### Environment info
Ubuntu 22.04
datasets 2.12.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5925/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5924/comments | https://api.github.com/repos/huggingface/datasets/issues/5924/events | https://github.com/huggingface/datasets/pull/5924 | 1,738,889,236 | PR_kwDODunzps5SCiFv | 5,924 | Add parallel module using joblib for Spark | {
"login": "es94129",
"id": 12763339,
"node_id": "MDQ6VXNlcjEyNzYzMzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/es94129",
"html_url": "https://github.com/es94129",
"followers_url": "https://api.github.com/users/es94129/followers",
"following_url": "https://api.github.com/users/es94129/following{/other_user}",
"gists_url": "https://api.github.com/users/es94129/gists{/gist_id}",
"starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/es94129/subscriptions",
"organizations_url": "https://api.github.com/users/es94129/orgs",
"repos_url": "https://api.github.com/users/es94129/repos",
"events_url": "https://api.github.com/users/es94129/events{/privacy}",
"received_events_url": "https://api.github.com/users/es94129/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq, I added the `parallel` part according to the discussion we had. Could you take a look to see if this is aligned with your proposal?\r\n\r\nMeanwhile I'm working on adding a `parallel_backend` parameter to `load_datasets` so that it can be used like:\r\n```python\r\nwith parallel_backend('spark', steps=['downloading']) as backend:\r\n ds = load_dataset(..., parallel_backend=backend)\r\n```\r\nwhere `parallel_backend` is a `ParallelBackend` class.",
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq Thanks for the comments!\r\nWith your suggestion, no changes made to `load_dataset` and I validated that downloading with spark is working now with this:\r\n```py\r\nwith parallel_backend('spark', steps=[\"download\"]):\r\n dataset = load_dataset(..., num_proc=2)\r\n```",
"@lhoestq Can a maintainer help trigger the tests again?\r\n> One idea is to decorate the download method to set the current global step to \"download\", and then only use joblib if the current step is one of the steps provided in parallel_backend.\r\n\r\nYes I think this is doable in a subsequent PR.\r\nFor throwing `NotImplementedError` I also think it can be done in a subsequent PR, because I'm not sure if `Dataset.map` is the only function that a user would expect to run using `with parallel_backend`.",
"Just triggered the tests :)\r\n\r\n> Yes I think this is doable in a subsequent PR.\r\nFor throwing NotImplementedError I also think it can be done in a subsequent PR, because I'm not sure if Dataset.map is the only function that a user would expect to run using with parallel_backend.\r\n\r\nI think any Dataset method that has a `num_proc` argument: Dataset.map (the other methods like filter or cast or based on map), and later we can see for the to_xxx methods (to_csv, to_parquet, etc.)",
"Hi maintainers, I've just addressed most of the comments, please take another look, thank you.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008422 / 0.011353 (-0.002931) | 0.005658 / 0.011008 (-0.005350) | 0.135372 / 0.038508 (0.096864) | 0.044766 / 0.023109 (0.021657) | 0.417876 / 0.275898 (0.141978) | 0.462785 / 0.323480 (0.139305) | 0.005485 / 0.007986 (-0.002501) | 0.005640 / 0.004328 (0.001311) | 0.105020 / 0.004250 (0.100770) | 0.049114 / 0.037052 (0.012062) | 0.490450 / 0.258489 (0.231961) | 0.467693 / 0.293841 (0.173852) | 0.050929 / 0.128546 (-0.077617) | 0.014644 / 0.075646 (-0.061002) | 0.452373 / 0.419271 (0.033101) | 0.074897 / 0.043533 (0.031364) | 0.425816 / 0.255139 (0.170677) | 0.420415 / 0.283200 (0.137215) | 0.134121 / 0.141683 (-0.007561) | 1.927744 / 1.452155 (0.475589) | 2.014417 / 1.492716 (0.521701) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254811 / 0.018006 (0.236805) | 0.550011 / 0.000490 (0.549521) | 0.004913 / 0.000200 (0.004714) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032644 / 0.037411 (-0.004768) | 0.135672 / 0.014526 (0.121146) | 0.158984 / 0.176557 (-0.017572) | 0.218267 / 0.737135 (-0.518869) | 0.150348 / 0.296338 (-0.145991) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.625723 / 0.215209 (0.410514) | 6.247559 / 2.077655 (4.169905) | 2.626785 / 1.504120 (1.122666) | 2.195224 / 1.541195 (0.654030) | 2.232140 / 1.468490 
(0.763650) | 0.943082 / 4.584777 (-3.641695) | 5.799262 / 3.745712 (2.053550) | 2.849411 / 5.269862 (-2.420450) | 1.744160 / 4.565676 (-2.821516) | 0.119056 / 0.424275 (-0.305219) | 0.014233 / 0.007607 (0.006626) | 0.795238 / 0.226044 (0.569194) | 7.569586 / 2.268929 (5.300657) | 3.179481 / 55.444624 (-52.265143) | 2.519772 / 6.876477 (-4.356704) | 2.714570 / 2.142072 (0.572498) | 1.107197 / 4.805227 (-3.698030) | 0.229986 / 6.500664 (-6.270678) | 0.087993 / 0.075469 (0.012524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.535610 / 1.841788 (-0.306178) | 18.639369 / 8.074308 (10.565061) | 21.081844 / 10.191392 (10.890452) | 0.253247 / 0.680424 (-0.427177) | 0.026711 / 0.534201 (-0.507490) | 0.503790 / 0.579283 (-0.075493) | 0.600124 / 0.434364 (0.165760) | 0.617944 / 0.540337 (0.077607) | 0.766947 / 1.386936 (-0.619989) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007885 / 0.011353 (-0.003468) | 0.004761 / 0.011008 (-0.006248) | 0.097995 / 0.038508 (0.059487) | 0.033624 / 0.023109 (0.010515) | 0.504307 / 0.275898 (0.228409) | 0.534803 / 0.323480 (0.211323) | 0.006048 / 0.007986 (-0.001937) | 0.005042 / 0.004328 (0.000714) | 0.102288 / 0.004250 (0.098038) | 0.048695 / 0.037052 (0.011643) | 0.559086 / 0.258489 (0.300597) | 0.553233 / 0.293841 (0.259392) | 0.044596 / 0.128546 (-0.083950) | 0.013696 / 0.075646 (-0.061950) | 0.109875 / 0.419271 (-0.309397) | 0.059993 / 0.043533 (0.016460) | 0.485579 / 0.255139 (0.230440) | 0.519835 / 0.283200 (0.236635) | 0.123504 / 0.141683 (-0.018179) | 1.820506 / 1.452155 (0.368351) | 1.963448 / 1.492716 (0.470732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292663 / 0.018006 (0.274656) | 0.557783 / 0.000490 (0.557293) | 0.001330 / 0.000200 (0.001130) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036890 / 0.037411 (-0.000522) | 0.140373 / 0.014526 (0.125847) | 0.140176 / 0.176557 (-0.036381) | 0.237378 / 0.737135 (-0.499757) | 0.160186 / 0.296338 (-0.136152) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.673599 / 0.215209 (0.458390) | 6.510280 / 2.077655 (4.432625) | 2.981617 / 1.504120 (1.477497) | 2.684664 / 1.541195 (1.143469) | 2.760471 / 1.468490 (1.291981) | 0.975413 / 4.584777 (-3.609364) | 5.708933 / 3.745712 (1.963220) | 2.772069 / 5.269862 (-2.497793) | 1.763627 / 4.565676 (-2.802049) | 0.111632 / 0.424275 (-0.312643) | 0.013223 / 0.007607 (0.005616) | 0.791545 / 0.226044 (0.565500) | 8.063287 / 2.268929 (5.794359) | 3.671920 / 55.444624 (-51.772704) | 3.057248 / 6.876477 (-3.819229) | 3.083569 / 2.142072 (0.941497) | 1.118136 / 4.805227 (-3.687092) | 0.214655 / 6.500664 (-6.286009) | 0.083074 / 0.075469 (0.007605) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.761731 / 1.841788 (-0.080056) | 18.874200 / 8.074308 (10.799892) | 22.383693 / 10.191392 (12.192301) | 0.240292 / 0.680424 (-0.440132) | 0.028850 / 0.534201 (-0.505351) | 0.557334 / 0.579283 (-0.021949) | 0.627732 / 0.434364 (0.193369) | 0.634484 / 0.540337 (0.094146) | 0.767372 / 1.386936 (-0.619564) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#accaaf2e69fbb5dc5e50229d2eb1591b8ad982b6 \"CML watermark\")\n"
] | 2023-06-02T22:25:25 | 2023-06-14T10:25:10 | 2023-06-14T10:15:46 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5924",
"html_url": "https://github.com/huggingface/datasets/pull/5924",
"diff_url": "https://github.com/huggingface/datasets/pull/5924.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5924.patch",
"merged_at": "2023-06-14T10:15:46"
} | Discussion in https://github.com/huggingface/datasets/issues/5798 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5924/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5923/comments | https://api.github.com/repos/huggingface/datasets/issues/5923/events | https://github.com/huggingface/datasets/issues/5923 | 1,737,436,227 | I_kwDODunzps5njyxD | 5,923 | Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility | {
"login": "ehuangc",
"id": 71412682,
"node_id": "MDQ6VXNlcjcxNDEyNjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/71412682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehuangc",
"html_url": "https://github.com/ehuangc",
"followers_url": "https://api.github.com/users/ehuangc/followers",
"following_url": "https://api.github.com/users/ehuangc/following{/other_user}",
"gists_url": "https://api.github.com/users/ehuangc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ehuangc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehuangc/subscriptions",
"organizations_url": "https://api.github.com/users/ehuangc/orgs",
"repos_url": "https://api.github.com/users/ehuangc/repos",
"events_url": "https://api.github.com/users/ehuangc/events{/privacy}",
"received_events_url": "https://api.github.com/users/ehuangc/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Based on https://github.com/rapidsai/cudf/issues/10187, this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n\r\nCan you please execute the following commands in the terminal and paste the output here?\r\n```\r\nconda list | grep arrow\r\n``` \r\n```\r\npython -c \"import pyarrow; print(pyarrow.__file__)\"\r\n```\r\n\r\n\r\n",
"> Based on [rapidsai/cudf#10187](https://github.com/rapidsai/cudf/issues/10187), this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n> \r\n> Can you please execute the following commands in the terminal and paste the output here?\r\n> \r\n> ```\r\n> conda list | grep arrow\r\n> ```\r\n> \r\n> ```\r\n> python -c \"import pyarrow; print(pyarrow.__file__)\"\r\n> ```\r\n\r\n\r\nHere is the output to the first command:\r\n```\r\narrow-cpp 11.0.0 py39h7f74497_0 \r\npyarrow 12.0.0 pypi_0 pypi\r\n```\r\nand the second:\r\n```\r\n/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/__init__.py\r\n```\r\nThanks!\r\n\r\n\r\n\r\n",
"after installing pytesseract 0.3.10, I got the above error. FYI ",
"RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):\r\npyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject",
"I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n\r\nDo we need to update dependencies? ",
"Please note that our CI properly passes all tests with `pyarrow-12.0.0`, for Python 3.7 and Python 3.10, for Ubuntu and Windows: see for example https://github.com/huggingface/datasets/actions/runs/5157324334/jobs/9289582291",
"For conda with python3.8.16 this solved my problem! thanks!\r\n\r\n> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies? I can work on that if no one else is working on it.\r\n\r\n",
"Thanks for replying. I am not sure about those environments but it seems like pyarrow-12.0.0 does not work for conda with python 3.8.16. \r\n\r\n> Please note that our CI properly passes all tests with `pyarrow-12.0.0`, for Python 3.7 and Python 3.10, for Ubuntu and Windows: see for example https://github.com/huggingface/datasets/actions/runs/5157324334/jobs/9289582291\r\n\r\n",
"Got the same error with:\r\n\r\n```\r\narrow-cpp 11.0.0 py310h7516544_0 \r\npyarrow 12.0.0 pypi_0 pypi\r\n\r\npython 3.10.11 h7a1cb2a_2 \r\n\r\ndatasets 2.13.0 pyhd8ed1ab_0 conda-forge\r\n```",
"> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nThis solved the issue for me as well.",
"> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nSolved it for me also",
"> 基于 [rapidsai/cudf#10187](https://github.com/rapidsai/cudf/issues/10187),这可能意味着您的安装与 不兼容。`pyarrow``datasets`\r\n> \r\n> 您能否在终端中执行以下命令并将输出粘贴到此处?\r\n> \r\n> ```\r\n> conda list | grep arrow\r\n> ```\r\n> \r\n> ```\r\n> python -c \"import pyarrow; print(pyarrow.__file__)\"\r\n> ```\r\n\r\narrow-cpp 11.0.0 py310h7516544_0 \r\npyarrow 12.0.1 pypi_0 pypi\r\n\r\n/root/miniconda3/lib/python3.10/site-packages/pyarrow/__init__.py",
"Got the same problem with\r\n\r\narrow-cpp 11.0.0 py310h1fc3239_0 \r\npyarrow 12.0.1 pypi_0 pypi\r\n\r\nminiforge3/envs/mlp/lib/python3.10/site-packages/pyarrow/__init__.py\r\n\r\nReverting back to pyarrow 11 solved the problem.\r\n",
"Solved with `pip install pyarrow==11.0.0`"
] | 2023-06-02T04:16:32 | 2023-08-07T08:59:34 | null | NONE | null | null | null | ### Describe the bug
When trying to import datasets, I get a pyarrow ValueError:
```
Traceback (most recent call last):
  File "/Users/edward/test/test.py", line 1, in <module>
    import datasets
  File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
    from .arrow_dataset import Dataset
  File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 65, in <module>
    from .arrow_reader import ArrowReader
  File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_reader.py", line 28, in <module>
    import pyarrow.parquet as pq
  File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/__init__.py", line 20, in <module>
    from .core import *
  File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 45, in <module>
    from pyarrow.fs import (LocalFileSystem, FileSystem, FileType,
  File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/fs.py", line 49, in <module>
    from pyarrow._gcsfs import GcsFileSystem # noqa
  File "pyarrow/_gcsfs.pyx", line 1, in init pyarrow._gcsfs
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
```
### Steps to reproduce the bug
`import datasets`
### Expected behavior
Successful import
### Environment info
Conda environment, MacOS
python 3.9.12
datasets 2.12.0
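
A minimal diagnostic sketch (not part of the original report) for the mixed conda/pip `pyarrow` install described in this thread; the pinned version below is the one reported to work in the comments:

```python
# Check which pyarrow is actually imported and where it comes from.
import pyarrow

print(pyarrow.__version__)  # e.g. "12.0.0" installed via pip
print(pyarrow.__file__)     # reveals which site-packages the module loads from

# If `conda list` shows arrow-cpp 11.x while this prints a pip pyarrow 12.x,
# the two C ABIs differ and `import datasets` fails with the IpcWriteOptions
# error. Pinning pyarrow to match the conda arrow-cpp build resolves it:
#     pip install "pyarrow==11.0.0"
```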
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5923/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5923/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5922/comments | https://api.github.com/repos/huggingface/datasets/issues/5922/events | https://github.com/huggingface/datasets/issues/5922 | 1,736,898,953 | I_kwDODunzps5nhvmJ | 5,922 | Length of table does not accurately reflect the split | {
"login": "amogkam",
"id": 8068268,
"node_id": "MDQ6VXNlcjgwNjgyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8068268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amogkam",
"html_url": "https://github.com/amogkam",
"followers_url": "https://api.github.com/users/amogkam/followers",
"following_url": "https://api.github.com/users/amogkam/following{/other_user}",
"gists_url": "https://api.github.com/users/amogkam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amogkam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amogkam/subscriptions",
"organizations_url": "https://api.github.com/users/amogkam/orgs",
"repos_url": "https://api.github.com/users/amogkam/repos",
"events_url": "https://api.github.com/users/amogkam/events{/privacy}",
"received_events_url": "https://api.github.com/users/amogkam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | null | [] | null | [
"As already replied by @lhoestq (private channel):\r\n> `.train_test_split` (as well as `.shard`, `.select`) doesn't create a new arrow table to save time and disk space. Instead, it uses an indices mapping on top of the table that locate which examples are part of train or test.",
"This is an optimization that we don't plan to \"fix\", so I'm closing this issue."
] | 2023-06-01T18:56:26 | 2023-06-02T16:13:31 | 2023-06-02T16:13:31 | NONE | null | null | null | ### Describe the bug
I load a Hugging Face `Dataset` and call `train_test_split`. I'm expecting the underlying table for each split to also be split, but it's not.
### Steps to reproduce the bug
![image](https://github.com/huggingface/datasets/assets/8068268/83e5768f-8b4c-422a-945c-832a7585afff)
### Expected behavior
The expected behavior is that `len(hf_dataset["train"].data)` matches the length of the train split, rather than the length of the entire unsplit dataset.
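
A short sketch of the indices-mapping behavior described in the maintainers' comments above, plus one way to materialize the split: `Dataset.flatten_indices()` is an existing `datasets` method (not mentioned in this thread) that rewrites the table at the cost of time and disk space. Row counts below are illustrative:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100))})
splits = ds.train_test_split(test_size=0.2)

# The split keeps the full table and only records which rows belong to it:
print(len(splits["train"]), splits["train"].data.num_rows)  # 80, 100

# flatten_indices() writes a new table containing only the split's rows,
# after which the table length matches the split.
train = splits["train"].flatten_indices()
print(len(train), train.data.num_rows)  # 80, 80
```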
### Environment info
datasets 2.10.1
python 3.10.11 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5922/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5921/comments | https://api.github.com/repos/huggingface/datasets/issues/5921/events | https://github.com/huggingface/datasets/pull/5921 | 1,736,563,023 | PR_kwDODunzps5R6j-y | 5,921 | Fix streaming parquet with image feature in schema | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007088 / 0.011353 (-0.004265) | 0.005216 / 0.011008 (-0.005793) | 0.097572 / 0.038508 (0.059064) | 0.036510 / 0.023109 (0.013401) | 0.316885 / 0.275898 (0.040987) | 0.348541 / 0.323480 (0.025061) | 0.006513 / 0.007986 (-0.001473) | 0.004579 / 0.004328 (0.000251) | 0.073779 / 0.004250 (0.069529) | 0.057500 / 0.037052 (0.020448) | 0.329840 / 0.258489 (0.071351) | 0.357530 / 0.293841 (0.063690) | 0.028515 / 0.128546 (-0.100031) | 0.009156 / 0.075646 (-0.066491) | 0.328340 / 0.419271 (-0.090932) | 0.068400 / 0.043533 (0.024867) | 0.313692 / 0.255139 (0.058553) | 0.329170 / 0.283200 (0.045971) | 0.111969 / 0.141683 (-0.029714) | 1.422096 / 1.452155 (-0.030059) | 1.550042 / 1.492716 (0.057326) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285113 / 0.018006 (0.267107) | 0.546788 / 0.000490 (0.546298) | 0.006992 / 0.000200 (0.006792) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026841 / 0.037411 (-0.010570) | 0.108413 / 0.014526 (0.093887) | 0.118375 / 0.176557 (-0.058181) | 0.174889 / 0.737135 (-0.562246) | 0.122781 / 0.296338 (-0.173558) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404187 / 0.215209 (0.188978) | 4.039673 / 2.077655 (1.962019) | 1.894616 / 1.504120 (0.390496) | 1.729182 / 1.541195 (0.187987) | 1.772917 / 1.468490 
(0.304427) | 0.524046 / 4.584777 (-4.060731) | 3.628111 / 3.745712 (-0.117601) | 1.866075 / 5.269862 (-3.403787) | 1.026435 / 4.565676 (-3.539242) | 0.065328 / 0.424275 (-0.358947) | 0.012717 / 0.007607 (0.005110) | 0.505821 / 0.226044 (0.279777) | 5.049518 / 2.268929 (2.780589) | 2.338486 / 55.444624 (-53.106139) | 2.002874 / 6.876477 (-4.873602) | 2.193049 / 2.142072 (0.050976) | 0.664638 / 4.805227 (-4.140589) | 0.151323 / 6.500664 (-6.349341) | 0.063774 / 0.075469 (-0.011695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.168168 / 1.841788 (-0.673620) | 15.289200 / 8.074308 (7.214891) | 13.614249 / 10.191392 (3.422857) | 0.167950 / 0.680424 (-0.512474) | 0.017522 / 0.534201 (-0.516679) | 0.393480 / 0.579283 (-0.185803) | 0.420549 / 0.434364 (-0.013815) | 0.461425 / 0.540337 (-0.078912) | 0.563583 / 1.386936 (-0.823353) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006859 / 0.011353 (-0.004493) | 0.004864 / 0.011008 (-0.006144) | 0.075084 / 0.038508 (0.036576) | 0.033989 / 0.023109 (0.010880) | 0.372512 / 0.275898 (0.096614) | 0.394725 / 0.323480 (0.071246) | 0.006382 / 0.007986 (-0.001604) | 0.004521 / 0.004328 (0.000193) | 0.076422 / 0.004250 (0.072172) | 0.055383 / 0.037052 (0.018331) | 0.400974 / 0.258489 (0.142485) | 0.411570 / 0.293841 (0.117729) | 0.028264 / 0.128546 (-0.100282) | 0.009123 / 0.075646 (-0.066523) | 0.081257 / 0.419271 (-0.338015) | 0.048147 / 0.043533 (0.004614) | 0.390735 / 0.255139 (0.135596) | 0.376426 / 0.283200 (0.093226) | 0.108164 / 0.141683 (-0.033518) | 1.429667 / 1.452155 (-0.022488) | 1.556291 / 1.492716 (0.063575) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289514 / 0.018006 (0.271508) | 0.532860 / 0.000490 (0.532370) | 0.003810 / 0.000200 (0.003611) | 0.000121 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031292 / 0.037411 (-0.006119) | 0.116530 / 0.014526 (0.102005) | 0.127624 / 0.176557 (-0.048932) | 0.178276 / 0.737135 (-0.558859) | 0.133742 / 0.296338 (-0.162597) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431505 / 0.215209 (0.216296) | 4.309206 / 2.077655 (2.231551) | 2.174779 / 1.504120 (0.670659) | 1.998122 / 1.541195 (0.456927) | 2.126478 / 1.468490 (0.657988) | 0.528971 / 4.584777 (-4.055806) | 3.797608 / 3.745712 (0.051895) | 1.876275 / 5.269862 (-3.393586) | 1.087458 / 4.565676 (-3.478218) | 0.066940 / 0.424275 (-0.357335) | 0.012432 / 0.007607 (0.004825) | 0.538346 / 0.226044 (0.312301) | 5.370968 / 2.268929 (3.102039) | 2.613718 / 55.444624 (-52.830906) | 2.246585 / 6.876477 (-4.629892) | 2.375695 / 2.142072 (0.233622) | 0.652227 / 4.805227 (-4.153001) | 0.143246 / 6.500664 (-6.357418) | 0.066163 / 0.075469 (-0.009306) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291263 / 1.841788 (-0.550524) | 16.532281 / 8.074308 (8.457973) | 15.038471 / 10.191392 (4.847079) | 0.168139 / 0.680424 (-0.512285) | 0.017724 / 0.534201 (-0.516477) | 0.391636 / 0.579283 (-0.187648) | 0.429690 / 0.434364 (-0.004674) | 0.474941 / 0.540337 (-0.065396) | 0.579461 / 1.386936 (-0.807475) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db690affa0373b08f7cef04e25fe2113ee831ef5 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006083 / 0.011353 (-0.005269) | 0.004085 / 0.011008 (-0.006923) | 0.098337 / 0.038508 (0.059829) | 0.027573 / 0.023109 (0.004464) | 0.305688 / 0.275898 (0.029790) | 0.341767 / 0.323480 (0.018287) | 0.005143 / 0.007986 (-0.002842) | 0.003396 / 0.004328 (-0.000932) | 0.076925 / 0.004250 (0.072674) | 0.041027 / 0.037052 (0.003975) | 0.307877 / 0.258489 (0.049388) | 0.346559 / 0.293841 (0.052718) | 0.025183 / 0.128546 (-0.103363) | 0.008575 / 0.075646 (-0.067071) | 0.319449 / 0.419271 (-0.099823) | 0.043378 / 0.043533 (-0.000154) | 0.304563 / 0.255139 (0.049424) | 0.332019 / 0.283200 (0.048819) | 0.087725 / 0.141683 (-0.053958) | 1.484904 / 1.452155 (0.032749) | 1.582780 / 1.492716 (0.090064) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197503 / 0.018006 (0.179497) | 0.410370 / 0.000490 (0.409880) | 0.003840 / 0.000200 (0.003640) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024179 / 0.037411 (-0.013232) | 0.098876 / 0.014526 (0.084350) | 0.106189 / 0.176557 (-0.070367) | 0.168964 / 0.737135 (-0.568171) | 0.109723 / 0.296338 (-0.186616) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429453 / 0.215209 (0.214244) | 4.295584 / 2.077655 (2.217929) | 2.014330 / 1.504120 (0.510210) | 1.841119 / 1.541195 (0.299924) | 1.928378 / 1.468490 
(0.459888) | 0.554571 / 4.584777 (-4.030206) | 3.431769 / 3.745712 (-0.313943) | 1.716204 / 5.269862 (-3.553658) | 0.995054 / 4.565676 (-3.570622) | 0.067374 / 0.424275 (-0.356902) | 0.012557 / 0.007607 (0.004950) | 0.533785 / 0.226044 (0.307740) | 5.363360 / 2.268929 (3.094431) | 2.535190 / 55.444624 (-52.909434) | 2.191646 / 6.876477 (-4.684831) | 2.400799 / 2.142072 (0.258727) | 0.663961 / 4.805227 (-4.141266) | 0.135992 / 6.500664 (-6.364672) | 0.067378 / 0.075469 (-0.008092) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235110 / 1.841788 (-0.606678) | 13.820695 / 8.074308 (5.746387) | 13.667202 / 10.191392 (3.475810) | 0.143025 / 0.680424 (-0.537399) | 0.016757 / 0.534201 (-0.517444) | 0.356262 / 0.579283 (-0.223021) | 0.401871 / 0.434364 (-0.032493) | 0.423928 / 0.540337 (-0.116410) | 0.514598 / 1.386936 (-0.872338) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006260 / 0.011353 (-0.005093) | 0.004159 / 0.011008 (-0.006850) | 0.076780 / 0.038508 (0.038272) | 0.027899 / 0.023109 (0.004789) | 0.412756 / 0.275898 (0.136858) | 0.455145 / 0.323480 (0.131665) | 0.005029 / 0.007986 (-0.002956) | 0.003482 / 0.004328 (-0.000847) | 0.076148 / 0.004250 (0.071898) | 0.038969 / 0.037052 (0.001917) | 0.429975 / 0.258489 (0.171486) | 0.465880 / 0.293841 (0.172039) | 0.025555 / 0.128546 (-0.102991) | 0.008612 / 0.075646 (-0.067034) | 0.082604 / 0.419271 (-0.336667) | 0.039690 / 0.043533 (-0.003842) | 0.403644 / 0.255139 (0.148505) | 0.440438 / 0.283200 (0.157238) | 0.090984 / 0.141683 (-0.050699) | 1.465915 / 1.452155 (0.013760) | 1.564227 / 1.492716 (0.071511) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010502 / 0.018006 (-0.007504) | 0.410573 / 0.000490 (0.410083) | 0.000384 / 0.000200 (0.000184) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025726 / 0.037411 (-0.011686) | 0.101760 / 0.014526 (0.087235) | 0.110102 / 0.176557 (-0.066454) | 0.161321 / 0.737135 (-0.575815) | 0.112507 / 0.296338 (-0.183832) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.469925 / 0.215209 (0.254716) | 4.718740 / 2.077655 (2.641085) | 2.466272 / 1.504120 (0.962152) | 2.267357 / 1.541195 (0.726162) | 2.331343 / 1.468490 (0.862853) | 0.553448 / 4.584777 (-4.031329) | 3.464228 / 3.745712 (-0.281484) | 3.060957 / 5.269862 (-2.208905) | 1.387261 / 4.565676 (-3.178415) | 0.067989 / 0.424275 (-0.356286) | 0.012349 / 0.007607 (0.004741) | 0.575046 / 0.226044 (0.349001) | 5.740322 / 2.268929 (3.471394) | 2.925666 / 55.444624 (-52.518958) | 2.606535 / 6.876477 (-4.269942) | 2.658144 / 2.142072 (0.516072) | 0.655157 / 4.805227 (-4.150071) | 0.138520 / 6.500664 (-6.362144) | 0.069442 / 0.075469 (-0.006027) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306523 / 1.841788 (-0.535265) | 14.400380 / 8.074308 (6.326072) | 14.231519 / 10.191392 (4.040127) | 0.146194 / 0.680424 (-0.534230) | 0.016632 / 0.534201 (-0.517569) | 0.361151 / 0.579283 (-0.218132) | 0.388838 / 0.434364 (-0.045526) | 0.419337 / 0.540337 (-0.121001) | 0.500483 / 1.386936 (-0.886453) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c0429e9806bf7065d03dc5858c039a30c5af716c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009430 / 0.011353 (-0.001923) | 0.006673 / 0.011008 (-0.004335) | 0.125151 / 0.038508 (0.086643) | 0.038258 / 0.023109 (0.015149) | 0.426383 / 0.275898 (0.150485) | 0.432327 / 0.323480 (0.108847) | 0.006964 / 0.007986 (-0.001022) | 0.005140 / 0.004328 (0.000811) | 0.100767 / 0.004250 (0.096517) | 0.058663 / 0.037052 (0.021610) | 0.424709 / 0.258489 (0.166220) | 0.453049 / 0.293841 (0.159208) | 0.051042 / 0.128546 (-0.077505) | 0.015291 / 0.075646 (-0.060355) | 0.456549 / 0.419271 (0.037278) | 0.067106 / 0.043533 (0.023573) | 0.408959 / 0.255139 (0.153820) | 0.445067 / 0.283200 (0.161867) | 0.115590 / 0.141683 (-0.026092) | 1.929439 / 1.452155 (0.477284) | 2.045709 / 1.492716 (0.552992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250726 / 0.018006 (0.232720) | 0.598976 / 0.000490 (0.598486) | 0.007542 / 0.000200 (0.007342) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030317 / 0.037411 (-0.007094) | 0.133177 / 0.014526 (0.118651) | 0.152761 / 0.176557 (-0.023795) | 0.233708 / 0.737135 (-0.503428) | 0.147303 / 0.296338 (-0.149036) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.633562 / 0.215209 (0.418353) | 6.235021 / 2.077655 (4.157366) | 2.652573 / 1.504120 (1.148454) | 2.223363 / 1.541195 (0.682168) | 2.231022 / 1.468490 
(0.762531) | 0.942218 / 4.584777 (-3.642559) | 6.068661 / 3.745712 (2.322949) | 2.778604 / 5.269862 (-2.491257) | 1.787939 / 4.565676 (-2.777737) | 0.117749 / 0.424275 (-0.306526) | 0.015613 / 0.007607 (0.008006) | 0.810222 / 0.226044 (0.584177) | 7.931509 / 2.268929 (5.662581) | 3.260679 / 55.444624 (-52.183945) | 2.609085 / 6.876477 (-4.267391) | 2.867838 / 2.142072 (0.725766) | 1.144672 / 4.805227 (-3.660555) | 0.224379 / 6.500664 (-6.276285) | 0.084490 / 0.075469 (0.009021) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.650608 / 1.841788 (-0.191179) | 18.919748 / 8.074308 (10.845440) | 20.163162 / 10.191392 (9.971770) | 0.229427 / 0.680424 (-0.450997) | 0.033090 / 0.534201 (-0.501111) | 0.535549 / 0.579283 (-0.043734) | 0.658629 / 0.434364 (0.224265) | 0.631526 / 0.540337 (0.091189) | 0.748701 / 1.386936 (-0.638235) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009157 / 0.011353 (-0.002196) | 0.006153 / 0.011008 (-0.004856) | 0.106294 / 0.038508 (0.067786) | 0.040947 / 0.023109 (0.017837) | 0.493242 / 0.275898 (0.217344) | 0.563525 / 0.323480 (0.240045) | 0.007256 / 0.007986 (-0.000730) | 0.006757 / 0.004328 (0.002429) | 0.105151 / 0.004250 (0.100901) | 0.056262 / 0.037052 (0.019209) | 0.573341 / 0.258489 (0.314852) | 0.591125 / 0.293841 (0.297284) | 0.047935 / 0.128546 (-0.080611) | 0.015385 / 0.075646 (-0.060262) | 0.119457 / 0.419271 (-0.299814) | 0.066510 / 0.043533 (0.022977) | 0.485622 / 0.255139 (0.230483) | 0.540929 / 0.283200 (0.257730) | 0.132619 / 0.141683 (-0.009064) | 1.916905 / 1.452155 (0.464750) | 2.152722 / 1.492716 (0.660006) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294823 / 0.018006 (0.276817) | 0.569371 / 0.000490 (0.568882) | 0.000642 / 0.000200 (0.000442) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034321 / 0.037411 (-0.003090) | 0.134165 / 0.014526 (0.119639) | 0.157871 / 0.176557 (-0.018685) | 0.210753 / 0.737135 (-0.526382) | 0.152961 / 0.296338 (-0.143377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.686810 / 0.215209 (0.471601) | 6.890432 / 2.077655 (4.812778) | 3.182875 / 1.504120 (1.678755) | 2.770836 / 1.541195 (1.229641) | 2.790785 / 1.468490 (1.322295) | 0.938145 / 4.584777 (-3.646632) | 5.861093 / 3.745712 (2.115381) | 2.719862 / 5.269862 (-2.550000) | 1.760834 / 4.565676 (-2.804842) | 0.111317 / 0.424275 (-0.312958) | 0.015722 / 0.007607 (0.008115) | 0.863032 / 0.226044 (0.636988) | 8.482433 / 2.268929 (6.213504) | 3.892621 / 55.444624 (-51.552003) | 3.207370 / 6.876477 (-3.669106) | 3.344412 / 2.142072 (1.202339) | 1.133903 / 4.805227 (-3.671324) | 0.223456 / 6.500664 (-6.277209) | 0.084335 / 0.075469 (0.008866) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.794116 / 1.841788 (-0.047672) | 19.077447 / 8.074308 (11.003139) | 23.102309 / 10.191392 (12.910917) | 0.268806 / 0.680424 (-0.411617) | 0.027709 / 0.534201 (-0.506492) | 0.540488 / 0.579283 (-0.038796) | 0.658478 / 0.434364 (0.224114) | 0.604769 / 0.540337 (0.064431) | 0.722768 / 1.386936 (-0.664168) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7e52021c66666e6953d5be0bd45a079e3ddb8c3f \"CML watermark\")\n"
] | 2023-06-01T15:23:10 | 2023-06-02T10:02:54 | 2023-06-02T09:53:11 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5921",
"html_url": "https://github.com/huggingface/datasets/pull/5921",
"diff_url": "https://github.com/huggingface/datasets/pull/5921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5921.patch",
"merged_at": "2023-06-02T09:53:11"
} | It was not reading the feature type from the parquet arrow schema | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5921/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5920/comments | https://api.github.com/repos/huggingface/datasets/issues/5920/events | https://github.com/huggingface/datasets/pull/5920 | 1,736,196,991 | PR_kwDODunzps5R5TRB | 5,920 | Optimize IterableDataset.from_file using ArrowExamplesIterable | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007439 / 0.011353 (-0.003914) | 0.004884 / 0.011008 (-0.006124) | 0.098750 / 0.038508 (0.060242) | 0.040723 / 0.023109 (0.017613) | 0.347242 / 0.275898 (0.071344) | 0.381202 / 0.323480 (0.057722) | 0.006814 / 0.007986 (-0.001171) | 0.004543 / 0.004328 (0.000215) | 0.075338 / 0.004250 (0.071088) | 0.058976 / 0.037052 (0.021924) | 0.344746 / 0.258489 (0.086257) | 0.406761 / 0.293841 (0.112920) | 0.028961 / 0.128546 (-0.099585) | 0.009531 / 0.075646 (-0.066115) | 0.337324 / 0.419271 (-0.081947) | 0.051071 / 0.043533 (0.007538) | 0.341251 / 0.255139 (0.086112) | 0.362773 / 0.283200 (0.079573) | 0.109423 / 0.141683 (-0.032260) | 1.457420 / 1.452155 (0.005266) | 1.588824 / 1.492716 (0.096108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288620 / 0.018006 (0.270614) | 0.568975 / 0.000490 (0.568485) | 0.003350 / 0.000200 (0.003150) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028732 / 0.037411 (-0.008680) | 0.117820 / 0.014526 (0.103294) | 0.120180 / 0.176557 (-0.056376) | 0.178736 / 0.737135 (-0.558399) | 0.126399 / 0.296338 (-0.169939) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428357 / 0.215209 (0.213148) | 4.251989 / 2.077655 (2.174334) | 2.005239 / 1.504120 (0.501119) | 1.784009 / 1.541195 (0.242815) | 1.883763 / 1.468490 
(0.415272) | 0.555429 / 4.584777 (-4.029348) | 3.868146 / 3.745712 (0.122434) | 2.081896 / 5.269862 (-3.187965) | 1.126047 / 4.565676 (-3.439629) | 0.069496 / 0.424275 (-0.354779) | 0.012926 / 0.007607 (0.005318) | 0.536989 / 0.226044 (0.310944) | 5.256052 / 2.268929 (2.987124) | 2.526802 / 55.444624 (-52.917822) | 2.233346 / 6.876477 (-4.643131) | 2.389063 / 2.142072 (0.246990) | 0.677107 / 4.805227 (-4.128120) | 0.147212 / 6.500664 (-6.353452) | 0.067061 / 0.075469 (-0.008408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210651 / 1.841788 (-0.631137) | 17.236898 / 8.074308 (9.162589) | 14.427301 / 10.191392 (4.235909) | 0.207194 / 0.680424 (-0.473229) | 0.018079 / 0.534201 (-0.516122) | 0.398355 / 0.579283 (-0.180929) | 0.462453 / 0.434364 (0.028089) | 0.484544 / 0.540337 (-0.055794) | 0.590119 / 1.386936 (-0.796817) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007392 / 0.011353 (-0.003961) | 0.005614 / 0.011008 (-0.005394) | 0.075587 / 0.038508 (0.037079) | 0.040429 / 0.023109 (0.017320) | 0.389901 / 0.275898 (0.114003) | 0.429466 / 0.323480 (0.105986) | 0.006790 / 0.007986 (-0.001196) | 0.006627 / 0.004328 (0.002299) | 0.075227 / 0.004250 (0.070976) | 0.060298 / 0.037052 (0.023246) | 0.391905 / 0.258489 (0.133416) | 0.449385 / 0.293841 (0.155544) | 0.028794 / 0.128546 (-0.099753) | 0.009461 / 0.075646 (-0.066185) | 0.083386 / 0.419271 (-0.335886) | 0.057968 / 0.043533 (0.014435) | 0.377327 / 0.255139 (0.122188) | 0.402825 / 0.283200 (0.119626) | 0.125477 / 0.141683 (-0.016206) | 1.462986 / 1.452155 (0.010832) | 1.595959 / 1.492716 (0.103243) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.304179 / 0.018006 (0.286173) | 0.543113 / 0.000490 (0.542623) | 0.004136 / 0.000200 (0.003936) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032617 / 0.037411 (-0.004794) | 0.123596 / 0.014526 (0.109070) | 0.128714 / 0.176557 (-0.047842) | 0.176344 / 0.737135 (-0.560792) | 0.132525 / 0.296338 (-0.163813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446041 / 0.215209 (0.230832) | 4.438799 / 2.077655 (2.361144) | 2.210815 / 1.504120 (0.706695) | 2.052025 / 1.541195 (0.510830) | 2.204687 / 1.468490 (0.736197) | 0.535219 / 4.584777 (-4.049558) | 3.858407 / 3.745712 (0.112695) | 3.826043 / 5.269862 (-1.443819) | 1.334149 / 4.565676 (-3.231527) | 0.067454 / 0.424275 (-0.356821) | 0.012566 / 0.007607 (0.004958) | 0.551597 / 0.226044 (0.325553) | 5.520054 / 2.268929 (3.251126) | 2.817976 / 55.444624 (-52.626649) | 2.528074 / 6.876477 (-4.348403) | 2.622391 / 2.142072 (0.480319) | 0.657632 / 4.805227 (-4.147595) | 0.147039 / 6.500664 (-6.353625) | 0.069603 / 0.075469 (-0.005866) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300140 / 1.841788 (-0.541648) | 17.303907 / 8.074308 (9.229599) | 15.657887 / 10.191392 (5.466495) | 0.168991 / 0.680424 (-0.511433) | 0.021332 / 0.534201 (-0.512869) | 0.487261 / 0.579283 (-0.092022) | 0.450073 / 0.434364 (0.015709) | 0.465865 / 0.540337 (-0.074473) | 0.565501 / 1.386936 (-0.821435) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f1723ab75a6b3a5e156ea0a41651e80e91fa9cc6 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006536 / 0.011353 (-0.004817) | 0.004254 / 0.011008 (-0.006755) | 0.095387 / 0.038508 (0.056878) | 0.032885 / 0.023109 (0.009776) | 0.298580 / 0.275898 (0.022682) | 0.319771 / 0.323480 (-0.003709) | 0.005510 / 0.007986 (-0.002476) | 0.003891 / 0.004328 (-0.000437) | 0.073763 / 0.004250 (0.069513) | 0.041625 / 0.037052 (0.004573) | 0.294896 / 0.258489 (0.036407) | 0.341308 / 0.293841 (0.047467) | 0.027898 / 0.128546 (-0.100648) | 0.008837 / 0.075646 (-0.066809) | 0.325055 / 0.419271 (-0.094216) | 0.050652 / 0.043533 (0.007119) | 0.298756 / 0.255139 (0.043617) | 0.318261 / 0.283200 (0.035061) | 0.098927 / 0.141683 (-0.042756) | 1.450356 / 1.452155 (-0.001798) | 1.508034 / 1.492716 (0.015318) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209009 / 0.018006 (0.191003) | 0.439154 / 0.000490 (0.438665) | 0.004299 / 0.000200 (0.004099) | 0.000142 / 0.000054 (0.000087) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025938 / 0.037411 (-0.011473) | 0.105954 / 0.014526 (0.091429) | 0.113858 / 0.176557 (-0.062698) | 0.168887 / 0.737135 (-0.568249) | 0.121292 / 0.296338 (-0.175046) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402050 / 0.215209 (0.186841) | 4.002310 / 2.077655 (1.924655) | 1.816190 / 1.504120 (0.312070) | 1.634404 / 1.541195 (0.093209) | 1.713632 / 1.468490 
(0.245142) | 0.519633 / 4.584777 (-4.065144) | 3.740291 / 3.745712 (-0.005421) | 1.787602 / 5.269862 (-3.482260) | 1.038844 / 4.565676 (-3.526833) | 0.064973 / 0.424275 (-0.359302) | 0.012475 / 0.007607 (0.004868) | 0.498152 / 0.226044 (0.272108) | 4.970941 / 2.268929 (2.702013) | 2.287429 / 55.444624 (-53.157195) | 1.998050 / 6.876477 (-4.878427) | 2.091903 / 2.142072 (-0.050169) | 0.630363 / 4.805227 (-4.174864) | 0.138623 / 6.500664 (-6.362041) | 0.063293 / 0.075469 (-0.012176) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.201802 / 1.841788 (-0.639986) | 14.073836 / 8.074308 (5.999528) | 12.968665 / 10.191392 (2.777273) | 0.144653 / 0.680424 (-0.535771) | 0.017613 / 0.534201 (-0.516588) | 0.392067 / 0.579283 (-0.187216) | 0.416955 / 0.434364 (-0.017409) | 0.471492 / 0.540337 (-0.068845) | 0.554576 / 1.386936 (-0.832360) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006408 / 0.011353 (-0.004945) | 0.004452 / 0.011008 (-0.006556) | 0.073648 / 0.038508 (0.035140) | 0.032536 / 0.023109 (0.009427) | 0.358546 / 0.275898 (0.082648) | 0.387330 / 0.323480 (0.063850) | 0.005542 / 0.007986 (-0.002444) | 0.003882 / 0.004328 (-0.000447) | 0.073867 / 0.004250 (0.069617) | 0.044798 / 0.037052 (0.007746) | 0.362303 / 0.258489 (0.103814) | 0.400496 / 0.293841 (0.106655) | 0.028244 / 0.128546 (-0.100302) | 0.008931 / 0.075646 (-0.066715) | 0.080617 / 0.419271 (-0.338654) | 0.046575 / 0.043533 (0.003043) | 0.364283 / 0.255139 (0.109145) | 0.373215 / 0.283200 (0.090015) | 0.100080 / 0.141683 (-0.041603) | 1.430047 / 1.452155 (-0.022108) | 1.530957 / 1.492716 (0.038240) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221061 / 0.018006 (0.203055) | 0.441753 / 0.000490 (0.441263) | 0.003626 / 0.000200 (0.003426) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029509 / 0.037411 (-0.007902) | 0.109578 / 0.014526 (0.095053) | 0.121009 / 0.176557 (-0.055548) | 0.168950 / 0.737135 (-0.568185) | 0.124475 / 0.296338 (-0.171864) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431355 / 0.215209 (0.216146) | 4.295507 / 2.077655 (2.217852) | 2.167514 / 1.504120 (0.663394) | 2.013073 / 1.541195 (0.471879) | 1.973730 / 1.468490 (0.505240) | 0.529778 / 4.584777 (-4.054999) | 3.794702 / 3.745712 (0.048989) | 3.062940 / 5.269862 (-2.206922) | 1.503426 / 4.565676 (-3.062251) | 0.066692 / 0.424275 (-0.357583) | 0.011682 / 0.007607 (0.004075) | 0.539311 / 0.226044 (0.313266) | 5.406342 / 2.268929 (3.137414) | 2.652709 / 55.444624 (-52.791916) | 2.260066 / 6.876477 (-4.616410) | 2.295752 / 2.142072 (0.153680) | 0.647199 / 4.805227 (-4.158029) | 0.142981 / 6.500664 (-6.357683) | 0.065082 / 0.075469 (-0.010387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279788 / 1.841788 (-0.562000) | 14.982845 / 8.074308 (6.908536) | 14.277166 / 10.191392 (4.085774) | 0.145082 / 0.680424 (-0.535342) | 0.017885 / 0.534201 (-0.516316) | 0.392071 / 0.579283 (-0.187212) | 0.420425 / 0.434364 (-0.013939) | 0.461244 / 0.540337 (-0.079093) | 0.559956 / 1.386936 (-0.826980) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#651d96c1c4083a206c65f11602712d75f1f0453d \"CML watermark\")\n"
] | 2023-06-01T12:14:36 | 2023-06-01T12:42:10 | 2023-06-01T12:35:14 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5920",
"html_url": "https://github.com/huggingface/datasets/pull/5920",
"diff_url": "https://github.com/huggingface/datasets/pull/5920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5920.patch",
"merged_at": "2023-06-01T12:35:14"
} | following https://github.com/huggingface/datasets/pull/5893 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5920/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5919/comments | https://api.github.com/repos/huggingface/datasets/issues/5919/events | https://github.com/huggingface/datasets/pull/5919 | 1,735,519,227 | PR_kwDODunzps5R2_EK | 5,919 | add support for storage_options for load_dataset API | {
"login": "janineguo",
"id": 59083384,
"node_id": "MDQ6VXNlcjU5MDgzMzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/59083384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/janineguo",
"html_url": "https://github.com/janineguo",
"followers_url": "https://api.github.com/users/janineguo/followers",
"following_url": "https://api.github.com/users/janineguo/following{/other_user}",
"gists_url": "https://api.github.com/users/janineguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/janineguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janineguo/subscriptions",
"organizations_url": "https://api.github.com/users/janineguo/orgs",
"repos_url": "https://api.github.com/users/janineguo/repos",
"events_url": "https://api.github.com/users/janineguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/janineguo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"hi @lhoestq,\r\nI saw some errors in my test and found all the failed reasons are `FileNotFoundError` about `test_load_streaming_private_dataset_with_zipped_data` and `test_load_dataset_private_zipped_images` in `test_load.py `, I run pytest on my own Wins and Ubuntu system all the test in `test_load.py ` are succeed. could you help me to check the test environment of our server?\r\n\r\n`2023-06-08T16:50:48.0828281Z FAILED tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data - FileNotFoundError: Couldn't find a dataset script at D:\\a\\datasets\\datasets\\__DUMMY_TRANSFORMERS_USER__\\repo_zipped_txt_data-16862429577813\\repo_zipped_txt_data-16862429577813.py or any data file in the same directory. Couldn't find '__DUMMY_TRANSFORMERS_USER__/repo_zipped_txt_data-16862429577813' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in __DUMMY_TRANSFORMERS_USER__/repo_zipped_txt_data-16862429577813`\r\n`2023-06-08T16:50:48.0830602Z FAILED tests/test_load.py::test_load_dataset_private_zipped_images[False-False] - FileNotFoundError: Couldn't find a dataset script at D:\\a\\datasets\\datasets\\__DUMMY_TRANSFORMERS_USER__\\repo_zipped_img_data-16862429594168\\repo_zipped_img_data-16862429594168.py or any data file in the same directory. Couldn't find '__DUMMY_TRANSFORMERS_USER__/repo_zipped_img_data-16862429594168' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in __DUMMY_TRANSFORMERS_USER__/repo_zipped_img_data-16862429594168`",
"I just re-ran the CI, hopefully it's fixed",
"_The documentation is not available anymore as the PR was closed or merged._",
"> I just re-ran the CI, hopefully it's fixed\r\n\r\nI just checked, still has the same error, maybe need someone to fix it",
"I think the issue comes from this PR somehow, since the CI fail is related to loading private repositories and this PR touches authentication related code. Let me check what's the issue, and I'll also review your PR later (sorry I don't have a ton of bandwidth atm)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5919). All of your documentation changes will be reflected on that endpoint.",
"@lhoestq Hi sorry to bother you, the CI check_code_quality failed and it said `would reformat /home/runner/work/datasets/datasets/src/datasets/download/streaming_download_manager.py` but I cant see any changes when I run `python3 -m black --check tests src benchmarks metrics` and `python3 -m ruff tests src benchmarks metrics` on my own computer, is there any version requirements on the tools? I didn't specific the version.",
"I just ran `make style` and pushed the changes.\r\nYou can install the right versions of black and ruff using `pip install -e .[quality]` ;)",
"I am working on this issue right now https://github.com/huggingface/datasets/issues/6017 which is strongly connected to your PR, and I might end up cherry-picking some of your commits (keeping attribution of course !). Would you be ok with that ?",
"it's totally ok for me, I just wish the S3 File system could support streaming too.\r\n",
"\r\nI already adjust the code and test on my local Mac, you can check it now, and you can make any changes to it.",
"Closing this PR in favor of https://github.com/huggingface/datasets/pull/6028 which includes your contribution :)"
] | 2023-06-01T05:52:32 | 2023-07-18T06:14:32 | 2023-07-17T17:02:00 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5919",
"html_url": "https://github.com/huggingface/datasets/pull/5919",
"diff_url": "https://github.com/huggingface/datasets/pull/5919.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5919.patch",
"merged_at": null
} | To solve the issue in #5880:
1. add S3 support in the link-check step; previously we only checked `http` and `https`;
2. change the `use_auth_token` parameter to `download_config` to support both `storage_options` and `use_auth_token` when handling (listing, opening, reading, etc.) the remote files;
3. consolidate the duplicated check code to make adding or removing other sources easier (a usage sketch follows this list).
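A minimal usage sketch of what this enables. This is hedged, not the merged API: the argument name `storage_options`, the credential keys, and the S3 bucket/path below are assumptions for illustration only.

```py
from datasets import load_dataset

# Hypothetical fsspec-style credentials forwarded to the S3 filesystem (s3fs kwargs).
storage_options = {"key": "<aws-access-key-id>", "secret": "<aws-secret-access-key>"}

dataset = load_dataset(
    "json",
    data_files="s3://my-bucket/train.jsonl",  # assumed bucket and file
    storage_options=storage_options,
)
```
This mirrors how `fsspec`-aware libraries such as `pandas` expose `storage_options`. | {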
"url": "https://api.github.com/repos/huggingface/datasets/issues/5919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5919/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5918/comments | https://api.github.com/repos/huggingface/datasets/issues/5918/events | https://github.com/huggingface/datasets/issues/5918 | 1,735,313,549 | I_kwDODunzps5nbsiN | 5,918 | File not found for audio dataset | {
"login": "RobertBaruch",
"id": 1783950,
"node_id": "MDQ6VXNlcjE3ODM5NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1783950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RobertBaruch",
"html_url": "https://github.com/RobertBaruch",
"followers_url": "https://api.github.com/users/RobertBaruch/followers",
"following_url": "https://api.github.com/users/RobertBaruch/following{/other_user}",
"gists_url": "https://api.github.com/users/RobertBaruch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RobertBaruch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RobertBaruch/subscriptions",
"organizations_url": "https://api.github.com/users/RobertBaruch/orgs",
"repos_url": "https://api.github.com/users/RobertBaruch/repos",
"events_url": "https://api.github.com/users/RobertBaruch/events{/privacy}",
"received_events_url": "https://api.github.com/users/RobertBaruch/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"load_dataset () did not work for loading local files either "
] | 2023-06-01T02:15:29 | 2023-06-11T06:02:25 | null | NONE | null | null | null | ### Describe the bug
After loading an audio dataset and looking at a sample entry, the file referenced by the `path` element, which is supposed to be the path to the audio file, doesn't actually exist.
### Steps to reproduce the bug
Run bug.py:
```py
import os.path

from datasets import load_dataset


def run() -> None:
    cv13 = load_dataset(
        "mozilla-foundation/common_voice_13_0",
        "hi",
        split="train",
    )
    print(cv13[0])
    audio_file = cv13[0]["path"]
    if not os.path.exists(audio_file):
        raise ValueError(f'File {audio_file} does not exist.')

if __name__ == "__main__":
    run()
```
The result (on my machine):
```py
{'client_id': '0f018a99663f33afbb7d38aee281fb1afcfd07f9e7acd00383f604e1e17c38d6ed8adf1bd2ccbf927a52c5adefb8ac4b158ce27a7c2ed9581e71202eb302dfb3', 'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\common_voice_hi_26008353.mp3', 'audio': {'path': 'C:\\Users\\rober\\.cache\\huggingface\\datasets\\downloads\\extracted\\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\\common_voice_hi_26008353.mp3', 'array': array([ 6.46234854e-26, -1.35709319e-25, -8.07793567e-26, ...,
1.06425944e-07, 4.46417090e-08, 2.61451660e-09]), 'sampling_rate': 48000}, 'sentence': 'हमने उसका जन्मदिन मनाया।', 'up_votes': 2, 'down_votes': 0, 'age': '', 'gender': '', 'accent': '', 'locale': 'hi', 'segment': '', 'variant': ''}
```
```txt
Traceback (most recent call last):
File "F:\eo-reco\bug.py", line 18, in <module>
run()
File "F:\eo-reco\bug.py", line 15, in run
raise ValueError(f'File {audio_file} does not exist.')
ValueError: File C:\Users\rober\.cache\huggingface\datasets\downloads\extracted\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\common_voice_hi_26008353.mp3 does not exist.
```
### Expected behavior
The `path` element points to the correct file, which happens to be:
```
C:\Users\rober\.cache\huggingface\datasets\downloads\extracted\8d1479bc09b4609bc2675bd02d6869a4d5e09f7e6616f540bd55eacef46c6e2b\hi_train_0\common_voice_hi_26008353.mp3
```
That is, there's an extra directory `hi_train_0` that is not in the `path` element.
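Until this is fixed, a possible workaround is a sketch like the one below (my own code; `resolve_audio_path` is a made-up helper, not part of `datasets`): search the extraction directory recursively for the reported filename.

```py
import glob
import os.path


def resolve_audio_path(reported_path: str) -> str:
    """Fall back to searching the extraction directory for the file."""
    if os.path.exists(reported_path):
        return reported_path
    root = os.path.dirname(reported_path)   # the extraction directory
    name = os.path.basename(reported_path)  # e.g. common_voice_hi_26008353.mp3
    matches = glob.glob(os.path.join(root, "**", name), recursive=True)
    if not matches:
        raise FileNotFoundError(f"{name} not found anywhere under {root}")
    return matches[0]  # picks up ...\hi_train_0\common_voice_hi_26008353.mp3
```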
### Environment info
- `datasets` version: 2.12.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.11.3
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5918/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5918/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5917/comments | https://api.github.com/repos/huggingface/datasets/issues/5917/events | https://github.com/huggingface/datasets/pull/5917 | 1,733,661,588 | PR_kwDODunzps5RwoRU | 5,917 | Refactor extensions | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008358 / 0.011353 (-0.002995) | 0.005673 / 0.011008 (-0.005335) | 0.124034 / 0.038508 (0.085526) | 0.037550 / 0.023109 (0.014441) | 0.331301 / 0.275898 (0.055403) | 0.383542 / 0.323480 (0.060062) | 0.006940 / 0.007986 (-0.001046) | 0.005959 / 0.004328 (0.001631) | 0.084670 / 0.004250 (0.080419) | 0.054214 / 0.037052 (0.017162) | 0.359897 / 0.258489 (0.101408) | 0.383260 / 0.293841 (0.089419) | 0.047642 / 0.128546 (-0.080904) | 0.013902 / 0.075646 (-0.061744) | 0.380232 / 0.419271 (-0.039040) | 0.077790 / 0.043533 (0.034257) | 0.376648 / 0.255139 (0.121509) | 0.387536 / 0.283200 (0.104336) | 0.104644 / 0.141683 (-0.037038) | 1.618560 / 1.452155 (0.166406) | 1.742569 / 1.492716 (0.249853) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257218 / 0.018006 (0.239212) | 0.636801 / 0.000490 (0.636311) | 0.000634 / 0.000200 (0.000434) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037874 / 0.037411 (0.000462) | 0.107454 / 0.014526 (0.092928) | 0.117855 / 0.176557 (-0.058702) | 0.204067 / 0.737135 (-0.533068) | 0.134029 / 0.296338 (-0.162310) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.583657 / 0.215209 (0.368447) | 5.761289 / 2.077655 (3.683635) | 2.280201 / 1.504120 (0.776081) | 2.033442 / 1.541195 (0.492247) | 2.035343 / 1.468490 
(0.566853) | 0.868122 / 4.584777 (-3.716655) | 5.352591 / 3.745712 (1.606879) | 2.432814 / 5.269862 (-2.837047) | 1.560765 / 4.565676 (-3.004911) | 0.098793 / 0.424275 (-0.325482) | 0.017327 / 0.007607 (0.009720) | 0.734676 / 0.226044 (0.508631) | 7.070318 / 2.268929 (4.801390) | 2.972701 / 55.444624 (-52.471924) | 2.442189 / 6.876477 (-4.434288) | 2.604379 / 2.142072 (0.462307) | 1.028853 / 4.805227 (-3.776374) | 0.210390 / 6.500664 (-6.290274) | 0.069329 / 0.075469 (-0.006140) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.469586 / 1.841788 (-0.372202) | 16.570305 / 8.074308 (8.495997) | 19.187845 / 10.191392 (8.996453) | 0.219162 / 0.680424 (-0.461262) | 0.026356 / 0.534201 (-0.507845) | 0.447370 / 0.579283 (-0.131913) | 0.555893 / 0.434364 (0.121529) | 0.574958 / 0.540337 (0.034621) | 0.639166 / 1.386936 (-0.747770) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008166 / 0.011353 (-0.003187) | 0.005577 / 0.011008 (-0.005431) | 0.103578 / 0.038508 (0.065070) | 0.040563 / 0.023109 (0.017454) | 0.441996 / 0.275898 (0.166098) | 0.483594 / 0.323480 (0.160114) | 0.007329 / 0.007986 (-0.000657) | 0.004546 / 0.004328 (0.000218) | 0.090471 / 0.004250 (0.086220) | 0.052740 / 0.037052 (0.015688) | 0.442197 / 0.258489 (0.183708) | 0.524310 / 0.293841 (0.230469) | 0.042487 / 0.128546 (-0.086060) | 0.012917 / 0.075646 (-0.062730) | 0.103992 / 0.419271 (-0.315280) | 0.060570 / 0.043533 (0.017037) | 0.441956 / 0.255139 (0.186817) | 0.477084 / 0.283200 (0.193885) | 0.103815 / 0.141683 (-0.037868) | 1.696963 / 1.452155 (0.244809) | 1.747849 / 1.492716 (0.255132) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292465 / 0.018006 (0.274458) | 0.571518 / 0.000490 (0.571028) | 0.000476 / 0.000200 (0.000276) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028697 / 0.037411 (-0.008714) | 0.111671 / 0.014526 (0.097145) | 0.138826 / 0.176557 (-0.037731) | 0.189697 / 0.737135 (-0.547439) | 0.125454 / 0.296338 (-0.170884) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.619273 / 0.215209 (0.404064) | 6.138669 / 2.077655 (4.061015) | 2.558622 / 1.504120 (1.054502) | 2.201550 / 1.541195 (0.660356) | 2.279034 / 1.468490 (0.810544) | 0.850752 / 4.584777 (-3.734025) | 5.438185 / 3.745712 (1.692473) | 2.529343 / 5.269862 (-2.740518) | 1.572178 / 4.565676 (-2.993499) | 0.100768 / 0.424275 (-0.323507) | 0.013902 / 0.007607 (0.006295) | 0.726660 / 0.226044 (0.500616) | 7.794918 / 2.268929 (5.525990) | 3.311695 / 55.444624 (-52.132930) | 2.729167 / 6.876477 (-4.147310) | 2.630984 / 2.142072 (0.488911) | 1.018534 / 4.805227 (-3.786693) | 0.194602 / 6.500664 (-6.306062) | 0.070876 / 0.075469 (-0.004593) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.573005 / 1.841788 (-0.268783) | 17.042710 / 8.074308 (8.968401) | 19.615320 / 10.191392 (9.423928) | 0.229405 / 0.680424 (-0.451019) | 0.027560 / 0.534201 (-0.506641) | 0.447984 / 0.579283 (-0.131299) | 0.598392 / 0.434364 (0.164028) | 0.571769 / 0.540337 (0.031431) | 0.653025 / 1.386936 (-0.733911) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9dca2ff89a8589595313e9535d16597ce10e3700 \"CML watermark\")\n"
] | 2023-05-31T08:33:02 | 2023-05-31T13:34:35 | 2023-05-31T13:25:57 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5917",
"html_url": "https://github.com/huggingface/datasets/pull/5917",
"diff_url": "https://github.com/huggingface/datasets/pull/5917.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5917.patch",
"merged_at": "2023-05-31T13:25:57"
} | Related to:
- #5850 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5917/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5916/comments | https://api.github.com/repos/huggingface/datasets/issues/5916/events | https://github.com/huggingface/datasets/pull/5916 | 1,732,456,392 | PR_kwDODunzps5RskTb | 5,916 | Unpin responses | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006113 / 0.011353 (-0.005239) | 0.004195 / 0.011008 (-0.006813) | 0.098103 / 0.038508 (0.059595) | 0.027970 / 0.023109 (0.004860) | 0.300992 / 0.275898 (0.025094) | 0.335402 / 0.323480 (0.011922) | 0.005079 / 0.007986 (-0.002906) | 0.003516 / 0.004328 (-0.000813) | 0.077311 / 0.004250 (0.073061) | 0.037863 / 0.037052 (0.000810) | 0.302638 / 0.258489 (0.044149) | 0.346554 / 0.293841 (0.052713) | 0.025218 / 0.128546 (-0.103328) | 0.008630 / 0.075646 (-0.067017) | 0.319748 / 0.419271 (-0.099523) | 0.049182 / 0.043533 (0.005650) | 0.306233 / 0.255139 (0.051094) | 0.331040 / 0.283200 (0.047840) | 0.089203 / 0.141683 (-0.052480) | 1.496104 / 1.452155 (0.043949) | 1.567878 / 1.492716 (0.075162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215774 / 0.018006 (0.197768) | 0.436810 / 0.000490 (0.436320) | 0.000307 / 0.000200 (0.000107) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024102 / 0.037411 (-0.013310) | 0.095459 / 0.014526 (0.080933) | 0.106564 / 0.176557 (-0.069992) | 0.169894 / 0.737135 (-0.567241) | 0.109152 / 0.296338 (-0.187186) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429066 / 0.215209 (0.213857) | 4.297385 / 2.077655 (2.219730) | 2.054854 / 1.504120 (0.550734) | 1.846844 / 1.541195 (0.305649) | 1.840807 / 1.468490 
(0.372317) | 0.553193 / 4.584777 (-4.031584) | 3.366788 / 3.745712 (-0.378924) | 1.727337 / 5.269862 (-3.542525) | 0.994357 / 4.565676 (-3.571319) | 0.067790 / 0.424275 (-0.356485) | 0.012002 / 0.007607 (0.004395) | 0.533335 / 0.226044 (0.307291) | 5.341341 / 2.268929 (3.072412) | 2.543581 / 55.444624 (-52.901043) | 2.220374 / 6.876477 (-4.656103) | 2.321656 / 2.142072 (0.179583) | 0.654408 / 4.805227 (-4.150819) | 0.134693 / 6.500664 (-6.365971) | 0.066926 / 0.075469 (-0.008544) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209463 / 1.841788 (-0.632325) | 13.568221 / 8.074308 (5.493913) | 13.965418 / 10.191392 (3.774026) | 0.145049 / 0.680424 (-0.535375) | 0.016936 / 0.534201 (-0.517265) | 0.371587 / 0.579283 (-0.207696) | 0.386363 / 0.434364 (-0.048001) | 0.437137 / 0.540337 (-0.103201) | 0.514779 / 1.386936 (-0.872157) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006245 / 0.011353 (-0.005108) | 0.004232 / 0.011008 (-0.006776) | 0.075682 / 0.038508 (0.037174) | 0.027858 / 0.023109 (0.004749) | 0.425325 / 0.275898 (0.149427) | 0.466732 / 0.323480 (0.143253) | 0.005240 / 0.007986 (-0.002745) | 0.003506 / 0.004328 (-0.000823) | 0.075294 / 0.004250 (0.071044) | 0.041677 / 0.037052 (0.004624) | 0.426552 / 0.258489 (0.168063) | 0.469452 / 0.293841 (0.175611) | 0.025443 / 0.128546 (-0.103104) | 0.008526 / 0.075646 (-0.067120) | 0.082190 / 0.419271 (-0.337081) | 0.040906 / 0.043533 (-0.002626) | 0.428406 / 0.255139 (0.173267) | 0.446795 / 0.283200 (0.163595) | 0.093837 / 0.141683 (-0.047846) | 1.518639 / 1.452155 (0.066484) | 1.620214 / 1.492716 (0.127498) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223259 / 0.018006 (0.205253) | 0.425077 / 0.000490 (0.424588) | 0.001980 / 0.000200 (0.001780) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025813 / 0.037411 (-0.011599) | 0.103062 / 0.014526 (0.088536) | 0.108958 / 0.176557 (-0.067598) | 0.161591 / 0.737135 (-0.575544) | 0.112130 / 0.296338 (-0.184209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472843 / 0.215209 (0.257634) | 4.713281 / 2.077655 (2.635626) | 2.458216 / 1.504120 (0.954096) | 2.272467 / 1.541195 (0.731273) | 2.324456 / 1.468490 (0.855965) | 0.554686 / 4.584777 (-4.030091) | 3.445079 / 3.745712 (-0.300634) | 3.451896 / 5.269862 (-1.817966) | 1.431065 / 4.565676 (-3.134612) | 0.067868 / 0.424275 (-0.356407) | 0.012093 / 0.007607 (0.004486) | 0.573571 / 0.226044 (0.347526) | 5.820452 / 2.268929 (3.551523) | 2.934858 / 55.444624 (-52.509767) | 2.602719 / 6.876477 (-4.273758) | 2.645999 / 2.142072 (0.503927) | 0.660688 / 4.805227 (-4.144540) | 0.137490 / 6.500664 (-6.363174) | 0.068311 / 0.075469 (-0.007158) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.321709 / 1.841788 (-0.520079) | 14.592346 / 8.074308 (6.518038) | 14.520748 / 10.191392 (4.329356) | 0.132689 / 0.680424 (-0.547735) | 0.016422 / 0.534201 (-0.517779) | 0.370071 / 0.579283 (-0.209212) | 0.397091 / 0.434364 (-0.037273) | 0.431979 / 0.540337 (-0.108358) | 0.509965 / 1.386936 (-0.876971) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8bcd061ab2082a0862f30329bc52f6e0d321805c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006182 / 0.011353 (-0.005171) | 0.004153 / 0.011008 (-0.006855) | 0.095715 / 0.038508 (0.057207) | 0.032457 / 0.023109 (0.009347) | 0.314961 / 0.275898 (0.039063) | 0.353696 / 0.323480 (0.030216) | 0.005256 / 0.007986 (-0.002729) | 0.004870 / 0.004328 (0.000541) | 0.072442 / 0.004250 (0.068192) | 0.046102 / 0.037052 (0.009050) | 0.324410 / 0.258489 (0.065921) | 0.366861 / 0.293841 (0.073020) | 0.027088 / 0.128546 (-0.101458) | 0.008572 / 0.075646 (-0.067075) | 0.325988 / 0.419271 (-0.093284) | 0.049494 / 0.043533 (0.005961) | 0.311221 / 0.255139 (0.056082) | 0.359720 / 0.283200 (0.076521) | 0.095101 / 0.141683 (-0.046581) | 1.472821 / 1.452155 (0.020667) | 1.516157 / 1.492716 (0.023441) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210456 / 0.018006 (0.192450) | 0.439440 / 0.000490 (0.438950) | 0.003764 / 0.000200 (0.003564) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024076 / 0.037411 (-0.013335) | 0.104886 / 0.014526 (0.090360) | 0.114164 / 0.176557 (-0.062393) | 0.167289 / 0.737135 (-0.569847) | 0.116457 / 0.296338 (-0.179882) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400039 / 0.215209 (0.184830) | 3.973243 / 2.077655 (1.895588) | 1.801991 / 1.504120 (0.297871) | 1.592017 / 1.541195 (0.050822) | 1.612564 / 1.468490 
(0.144074) | 0.527475 / 4.584777 (-4.057302) | 3.676246 / 3.745712 (-0.069466) | 1.806423 / 5.269862 (-3.463438) | 1.176921 / 4.565676 (-3.388756) | 0.065902 / 0.424275 (-0.358373) | 0.012245 / 0.007607 (0.004638) | 0.490883 / 0.226044 (0.264838) | 4.905270 / 2.268929 (2.636341) | 2.218694 / 55.444624 (-53.225930) | 1.903074 / 6.876477 (-4.973403) | 1.979505 / 2.142072 (-0.162567) | 0.644415 / 4.805227 (-4.160812) | 0.142433 / 6.500664 (-6.358231) | 0.063564 / 0.075469 (-0.011905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193756 / 1.841788 (-0.648032) | 14.673103 / 8.074308 (6.598795) | 13.410951 / 10.191392 (3.219559) | 0.159175 / 0.680424 (-0.521249) | 0.017076 / 0.534201 (-0.517125) | 0.388880 / 0.579283 (-0.190403) | 0.409974 / 0.434364 (-0.024390) | 0.454494 / 0.540337 (-0.085844) | 0.556873 / 1.386936 (-0.830063) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006107 / 0.011353 (-0.005246) | 0.004433 / 0.011008 (-0.006575) | 0.073892 / 0.038508 (0.035384) | 0.032386 / 0.023109 (0.009277) | 0.370339 / 0.275898 (0.094441) | 0.388996 / 0.323480 (0.065516) | 0.005438 / 0.007986 (-0.002548) | 0.003875 / 0.004328 (-0.000454) | 0.073867 / 0.004250 (0.069617) | 0.048350 / 0.037052 (0.011298) | 0.380328 / 0.258489 (0.121839) | 0.411373 / 0.293841 (0.117532) | 0.028183 / 0.128546 (-0.100363) | 0.008924 / 0.075646 (-0.066723) | 0.082484 / 0.419271 (-0.336787) | 0.047321 / 0.043533 (0.003788) | 0.371702 / 0.255139 (0.116563) | 0.380535 / 0.283200 (0.097335) | 0.100772 / 0.141683 (-0.040911) | 1.475038 / 1.452155 (0.022883) | 1.564293 / 1.492716 (0.071577) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214589 / 0.018006 (0.196583) | 0.437193 / 0.000490 (0.436703) | 0.003676 / 0.000200 (0.003476) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027991 / 0.037411 (-0.009421) | 0.111154 / 0.014526 (0.096628) | 0.120365 / 0.176557 (-0.056191) | 0.173601 / 0.737135 (-0.563535) | 0.126244 / 0.296338 (-0.170094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442848 / 0.215209 (0.227639) | 4.398336 / 2.077655 (2.320681) | 2.217058 / 1.504120 (0.712938) | 2.011155 / 1.541195 (0.469960) | 2.123086 / 1.468490 (0.654596) | 0.525857 / 4.584777 (-4.058920) | 3.730191 / 3.745712 (-0.015521) | 3.517680 / 5.269862 (-1.752181) | 1.557940 / 4.565676 (-3.007736) | 0.066309 / 0.424275 (-0.357967) | 0.011788 / 0.007607 (0.004181) | 0.548506 / 0.226044 (0.322462) | 5.483615 / 2.268929 (3.214687) | 2.663784 / 55.444624 (-52.780840) | 2.325744 / 6.876477 (-4.550732) | 2.344179 / 2.142072 (0.202106) | 0.644217 / 4.805227 (-4.161010) | 0.141546 / 6.500664 (-6.359118) | 0.063730 / 0.075469 (-0.011739) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296032 / 1.841788 (-0.545756) | 14.903729 / 8.074308 (6.829421) | 14.505409 / 10.191392 (4.314017) | 0.170478 / 0.680424 (-0.509946) | 0.017876 / 0.534201 (-0.516325) | 0.401047 / 0.579283 (-0.178236) | 0.417855 / 0.434364 (-0.016509) | 0.472138 / 0.540337 (-0.068200) | 0.570859 / 1.386936 (-0.816077) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5a4d530965eb35c66955ef89df79210c66b7f5e6 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008495 / 0.011353 (-0.002858) | 0.005322 / 0.011008 (-0.005686) | 0.125471 / 0.038508 (0.086962) | 0.034604 / 0.023109 (0.011495) | 0.419831 / 0.275898 (0.143933) | 0.415707 / 0.323480 (0.092227) | 0.007471 / 0.007986 (-0.000515) | 0.005441 / 0.004328 (0.001112) | 0.095412 / 0.004250 (0.091162) | 0.053865 / 0.037052 (0.016812) | 0.375257 / 0.258489 (0.116768) | 0.438114 / 0.293841 (0.144273) | 0.046183 / 0.128546 (-0.082363) | 0.013663 / 0.075646 (-0.061984) | 0.438317 / 0.419271 (0.019045) | 0.065665 / 0.043533 (0.022133) | 0.387640 / 0.255139 (0.132501) | 0.431350 / 0.283200 (0.148150) | 0.112841 / 0.141683 (-0.028842) | 1.778639 / 1.452155 (0.326484) | 1.891948 / 1.492716 (0.399232) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284371 / 0.018006 (0.266365) | 0.598247 / 0.000490 (0.597758) | 0.013674 / 0.000200 (0.013474) | 0.000483 / 0.000054 (0.000428) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032437 / 0.037411 (-0.004974) | 0.120547 / 0.014526 (0.106021) | 0.129845 / 0.176557 (-0.046711) | 0.203455 / 0.737135 (-0.533680) | 0.140039 / 0.296338 (-0.156300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.596549 / 0.215209 (0.381340) | 6.138766 / 2.077655 (4.061111) | 2.515506 / 1.504120 (1.011386) | 2.124472 / 1.541195 (0.583277) | 2.160812 / 1.468490 
(0.692322) | 0.898965 / 4.584777 (-3.685812) | 5.588152 / 3.745712 (1.842440) | 2.717580 / 5.269862 (-2.552282) | 1.683641 / 4.565676 (-2.882036) | 0.108045 / 0.424275 (-0.316230) | 0.014089 / 0.007607 (0.006481) | 0.749567 / 0.226044 (0.523523) | 7.518051 / 2.268929 (5.249123) | 3.198238 / 55.444624 (-52.246386) | 2.575156 / 6.876477 (-4.301321) | 2.725818 / 2.142072 (0.583745) | 1.149338 / 4.805227 (-3.655889) | 0.220443 / 6.500664 (-6.280221) | 0.081452 / 0.075469 (0.005983) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.624462 / 1.841788 (-0.217325) | 18.204963 / 8.074308 (10.130655) | 21.379169 / 10.191392 (11.187777) | 0.248520 / 0.680424 (-0.431903) | 0.030121 / 0.534201 (-0.504080) | 0.499542 / 0.579283 (-0.079741) | 0.599783 / 0.434364 (0.165419) | 0.597642 / 0.540337 (0.057305) | 0.681948 / 1.386936 (-0.704988) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008431 / 0.011353 (-0.002921) | 0.006143 / 0.011008 (-0.004865) | 0.107531 / 0.038508 (0.069023) | 0.036308 / 0.023109 (0.013199) | 0.480555 / 0.275898 (0.204657) | 0.556407 / 0.323480 (0.232927) | 0.007614 / 0.007986 (-0.000372) | 0.004749 / 0.004328 (0.000421) | 0.105734 / 0.004250 (0.101484) | 0.051619 / 0.037052 (0.014567) | 0.514821 / 0.258489 (0.256332) | 0.562143 / 0.293841 (0.268302) | 0.042957 / 0.128546 (-0.085589) | 0.015142 / 0.075646 (-0.060505) | 0.143161 / 0.419271 (-0.276111) | 0.061910 / 0.043533 (0.018377) | 0.496923 / 0.255139 (0.241784) | 0.556302 / 0.283200 (0.273102) | 0.136700 / 0.141683 (-0.004983) | 1.886184 / 1.452155 (0.434029) | 2.004087 / 1.492716 (0.511371) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235530 / 0.018006 (0.217523) | 0.600796 / 0.000490 (0.600306) | 0.009074 / 0.000200 (0.008874) | 0.000203 / 0.000054 (0.000149) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036345 / 0.037411 (-0.001066) | 0.126112 / 0.014526 (0.111586) | 0.143369 / 0.176557 (-0.033188) | 0.211381 / 0.737135 (-0.525755) | 0.151095 / 0.296338 (-0.145243) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.695022 / 0.215209 (0.479813) | 6.685981 / 2.077655 (4.608326) | 3.104521 / 1.504120 (1.600401) | 2.758323 / 1.541195 (1.217128) | 2.706286 / 1.468490 (1.237796) | 0.941182 / 4.584777 (-3.643595) | 5.715839 / 3.745712 (1.970127) | 5.089636 / 5.269862 (-0.180226) | 2.594739 / 4.565676 (-1.970937) | 0.112621 / 0.424275 (-0.311655) | 0.014001 / 0.007607 (0.006394) | 0.812990 / 0.226044 (0.586945) | 8.060890 / 2.268929 (5.791961) | 3.832506 / 55.444624 (-51.612119) | 3.148051 / 6.876477 (-3.728425) | 3.110096 / 2.142072 (0.968023) | 1.105050 / 4.805227 (-3.700178) | 0.219835 / 6.500664 (-6.280829) | 0.078600 / 0.075469 (0.003131) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.707551 / 1.841788 (-0.134237) | 19.238194 / 8.074308 (11.163885) | 22.167076 / 10.191392 (11.975684) | 0.233458 / 0.680424 (-0.446966) | 0.025131 / 0.534201 (-0.509070) | 0.525241 / 0.579283 (-0.054042) | 0.649666 / 0.434364 (0.215303) | 0.602941 / 0.540337 (0.062603) | 0.718472 / 1.386936 (-0.668464) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ac3a42c525d91cb630273702a0c110a71c9bf54b \"CML watermark\")\n"
] | 2023-05-30T14:59:48 | 2023-05-30T18:03:10 | 2023-05-30T17:53:29 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5916",
"html_url": "https://github.com/huggingface/datasets/pull/5916",
"diff_url": "https://github.com/huggingface/datasets/pull/5916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5916.patch",
"merged_at": "2023-05-30T17:53:29"
} | Fix #5906 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5916/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5915/comments | https://api.github.com/repos/huggingface/datasets/issues/5915/events | https://github.com/huggingface/datasets/pull/5915 | 1,732,389,984 | PR_kwDODunzps5RsVzj | 5,915 | Raise error in `DatasetBuilder.as_dataset` when `file_format` is not `"arrow"` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006416 / 0.011353 (-0.004937) | 0.004278 / 0.011008 (-0.006731) | 0.097562 / 0.038508 (0.059054) | 0.029488 / 0.023109 (0.006379) | 0.308648 / 0.275898 (0.032750) | 0.339879 / 0.323480 (0.016399) | 0.005288 / 0.007986 (-0.002697) | 0.005033 / 0.004328 (0.000704) | 0.074666 / 0.004250 (0.070416) | 0.034888 / 0.037052 (-0.002164) | 0.309960 / 0.258489 (0.051471) | 0.344276 / 0.293841 (0.050435) | 0.025564 / 0.128546 (-0.102982) | 0.008579 / 0.075646 (-0.067067) | 0.319796 / 0.419271 (-0.099476) | 0.044786 / 0.043533 (0.001253) | 0.308888 / 0.255139 (0.053749) | 0.334001 / 0.283200 (0.050802) | 0.089917 / 0.141683 (-0.051766) | 1.456696 / 1.452155 (0.004541) | 1.542273 / 1.492716 (0.049557) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213236 / 0.018006 (0.195230) | 0.425139 / 0.000490 (0.424650) | 0.008831 / 0.000200 (0.008631) | 0.000209 / 0.000054 (0.000155) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023990 / 0.037411 (-0.013421) | 0.096787 / 0.014526 (0.082261) | 0.105783 / 0.176557 (-0.070774) | 0.167182 / 0.737135 (-0.569954) | 0.108896 / 0.296338 (-0.187442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419844 / 0.215209 (0.204635) | 4.201909 / 2.077655 (2.124254) | 1.910784 / 1.504120 (0.406664) | 1.685183 / 1.541195 (0.143988) | 1.716927 / 1.468490 
(0.248437) | 0.548261 / 4.584777 (-4.036516) | 3.414168 / 3.745712 (-0.331544) | 1.695446 / 5.269862 (-3.574415) | 0.989668 / 4.565676 (-3.576008) | 0.067328 / 0.424275 (-0.356948) | 0.012084 / 0.007607 (0.004477) | 0.523799 / 0.226044 (0.297754) | 5.240589 / 2.268929 (2.971661) | 2.331618 / 55.444624 (-53.113007) | 1.996094 / 6.876477 (-4.880383) | 2.105450 / 2.142072 (-0.036623) | 0.654614 / 4.805227 (-4.150613) | 0.134721 / 6.500664 (-6.365943) | 0.066227 / 0.075469 (-0.009242) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196266 / 1.841788 (-0.645521) | 13.990045 / 8.074308 (5.915737) | 13.928126 / 10.191392 (3.736734) | 0.142600 / 0.680424 (-0.537824) | 0.016462 / 0.534201 (-0.517739) | 0.363113 / 0.579283 (-0.216170) | 0.428590 / 0.434364 (-0.005773) | 0.452594 / 0.540337 (-0.087743) | 0.551678 / 1.386936 (-0.835258) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005992 / 0.011353 (-0.005361) | 0.004161 / 0.011008 (-0.006847) | 0.076098 / 0.038508 (0.037589) | 0.028559 / 0.023109 (0.005450) | 0.411696 / 0.275898 (0.135798) | 0.444519 / 0.323480 (0.121040) | 0.004965 / 0.007986 (-0.003021) | 0.003452 / 0.004328 (-0.000876) | 0.075107 / 0.004250 (0.070857) | 0.037305 / 0.037052 (0.000252) | 0.429728 / 0.258489 (0.171239) | 0.444313 / 0.293841 (0.150472) | 0.025278 / 0.128546 (-0.103268) | 0.008527 / 0.075646 (-0.067120) | 0.081502 / 0.419271 (-0.337770) | 0.041237 / 0.043533 (-0.002296) | 0.417848 / 0.255139 (0.162709) | 0.426615 / 0.283200 (0.143415) | 0.094641 / 0.141683 (-0.047041) | 1.525141 / 1.452155 (0.072987) | 1.615608 / 1.492716 (0.122892) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192867 / 0.018006 (0.174861) | 0.414979 / 0.000490 (0.414490) | 0.000815 / 0.000200 (0.000615) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025354 / 0.037411 (-0.012058) | 0.102085 / 0.014526 (0.087559) | 0.107930 / 0.176557 (-0.068626) | 0.160483 / 0.737135 (-0.576652) | 0.112341 / 0.296338 (-0.183997) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446938 / 0.215209 (0.231728) | 4.480057 / 2.077655 (2.402402) | 2.154825 / 1.504120 (0.650705) | 1.942774 / 1.541195 (0.401580) | 1.996418 / 1.468490 (0.527928) | 0.556728 / 4.584777 (-4.028049) | 3.441228 / 3.745712 (-0.304484) | 3.004179 / 5.269862 (-2.265683) | 1.314104 / 4.565676 (-3.251573) | 0.068670 / 0.424275 (-0.355606) | 0.011972 / 0.007607 (0.004365) | 0.556604 / 0.226044 (0.330560) | 5.561783 / 2.268929 (3.292855) | 2.631262 / 55.444624 (-52.813363) | 2.262143 / 6.876477 (-4.614333) | 2.364243 / 2.142072 (0.222170) | 0.660621 / 4.805227 (-4.144607) | 0.137371 / 6.500664 (-6.363293) | 0.069104 / 0.075469 (-0.006365) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.305706 / 1.841788 (-0.536081) | 14.015932 / 8.074308 (5.941624) | 14.353580 / 10.191392 (4.162187) | 0.146172 / 0.680424 (-0.534251) | 0.016699 / 0.534201 (-0.517502) | 0.357970 / 0.579283 (-0.221313) | 0.389067 / 0.434364 (-0.045297) | 0.415470 / 0.540337 (-0.124867) | 0.501359 / 1.386936 (-0.885577) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b2b837b4e7267db9e32d2613d8bf8d70d2ce0b47 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006800 / 0.011353 (-0.004552) | 0.004721 / 0.011008 (-0.006287) | 0.097760 / 0.038508 (0.059252) | 0.034192 / 0.023109 (0.011083) | 0.298240 / 0.275898 (0.022342) | 0.331119 / 0.323480 (0.007639) | 0.005826 / 0.007986 (-0.002160) | 0.003968 / 0.004328 (-0.000360) | 0.073833 / 0.004250 (0.069582) | 0.046288 / 0.037052 (0.009236) | 0.303018 / 0.258489 (0.044529) | 0.342163 / 0.293841 (0.048322) | 0.028504 / 0.128546 (-0.100042) | 0.009031 / 0.075646 (-0.066615) | 0.331617 / 0.419271 (-0.087655) | 0.060911 / 0.043533 (0.017379) | 0.304044 / 0.255139 (0.048905) | 0.328959 / 0.283200 (0.045759) | 0.113174 / 0.141683 (-0.028509) | 1.424652 / 1.452155 (-0.027502) | 1.531392 / 1.492716 (0.038676) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206175 / 0.018006 (0.188169) | 0.435916 / 0.000490 (0.435426) | 0.002587 / 0.000200 (0.002387) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026996 / 0.037411 (-0.010415) | 0.106722 / 0.014526 (0.092196) | 0.117655 / 0.176557 (-0.058902) | 0.176969 / 0.737135 (-0.560166) | 0.122577 / 0.296338 (-0.173762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396086 / 0.215209 (0.180877) | 3.972465 / 2.077655 (1.894811) | 1.800798 / 1.504120 (0.296678) | 1.616747 / 1.541195 (0.075552) | 1.680711 / 1.468490 
(0.212221) | 0.526479 / 4.584777 (-4.058298) | 3.791528 / 3.745712 (0.045816) | 2.989518 / 5.269862 (-2.280344) | 1.463221 / 4.565676 (-3.102455) | 0.065649 / 0.424275 (-0.358626) | 0.012155 / 0.007607 (0.004548) | 0.500241 / 0.226044 (0.274197) | 5.008895 / 2.268929 (2.739966) | 2.315288 / 55.444624 (-53.129336) | 1.959409 / 6.876477 (-4.917067) | 2.102371 / 2.142072 (-0.039701) | 0.639611 / 4.805227 (-4.165617) | 0.140101 / 6.500664 (-6.360563) | 0.063599 / 0.075469 (-0.011870) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206729 / 1.841788 (-0.635059) | 15.127250 / 8.074308 (7.052942) | 14.397228 / 10.191392 (4.205836) | 0.148802 / 0.680424 (-0.531622) | 0.017628 / 0.534201 (-0.516573) | 0.396150 / 0.579283 (-0.183133) | 0.435826 / 0.434364 (0.001462) | 0.471215 / 0.540337 (-0.069122) | 0.559413 / 1.386936 (-0.827523) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006479 / 0.011353 (-0.004874) | 0.004520 / 0.011008 (-0.006488) | 0.074395 / 0.038508 (0.035887) | 0.033400 / 0.023109 (0.010291) | 0.388411 / 0.275898 (0.112513) | 0.396714 / 0.323480 (0.073234) | 0.005736 / 0.007986 (-0.002250) | 0.004038 / 0.004328 (-0.000291) | 0.073595 / 0.004250 (0.069345) | 0.045207 / 0.037052 (0.008155) | 0.378096 / 0.258489 (0.119607) | 0.417830 / 0.293841 (0.123989) | 0.028365 / 0.128546 (-0.100181) | 0.008887 / 0.075646 (-0.066760) | 0.080766 / 0.419271 (-0.338505) | 0.046923 / 0.043533 (0.003390) | 0.376190 / 0.255139 (0.121051) | 0.385875 / 0.283200 (0.102675) | 0.107542 / 0.141683 (-0.034141) | 1.409257 / 1.452155 (-0.042898) | 1.518475 / 1.492716 (0.025759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223299 / 0.018006 (0.205292) | 0.440640 / 0.000490 (0.440150) | 0.000397 / 0.000200 (0.000197) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031388 / 0.037411 (-0.006024) | 0.113078 / 0.014526 (0.098552) | 0.124398 / 0.176557 (-0.052159) | 0.173802 / 0.737135 (-0.563333) | 0.129555 / 0.296338 (-0.166783) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440220 / 0.215209 (0.225011) | 4.398052 / 2.077655 (2.320398) | 2.188396 / 1.504120 (0.684276) | 1.997811 / 1.541195 (0.456616) | 2.093338 / 1.468490 (0.624847) | 0.519597 / 4.584777 (-4.065180) | 3.885795 / 3.745712 (0.140083) | 2.896327 / 5.269862 (-2.373534) | 1.245785 / 4.565676 (-3.319891) | 0.065675 / 0.424275 (-0.358600) | 0.011729 / 0.007607 (0.004121) | 0.541526 / 0.226044 (0.315482) | 5.406763 / 2.268929 (3.137834) | 2.722914 / 55.444624 (-52.721711) | 2.471111 / 6.876477 (-4.405366) | 2.541488 / 2.142072 (0.399415) | 0.633566 / 4.805227 (-4.171661) | 0.139622 / 6.500664 (-6.361042) | 0.064220 / 0.075469 (-0.011249) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296097 / 1.841788 (-0.545690) | 15.095320 / 8.074308 (7.021012) | 14.300821 / 10.191392 (4.109429) | 0.145470 / 0.680424 (-0.534954) | 0.017496 / 0.534201 (-0.516705) | 0.400589 / 0.579283 (-0.178694) | 0.423091 / 0.434364 (-0.011273) | 0.468258 / 0.540337 (-0.072079) | 0.570873 / 1.386936 (-0.816063) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aee6c67034d6ff298b2153a2fcdab97f14ee6d66 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005918 / 0.011353 (-0.005435) | 0.004393 / 0.011008 (-0.006615) | 0.091677 / 0.038508 (0.053169) | 0.033546 / 0.023109 (0.010437) | 0.344682 / 0.275898 (0.068784) | 0.388906 / 0.323480 (0.065426) | 0.005412 / 0.007986 (-0.002574) | 0.004909 / 0.004328 (0.000580) | 0.082589 / 0.004250 (0.078339) | 0.045242 / 0.037052 (0.008190) | 0.339191 / 0.258489 (0.080702) | 0.349673 / 0.293841 (0.055832) | 0.026805 / 0.128546 (-0.101742) | 0.007529 / 0.075646 (-0.068117) | 0.319108 / 0.419271 (-0.100164) | 0.049482 / 0.043533 (0.005949) | 0.320013 / 0.255139 (0.064874) | 0.342059 / 0.283200 (0.058859) | 0.096623 / 0.141683 (-0.045060) | 1.458204 / 1.452155 (0.006049) | 1.571172 / 1.492716 (0.078455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235171 / 0.018006 (0.217165) | 0.479678 / 0.000490 (0.479188) | 0.006627 / 0.000200 (0.006427) | 0.000257 / 0.000054 (0.000202) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025716 / 0.037411 (-0.011696) | 0.107730 / 0.014526 (0.093204) | 0.111595 / 0.176557 (-0.064962) | 0.171316 / 0.737135 (-0.565819) | 0.118962 / 0.296338 (-0.177377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.376318 / 0.215209 (0.161109) | 4.039484 / 2.077655 (1.961829) | 1.811548 / 1.504120 (0.307428) | 1.646728 / 1.541195 (0.105533) | 1.688071 / 1.468490 
(0.219581) | 0.551256 / 4.584777 (-4.033520) | 4.153931 / 3.745712 (0.408218) | 3.424154 / 5.269862 (-1.845707) | 1.734860 / 4.565676 (-2.830816) | 0.067753 / 0.424275 (-0.356522) | 0.012699 / 0.007607 (0.005092) | 0.505722 / 0.226044 (0.279677) | 4.997321 / 2.268929 (2.728392) | 2.258755 / 55.444624 (-53.185869) | 1.954382 / 6.876477 (-4.922095) | 1.967545 / 2.142072 (-0.174527) | 0.630489 / 4.805227 (-4.174738) | 0.138738 / 6.500664 (-6.361926) | 0.064907 / 0.075469 (-0.010562) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209634 / 1.841788 (-0.632154) | 15.055062 / 8.074308 (6.980754) | 12.721606 / 10.191392 (2.530214) | 0.164908 / 0.680424 (-0.515516) | 0.019528 / 0.534201 (-0.514673) | 0.400136 / 0.579283 (-0.179147) | 0.451640 / 0.434364 (0.017276) | 0.466272 / 0.540337 (-0.074065) | 0.553258 / 1.386936 (-0.833679) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006341 / 0.011353 (-0.005011) | 0.004617 / 0.011008 (-0.006391) | 0.077953 / 0.038508 (0.039445) | 0.031104 / 0.023109 (0.007995) | 0.360328 / 0.275898 (0.084430) | 0.408403 / 0.323480 (0.084923) | 0.005704 / 0.007986 (-0.002282) | 0.003588 / 0.004328 (-0.000741) | 0.071441 / 0.004250 (0.067190) | 0.043520 / 0.037052 (0.006468) | 0.375798 / 0.258489 (0.117309) | 0.400955 / 0.293841 (0.107114) | 0.028166 / 0.128546 (-0.100381) | 0.008578 / 0.075646 (-0.067068) | 0.086673 / 0.419271 (-0.332598) | 0.046424 / 0.043533 (0.002891) | 0.367276 / 0.255139 (0.112137) | 0.414550 / 0.283200 (0.131351) | 0.097355 / 0.141683 (-0.044328) | 1.465191 / 1.452155 (0.013036) | 1.555028 / 1.492716 (0.062312) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196642 / 0.018006 (0.178636) | 0.464221 / 0.000490 (0.463731) | 0.002726 / 0.000200 (0.002526) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028078 / 0.037411 (-0.009333) | 0.110762 / 0.014526 (0.096236) | 0.122212 / 0.176557 (-0.054344) | 0.164758 / 0.737135 (-0.572377) | 0.133969 / 0.296338 (-0.162370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448134 / 0.215209 (0.232925) | 4.339335 / 2.077655 (2.261680) | 2.129209 / 1.504120 (0.625089) | 1.957805 / 1.541195 (0.416611) | 1.994038 / 1.468490 (0.525548) | 0.497101 / 4.584777 (-4.087676) | 4.114432 / 3.745712 (0.368720) | 3.437305 / 5.269862 (-1.832556) | 1.692810 / 4.565676 (-2.872866) | 0.071077 / 0.424275 (-0.353198) | 0.012735 / 0.007607 (0.005128) | 0.534393 / 0.226044 (0.308348) | 5.217445 / 2.268929 (2.948517) | 2.594858 / 55.444624 (-52.849766) | 2.317464 / 6.876477 (-4.559012) | 2.337974 / 2.142072 (0.195902) | 0.622291 / 4.805227 (-4.182936) | 0.144934 / 6.500664 (-6.355730) | 0.068524 / 0.075469 (-0.006945) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310601 / 1.841788 (-0.531187) | 15.771527 / 8.074308 (7.697219) | 13.952032 / 10.191392 (3.760640) | 0.212473 / 0.680424 (-0.467951) | 0.017963 / 0.534201 (-0.516238) | 0.400755 / 0.579283 (-0.178528) | 0.439817 / 0.434364 (0.005453) | 0.472614 / 0.540337 (-0.067724) | 0.558410 / 1.386936 (-0.828526) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1b51429d02a0da1ff798873afe655309136c5689 \"CML watermark\")\n"
] | 2023-05-30T14:27:55 | 2023-05-31T13:31:21 | 2023-05-31T13:23:54 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5915",
"html_url": "https://github.com/huggingface/datasets/pull/5915",
"diff_url": "https://github.com/huggingface/datasets/pull/5915.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5915.patch",
"merged_at": "2023-05-31T13:23:54"
} | Raise an error in `DatasetBuilder.as_dataset` when `file_format != "arrow"` (and fix the docstring)
Fix #5874 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5915/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5914/comments | https://api.github.com/repos/huggingface/datasets/issues/5914/events | https://github.com/huggingface/datasets/issues/5914 | 1,731,483,996 | I_kwDODunzps5nNFlc | 5,914 | array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size in Datasets | {
"login": "ravenouse",
"id": 85110830,
"node_id": "MDQ6VXNlcjg1MTEwODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/85110830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ravenouse",
"html_url": "https://github.com/ravenouse",
"followers_url": "https://api.github.com/users/ravenouse/followers",
"following_url": "https://api.github.com/users/ravenouse/following{/other_user}",
"gists_url": "https://api.github.com/users/ravenouse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ravenouse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ravenouse/subscriptions",
"organizations_url": "https://api.github.com/users/ravenouse/orgs",
"repos_url": "https://api.github.com/users/ravenouse/repos",
"events_url": "https://api.github.com/users/ravenouse/events{/privacy}",
"received_events_url": "https://api.github.com/users/ravenouse/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-05-30T04:25:00 | 2023-05-30T04:25:00 | null | NONE | null | null | null | ### Describe the bug
When using the `filter` or `map` function to preprocess a dataset, a `ValueError` is raised with the message "array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size."
Detailed error message:
```
Traceback (most recent call last):
  File "data_processing.py", line 26, in <module>
    processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split],writer_batch_size = 50)
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2405, in map
    desc=desc,
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper
    out = func(self, *args, **kwargs)
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2756, in _map_single
    example = apply_function_on_filtered_inputs(example, i, offset=offset)
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
    processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated
    result = f(decorated_item, *args, **kwargs)
  File "data_processing.py", line 11, in prepare_dataset
    audio = batch["audio"]
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 123, in __getitem__
    value = decode_nested_example(self.features[key], value) if value is not None else None
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/features.py", line 1260, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 156, in decode_example
    array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 257, in _decode_non_mp3_path_like
    array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 176, in load
    y, sr_native = __soundfile_load(path, offset, duration, dtype)
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 222, in __soundfile_load
    y = sf_desc.read(frames=frame_duration, dtype=dtype, always_2d=False).T
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 891, in read
    out = self._create_empty_array(frames, always_2d, dtype)
  File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 1323, in _create_empty_array
    return np.empty(shape, dtype, order='C')
ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.
```
### Steps to reproduce the bug
```python
from datasets import load_dataset, DatasetDict
from transformers import WhisperFeatureExtractor
from transformers import WhisperTokenizer
samromur_children= load_dataset("language-and-voice-lab/samromur_children")
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="icelandic", task="transcribe")
def prepare_dataset(batch):
# load and resample audio data from 48 to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=16000).input_features[0]
# encode target text to label ids
batch["labels"] = tokenizer(batch["normalized_text"]).input_ids
return batch
cache_dict = {"train": "./cache/audio_train.cache", \
"validation": "./cache/audio_validation.cache", \
"test": "./cache/audio_test.cache"}
filter_cache_dict = {"train": "./cache/filter_train.arrow", \
"validation": "./cache/filter_validation.arrow", \
"test": "./cache/filter_test.arrow"}
print("before filtering")
print(samromur_children)
#filter the dataset to only include examples with more than 2 seconds of audio
samromur_children = samromur_children.filter(lambda example: example["audio"]["array"].shape[0] > 16000*2, cache_file_names=filter_cache_dict)
print("after filtering")
print(samromur_children)
processed_dataset = DatasetDict()
# processed_dataset = samromur_children.map(prepare_dataset, cache_file_names=cache_dict, num_proc=10,)
for split in ["train", "validation", "test"]:
processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split])
```
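As a possible mitigation (a sketch of my own, not part of the original report): the oversized allocation happens while decoding a single audio file, so pathological files can be dropped before any decoding by disabling the `Audio` decoder first. The 100 MiB cutoff below is an arbitrary assumption, as is the idea that the offending file can be identified by its on-disk size.
```python
import os

from datasets import Audio

# Sketch under assumptions: with decode=False, `filter` only sees paths/bytes,
# so librosa/soundfile never tries to allocate an oversized array.
undecoded = samromur_children.cast_column("audio", Audio(decode=False))

def small_enough(example):
    path = example["audio"]["path"]
    return path is not None and os.path.getsize(path) < 100 * 2**20  # arbitrary 100 MiB cutoff

samromur_children = undecoded.filter(small_enough).cast_column("audio", Audio(sampling_rate=16000))
```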
### Expected behavior
The dataset is successfully processed and ready to train the model.
### Environment info
Python version: 3.7.13
datasets package version: 2.4.0
librosa package version: 0.10.0.post2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5914/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5913/comments | https://api.github.com/repos/huggingface/datasets/issues/5913/events | https://github.com/huggingface/datasets/issues/5913 | 1,731,427,484 | I_kwDODunzps5nM3yc | 5,913 | I tried to load a custom dataset using the following statement: dataset = load_dataset('json', data_files=data_files). The dataset contains 50 million text-image pairs, but an error occurred. | {
"login": "cjt222",
"id": 17508662,
"node_id": "MDQ6VXNlcjE3NTA4NjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/17508662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cjt222",
"html_url": "https://github.com/cjt222",
"followers_url": "https://api.github.com/users/cjt222/followers",
"following_url": "https://api.github.com/users/cjt222/following{/other_user}",
"gists_url": "https://api.github.com/users/cjt222/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cjt222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cjt222/subscriptions",
"organizations_url": "https://api.github.com/users/cjt222/orgs",
"repos_url": "https://api.github.com/users/cjt222/repos",
"events_url": "https://api.github.com/users/cjt222/events{/privacy}",
"received_events_url": "https://api.github.com/users/cjt222/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @cjt222.\r\n\r\nWhat is the structure of your JSON files. Please note that it is normally simpler if the data file format is JSON-Lines instead. ",
"> Thanks for reporting, @cjt222.\r\n> \r\n> What is the structure of your JSON files. Please note that it is normally simpler if the data file format is JSON-Lines instead.\r\n\r\nThanks! I have encountered similar problems. I modify the json format from list to line and works!"
] | 2023-05-30T02:55:26 | 2023-07-24T12:00:38 | 2023-07-24T12:00:38 | NONE | null | null | null | ### Describe the bug
Loading fails while generating the train split; the progress output and traceback are interleaved in the original log:
```
File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
Downloading and preparing dataset json/default to /home/kas/diffusers/examples/dreambooth/cache_data/datasets/json/default-acf423d8c6ef99d0/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4...
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data files: 100%|██████████| 1/1 [00:00<00:00, 84.35it/s]
Extracting data files: 0%| | 0/1 [00:00<?, ?it/s] for _, table in generator:
File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 114, in _generate_tables
io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
File "pyarrow/_json.pyx", line 258, in pyarrow._json.read_json
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 27.72it/s]
Generating train split: 0 examples [00:00, ? examples/s] File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 125, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2390448764
```
### Steps to reproduce the bug
1. `data_files = ["1.json", "2.json", "3.json"]`
2. `dataset = load_dataset('json', data_files=data_files)` (a conversion sketch follows below)
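Following the suggestion in the comments, a minimal conversion sketch (assuming each file holds one top-level JSON array): rewriting the files as JSON Lines lets pyarrow parse them block by block instead of materializing one array larger than 2 GiB.
```python
import json

# Hedged sketch: convert each list-style JSON file into a JSON Lines file.
for name in ["1.json", "2.json", "3.json"]:
    with open(name) as f_in, open(name.replace(".json", ".jsonl"), "w") as f_out:
        for record in json.load(f_in):  # still loads each file once in Python
            f_out.write(json.dumps(record, ensure_ascii=False) + "\n")

# dataset = load_dataset("json", data_files=["1.jsonl", "2.jsonl", "3.jsonl"])
```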
### Expected behavior
The dataset is read normally, without the capacity error.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-4.15.0-29-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5913/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5912/comments | https://api.github.com/repos/huggingface/datasets/issues/5912/events | https://github.com/huggingface/datasets/issues/5912 | 1,730,299,852 | I_kwDODunzps5nIkfM | 5,912 | Missing elements in `map` a batched dataset | {
"login": "sachinruk",
"id": 1410927,
"node_id": "MDQ6VXNlcjE0MTA5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1410927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachinruk",
"html_url": "https://github.com/sachinruk",
"followers_url": "https://api.github.com/users/sachinruk/followers",
"following_url": "https://api.github.com/users/sachinruk/following{/other_user}",
"gists_url": "https://api.github.com/users/sachinruk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachinruk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachinruk/subscriptions",
"organizations_url": "https://api.github.com/users/sachinruk/orgs",
"repos_url": "https://api.github.com/users/sachinruk/repos",
"events_url": "https://api.github.com/users/sachinruk/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachinruk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! in your code batching is **only used within** `map`, to process examples in batch. The dataset itself however is not batched and returns elements one by one.\r\n\r\nTo iterate on batches, you can do\r\n```python\r\nfor batch in dataset.iter(batch_size=8):\r\n ...\r\n```"
] | 2023-05-29T08:09:19 | 2023-07-26T15:48:15 | 2023-07-26T15:48:15 | NONE | null | null | null | ### Describe the bug
As outlined [here](https://discuss.huggingface.co/t/length-error-using-map-with-datasets/40969/3?u=sachin), the following collate function drops 5 of the possible 6 elements in the batch (6 rather than 8 because two of the eight LAION URLs are dead links). A reproducible [Kaggle kernel](https://www.kaggle.com/sachin/laion-hf-dataset/edit) can be found here.
The weirdest part is that, when inspecting the sizes of the tensors as shown below, both `tokenized_captions["input_ids"]` and `image_features` have the correct shapes; yet the output contains only one element, with the batch dimension squeezed out.
```python
import io

import PIL
import requests
import torch
import datasets

# `logger`, `tokenizer`, and `feature_extractor` are defined elsewhere in the kernel.
class CollateFn:
def get_image(self, url):
try:
response = requests.get(url)
return Image.open(io.BytesIO(response.content)).convert("RGB")
except PIL.UnidentifiedImageError:
logger.info(f"Reading error: Could not transform f{url}")
return None
except requests.exceptions.ConnectionError:
logger.info(f"Connection error: Could not transform f{url}")
return None
def __call__(self, batch):
images = [self.get_image(url) for url in batch["url"]]
captions = [caption for caption, image in zip(batch["caption"], images) if image is not None]
images = [image for image in images if image is not None]
tokenized_captions = tokenizer(
captions,
padding="max_length",
truncation=True,
max_length=tokenizer.model_max_length,
return_tensors="pt",
)
image_features = torch.stack([torch.Tensor(feature_extractor(image)["pixel_values"][0]) for image in images])
# import pdb; pdb.set_trace()
return {"input_ids": tokenized_captions["input_ids"], "images": image_features}
collate_fn = CollateFn()
laion_ds = datasets.load_dataset("laion/laion400m", split="train", streaming=True)
laion_ds_batched = laion_ds.map(collate_fn, batched=True, batch_size=8, remove_columns=next(iter(laion_ds)).keys())
```
### Steps to reproduce the bug
A reproducible [Kaggle kernel](https://www.kaggle.com/sachin/laion-hf-dataset/edit) can be found here.
### Expected behavior
Would expect `next(iter(laion_ds_batched))` to produce two tensors, of shapes `(batch_size, 77)` and `(batch_size, *image_shape)` respectively.
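For completeness, a small sketch of the fix suggested in the comments: `batched=True` only batches the computation inside `map`; the resulting dataset still yields one example at a time, so batches have to be re-formed at read time.
```python
# Re-batch at read time; each `batch` is a dict whose values are lists of up to 8 rows.
for batch in laion_ds_batched.iter(batch_size=8):
    print(len(batch["input_ids"]))
    break
```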
### Environment info
datasets==2.12.0
python==3.10 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5912/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5910/comments | https://api.github.com/repos/huggingface/datasets/issues/5910/events | https://github.com/huggingface/datasets/issues/5910 | 1,728,909,790 | I_kwDODunzps5nDRHe | 5,910 | Cannot use both set_format and set_transform | {
"login": "ybouane",
"id": 14046002,
"node_id": "MDQ6VXNlcjE0MDQ2MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/14046002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ybouane",
"html_url": "https://github.com/ybouane",
"followers_url": "https://api.github.com/users/ybouane/followers",
"following_url": "https://api.github.com/users/ybouane/following{/other_user}",
"gists_url": "https://api.github.com/users/ybouane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ybouane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ybouane/subscriptions",
"organizations_url": "https://api.github.com/users/ybouane/orgs",
"repos_url": "https://api.github.com/users/ybouane/repos",
"events_url": "https://api.github.com/users/ybouane/events{/privacy}",
"received_events_url": "https://api.github.com/users/ybouane/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Currently, it's not possible to chain `set_format`/`set_transform` calls (plus, this is a breaking change if we decide to implement it), so I see two possible solutions:\r\n* using `set_format`/`set_transform` for the 1st transform and then passing the transformed example/batch to the 2nd transform\r\n* implementing and registering a custom formatter (the relevant code is [here](https://github.com/huggingface/datasets/tree/main/src/datasets/formatting))\r\n\r\nBtw, your example requires a single `set_format` call:\r\n```python\r\nds.set_format(\"torch\", columns=[\"image\"], output_all_columns=True, dtype=torch.double)\r\n```",
"Hey Mario,\r\nThanks, for getting back to me. the toDouble was just an example my real life case requires many more transforms.\r\n\r\nWhat do you mean by:\r\n> using set_format/set_transform for the 1st transform and then passing the transformed example/batch to the 2nd transform\r\n\r\nHow would that go, I thought you can't chain them?\r\n\r\nAs for the custom formatter, is it possible to reference an existing formatter, in my case `torch_formatter` inside of my custom formatter?\r\n\r\nmaybe I can inherit from it and just call `super.recursive_tensorize()`?",
"> How would that go, I thought you can't chain them?\r\n\r\nYes, they cannot be chained. This is what I meant:\r\n```python\r\nds.set_transform(first_transform)\r\n# calling the 2nd transform on each accessed batch\r\nsecond_transform(ds[2:3])\r\n```\r\n\r\n> As for the custom formatter, is it possible to reference an existing formatter, in my case torch_formatter inside of my custom formatter?\r\n>\r\n>maybe I can inherit from it and just call super.recursive_tensorize()?\r\n\r\nYes, subclassing makes the most sense.",
"Great, thank you for the details.",
"https://github.com/huggingface/datasets/issues/6012"
] | 2023-05-27T19:22:23 | 2023-07-09T21:40:54 | 2023-06-16T14:41:24 | NONE | null | null | null | ### Describe the bug
I need to process some data with the `set_transform` method, but I also need the data to be formatted for PyTorch before my transform runs.
Nothing in the documentation says that the two methods cannot be used at the same time.
### Steps to reproduce the bug
```
from datasets import load_dataset
ds = load_dataset("mnist", split="train")
ds.set_format(type="torch")
def transform(entry):
return entry["image"].double()
ds.set_transform(transform)
print(ds[0])
```
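A workaround sketch of my own (an assumption about a viable fix, not the reporter's code): since `set_transform` replaces any previously set format, the tensor conversion can simply be done inside the transform itself.
```python
import numpy as np
import torch
from datasets import load_dataset

ds = load_dataset("mnist", split="train")

def transform(batch):
    # The transform receives decoded PIL images, so tensorize them manually.
    batch["image"] = [torch.from_numpy(np.array(image)).double() for image in batch["image"]]
    return batch

ds.set_transform(transform)
print(ds[0]["image"].dtype)  # torch.float64
```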
### Expected behavior
It should print the image as a double-precision PyTorch tensor, but it errors because `entry` in the transform function never receives a PyTorch tensor in the first place: it receives a PIL `Image`, so calling `entry.double()` fails.
### Environment info
Latest versions.
### Note:
It would at least be handy to have access to a function that can apply the `dataset.set_format` conversion inside the `set_transform` function.
Something like:
```
from datasets import load_dataset, do_format
ds = load_dataset("mnist", split="train")
def transform(entry):
entry = do_format(entry, type="torch")
return entry["image"].double()
ds.set_transform(transform)
print(ds[0])
``` | {
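No such `do_format` helper exists in the library today; a rough stand-in one could write (purely hypothetical, names included) would tensorize a decoded batch by hand:
```python
import numpy as np
import torch

# Hypothetical stand-in for the proposed do_format(entry, type="torch"):
# convert every value of a decoded batch into a torch tensor.
def do_format_torch(batch):
    return {key: [torch.from_numpy(np.asarray(value)) for value in values] for key, values in batch.items()}
```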
"url": "https://api.github.com/repos/huggingface/datasets/issues/5910/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5910/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5909/comments | https://api.github.com/repos/huggingface/datasets/issues/5909/events | https://github.com/huggingface/datasets/pull/5909 | 1,728,900,068 | PR_kwDODunzps5Rgga6 | 5,909 | Use more efficient and idiomatic way to construct list. | {
"login": "ttsugriy",
"id": 172294,
"node_id": "MDQ6VXNlcjE3MjI5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/172294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ttsugriy",
"html_url": "https://github.com/ttsugriy",
"followers_url": "https://api.github.com/users/ttsugriy/followers",
"following_url": "https://api.github.com/users/ttsugriy/following{/other_user}",
"gists_url": "https://api.github.com/users/ttsugriy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ttsugriy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ttsugriy/subscriptions",
"organizations_url": "https://api.github.com/users/ttsugriy/orgs",
"repos_url": "https://api.github.com/users/ttsugriy/repos",
"events_url": "https://api.github.com/users/ttsugriy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ttsugriy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008156 / 0.011353 (-0.003197) | 0.005563 / 0.011008 (-0.005445) | 0.118319 / 0.038508 (0.079810) | 0.044305 / 0.023109 (0.021195) | 0.366221 / 0.275898 (0.090323) | 0.407585 / 0.323480 (0.084105) | 0.006961 / 0.007986 (-0.001024) | 0.004841 / 0.004328 (0.000513) | 0.089949 / 0.004250 (0.085698) | 0.062197 / 0.037052 (0.025144) | 0.360721 / 0.258489 (0.102232) | 0.415332 / 0.293841 (0.121491) | 0.035709 / 0.128546 (-0.092837) | 0.010617 / 0.075646 (-0.065030) | 0.397454 / 0.419271 (-0.021817) | 0.063490 / 0.043533 (0.019958) | 0.374289 / 0.255139 (0.119150) | 0.382827 / 0.283200 (0.099628) | 0.121014 / 0.141683 (-0.020669) | 1.729933 / 1.452155 (0.277779) | 1.896222 / 1.492716 (0.403506) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254030 / 0.018006 (0.236023) | 0.491225 / 0.000490 (0.490736) | 0.018933 / 0.000200 (0.018734) | 0.000413 / 0.000054 (0.000358) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033085 / 0.037411 (-0.004327) | 0.132837 / 0.014526 (0.118311) | 0.143275 / 0.176557 (-0.033282) | 0.215800 / 0.737135 (-0.521335) | 0.149802 / 0.296338 (-0.146536) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474688 / 0.215209 (0.259479) | 4.743223 / 2.077655 (2.665569) | 2.163107 / 1.504120 (0.658988) | 1.946396 / 1.541195 (0.405201) | 2.057538 / 1.468490 
(0.589047) | 0.618836 / 4.584777 (-3.965941) | 4.605934 / 3.745712 (0.860222) | 2.201537 / 5.269862 (-3.068324) | 1.275758 / 4.565676 (-3.289919) | 0.077782 / 0.424275 (-0.346493) | 0.014830 / 0.007607 (0.007223) | 0.593372 / 0.226044 (0.367328) | 5.927000 / 2.268929 (3.658072) | 2.687293 / 55.444624 (-52.757331) | 2.301797 / 6.876477 (-4.574679) | 2.489928 / 2.142072 (0.347856) | 0.756779 / 4.805227 (-4.048449) | 0.168065 / 6.500664 (-6.332600) | 0.077276 / 0.075469 (0.001807) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.608169 / 1.841788 (-0.233619) | 19.048790 / 8.074308 (10.974482) | 16.100228 / 10.191392 (5.908836) | 0.215346 / 0.680424 (-0.465077) | 0.022293 / 0.534201 (-0.511907) | 0.535899 / 0.579283 (-0.043384) | 0.533729 / 0.434364 (0.099365) | 0.562697 / 0.540337 (0.022360) | 0.764082 / 1.386936 (-0.622854) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010087 / 0.011353 (-0.001266) | 0.005357 / 0.011008 (-0.005651) | 0.092678 / 0.038508 (0.054170) | 0.041207 / 0.023109 (0.018098) | 0.437464 / 0.275898 (0.161566) | 0.527867 / 0.323480 (0.204387) | 0.006861 / 0.007986 (-0.001125) | 0.006131 / 0.004328 (0.001802) | 0.093741 / 0.004250 (0.089490) | 0.064142 / 0.037052 (0.027090) | 0.433577 / 0.258489 (0.175088) | 0.537148 / 0.293841 (0.243307) | 0.035339 / 0.128546 (-0.093207) | 0.010432 / 0.075646 (-0.065214) | 0.102838 / 0.419271 (-0.316434) | 0.057905 / 0.043533 (0.014372) | 0.437956 / 0.255139 (0.182817) | 0.509562 / 0.283200 (0.226362) | 0.120620 / 0.141683 (-0.021063) | 1.798686 / 1.452155 (0.346531) | 2.013290 / 1.492716 (0.520574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249067 / 0.018006 (0.231061) | 0.462219 / 0.000490 (0.461729) | 0.000476 / 0.000200 (0.000276) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033988 / 0.037411 (-0.003424) | 0.135863 / 0.014526 (0.121337) | 0.144082 / 0.176557 (-0.032474) | 0.201715 / 0.737135 (-0.535421) | 0.152079 / 0.296338 (-0.144259) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522820 / 0.215209 (0.307611) | 5.216723 / 2.077655 (3.139068) | 2.582355 / 1.504120 (1.078235) | 2.352799 / 1.541195 (0.811604) | 2.451943 / 1.468490 (0.983453) | 0.620381 / 4.584777 (-3.964396) | 4.537841 / 3.745712 (0.792129) | 2.206431 / 5.269862 (-3.063431) | 1.269865 / 4.565676 (-3.295811) | 0.078744 / 0.424275 (-0.345531) | 0.014375 / 0.007607 (0.006768) | 0.648215 / 0.226044 (0.422171) | 6.482809 / 2.268929 (4.213881) | 3.210670 / 55.444624 (-52.233954) | 2.847485 / 6.876477 (-4.028992) | 2.820946 / 2.142072 (0.678873) | 0.762711 / 4.805227 (-4.042516) | 0.171235 / 6.500664 (-6.329429) | 0.080230 / 0.075469 (0.004761) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.646840 / 1.841788 (-0.194948) | 19.400451 / 8.074308 (11.326142) | 16.758845 / 10.191392 (6.567453) | 0.171377 / 0.680424 (-0.509046) | 0.020400 / 0.534201 (-0.513801) | 0.467675 / 0.579283 (-0.111608) | 0.529745 / 0.434364 (0.095381) | 0.605989 / 0.540337 (0.065652) | 0.694659 / 1.386936 (-0.692277) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#006bf33ac5c308f9c70f4df4868abd539eb6c366 \"CML watermark\")\n",
"It's faster because all the items are the same object, but this also means modifying one of them will alter each unless these items are immutable, and they are in this case (tuples). So we should be careful when using this idiom."
] | 2023-05-27T18:54:47 | 2023-05-31T15:37:11 | 2023-05-31T13:28:29 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5909",
"html_url": "https://github.com/huggingface/datasets/pull/5909",
"diff_url": "https://github.com/huggingface/datasets/pull/5909.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5909.patch",
"merged_at": "2023-05-31T13:28:28"
} | Using `*` is ~2X faster according to [benchmark](https://colab.research.google.com/gist/ttsugriy/c964a2604edf70c41911b10335729b6a/for-vs-mult.ipynb) with just 4 patterns. This doesn't matter much since this tiny difference is not going to be noticeable, but why not? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5909/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5908/comments | https://api.github.com/repos/huggingface/datasets/issues/5908/events | https://github.com/huggingface/datasets/issues/5908 | 1,728,653,935 | I_kwDODunzps5nCSpv | 5,908 | Unbearably slow sorting on big mapped datasets | {
"login": "maximxlss",
"id": 29152154,
"node_id": "MDQ6VXNlcjI5MTUyMTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/29152154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maximxlss",
"html_url": "https://github.com/maximxlss",
"followers_url": "https://api.github.com/users/maximxlss/followers",
"following_url": "https://api.github.com/users/maximxlss/following{/other_user}",
"gists_url": "https://api.github.com/users/maximxlss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maximxlss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maximxlss/subscriptions",
"organizations_url": "https://api.github.com/users/maximxlss/orgs",
"repos_url": "https://api.github.com/users/maximxlss/repos",
"events_url": "https://api.github.com/users/maximxlss/events{/privacy}",
"received_events_url": "https://api.github.com/users/maximxlss/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! `shard` currently returns a slow dataset by default, with examples evenly distributed in the dataset.\r\n\r\nYou can get a fast dataset using `contiguous=True` (which should be the default imo):\r\n\r\n```python\r\ndataset = dataset.shard(10, 0, contiguous=True)\r\n```\r\n\r\nThis way you don't need to flatten_indices() and sort should be fast as well",
"@lhoestq \r\n\r\n> contiguous=True (which should be the default imo)\r\n\r\nFor `IterableDataset`, it's not possible to implement contiguous sharding without knowing the number of examples in advance, so setting the default value to `contiguous=True` would result in an inconsistency between `Dataset` and `IterableDataset` (when we add `IterableDataset.shard`)",
"Actually sharded iterable datasets are made of sub iterables that generally yield contiguous data no ? So in a way it's possible to shard an iterable dataset contiguously.\r\n\r\nIf the dataset is made of one shard it's indeed not possible to shard it contiguously though",
"> Actually sharded iterable datasets are made of sub iterables that generally yield contiguous data no ? So in a way it's possible to shard an iterable dataset contiguously.\r\n\r\nBut sharding an iterable dataset by sharding its `gen_kwargs` would still yield approximate shards(not equal to `Dataset.shard`), no? ",
"Yes indeed !",
"I understand the issue doesn't exist with non-mapped datasets, but if flattening is so much more efficient than sorting the indices, that's an issue in itself.\n\nThere are plenty of issues people posted for which the root cause turns out to be the same. It seems like mapped datasets are terribly inefficient. I think I saw some issue like that somewhere (about the mapped datasets in general), but can't find it now.\n\nMaybe indices should be flattened before any additional processing, then."
] | 2023-05-27T11:08:32 | 2023-06-13T17:45:10 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
For me, with ~40k lines, sorting took 3.5 seconds on a flattened dataset (including the flatten operation) and 22.7 seconds on a mapped dataset (right after sharding), which is about a 5x slowdown. Moreover, it seems to slow down exponentially with bigger datasets (I wasn't able to sort 700k lines at all, while with flattening it takes about a minute).
### Steps to reproduce the bug
```Python
from datasets import load_dataset
import time
dataset = load_dataset("xnli", "en", split="train")
dataset = dataset.shard(10, 0)
print(len(dataset))
t = time.time()
# dataset = dataset.flatten_indices() # uncomment this line and it's fast
dataset = dataset.sort("label", reverse=True, load_from_cache_file=False)
print(f"finished in {time.time() - t:.4f} seconds")
```
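For reference, a minimal sketch of the workaround suggested in the discussion above; a contiguous shard carries no indices mapping, so no `flatten_indices()` call is needed before sorting:

```python
from datasets import load_dataset

dataset = load_dataset("xnli", "en", split="train")
# contiguous=True returns a contiguous slice without a slow indices mapping
dataset = dataset.shard(10, 0, contiguous=True)
dataset = dataset.sort("label", reverse=True, load_from_cache_file=False)
```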
### Expected behavior
Expect sorting to take the same or less time than flattening and then sorting.
### Environment info
- `datasets` version: 2.12.1.dev0 (same with 2.12.0 too)
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5908/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5907/comments | https://api.github.com/repos/huggingface/datasets/issues/5907/events | https://github.com/huggingface/datasets/pull/5907 | 1,728,648,560 | PR_kwDODunzps5RfqUU | 5,907 | Add `flatten_indices` to `DatasetDict` | {
"login": "maximxlss",
"id": 29152154,
"node_id": "MDQ6VXNlcjI5MTUyMTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/29152154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maximxlss",
"html_url": "https://github.com/maximxlss",
"followers_url": "https://api.github.com/users/maximxlss/followers",
"following_url": "https://api.github.com/users/maximxlss/following{/other_user}",
"gists_url": "https://api.github.com/users/maximxlss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maximxlss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maximxlss/subscriptions",
"organizations_url": "https://api.github.com/users/maximxlss/orgs",
"repos_url": "https://api.github.com/users/maximxlss/repos",
"events_url": "https://api.github.com/users/maximxlss/events{/privacy}",
"received_events_url": "https://api.github.com/users/maximxlss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006192 / 0.011353 (-0.005161) | 0.004410 / 0.011008 (-0.006598) | 0.095990 / 0.038508 (0.057482) | 0.032662 / 0.023109 (0.009553) | 0.322827 / 0.275898 (0.046929) | 0.352542 / 0.323480 (0.029062) | 0.005398 / 0.007986 (-0.002588) | 0.003926 / 0.004328 (-0.000403) | 0.075131 / 0.004250 (0.070880) | 0.046205 / 0.037052 (0.009153) | 0.330957 / 0.258489 (0.072468) | 0.360166 / 0.293841 (0.066325) | 0.027880 / 0.128546 (-0.100666) | 0.008813 / 0.075646 (-0.066833) | 0.327316 / 0.419271 (-0.091955) | 0.050071 / 0.043533 (0.006539) | 0.319939 / 0.255139 (0.064800) | 0.331593 / 0.283200 (0.048393) | 0.096745 / 0.141683 (-0.044938) | 1.445165 / 1.452155 (-0.006990) | 1.515538 / 1.492716 (0.022821) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209365 / 0.018006 (0.191358) | 0.437007 / 0.000490 (0.436518) | 0.003207 / 0.000200 (0.003007) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027261 / 0.037411 (-0.010151) | 0.105101 / 0.014526 (0.090575) | 0.117163 / 0.176557 (-0.059394) | 0.176237 / 0.737135 (-0.560898) | 0.122559 / 0.296338 (-0.173779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406792 / 0.215209 (0.191583) | 4.060831 / 2.077655 (1.983176) | 1.829691 / 1.504120 (0.325571) | 1.633155 / 1.541195 (0.091960) | 1.704817 / 1.468490 
(0.236327) | 0.525325 / 4.584777 (-4.059452) | 3.752907 / 3.745712 (0.007194) | 1.857513 / 5.269862 (-3.412349) | 1.222237 / 4.565676 (-3.343439) | 0.065941 / 0.424275 (-0.358334) | 0.012498 / 0.007607 (0.004891) | 0.495009 / 0.226044 (0.268965) | 4.968074 / 2.268929 (2.699145) | 2.277898 / 55.444624 (-53.166727) | 1.936656 / 6.876477 (-4.939821) | 1.970698 / 2.142072 (-0.171374) | 0.635221 / 4.805227 (-4.170006) | 0.140539 / 6.500664 (-6.360125) | 0.064111 / 0.075469 (-0.011358) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238151 / 1.841788 (-0.603637) | 14.681262 / 8.074308 (6.606954) | 13.405525 / 10.191392 (3.214133) | 0.163225 / 0.680424 (-0.517199) | 0.017282 / 0.534201 (-0.516918) | 0.395526 / 0.579283 (-0.183757) | 0.429156 / 0.434364 (-0.005208) | 0.470806 / 0.540337 (-0.069531) | 0.571290 / 1.386936 (-0.815646) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006444 / 0.011353 (-0.004909) | 0.004388 / 0.011008 (-0.006621) | 0.075004 / 0.038508 (0.036496) | 0.032904 / 0.023109 (0.009795) | 0.375360 / 0.275898 (0.099462) | 0.413684 / 0.323480 (0.090204) | 0.005854 / 0.007986 (-0.002132) | 0.005504 / 0.004328 (0.001175) | 0.075049 / 0.004250 (0.070799) | 0.047973 / 0.037052 (0.010920) | 0.377943 / 0.258489 (0.119454) | 0.427039 / 0.293841 (0.133198) | 0.028248 / 0.128546 (-0.100298) | 0.008972 / 0.075646 (-0.066674) | 0.081848 / 0.419271 (-0.337424) | 0.047935 / 0.043533 (0.004402) | 0.377980 / 0.255139 (0.122841) | 0.407856 / 0.283200 (0.124656) | 0.103454 / 0.141683 (-0.038229) | 1.469051 / 1.452155 (0.016896) | 1.590657 / 1.492716 (0.097941) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192380 / 0.018006 (0.174374) | 0.440995 / 0.000490 (0.440505) | 0.004082 / 0.000200 (0.003882) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029584 / 0.037411 (-0.007828) | 0.110051 / 0.014526 (0.095525) | 0.121196 / 0.176557 (-0.055361) | 0.172249 / 0.737135 (-0.564886) | 0.125380 / 0.296338 (-0.170958) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435218 / 0.215209 (0.220009) | 4.354811 / 2.077655 (2.277156) | 2.102050 / 1.504120 (0.597930) | 1.913454 / 1.541195 (0.372260) | 1.974624 / 1.468490 (0.506134) | 0.529975 / 4.584777 (-4.054802) | 3.801605 / 3.745712 (0.055893) | 3.162408 / 5.269862 (-2.107454) | 1.599576 / 4.565676 (-2.966101) | 0.066710 / 0.424275 (-0.357565) | 0.012158 / 0.007607 (0.004551) | 0.549187 / 0.226044 (0.323142) | 5.489930 / 2.268929 (3.221002) | 2.646787 / 55.444624 (-52.797837) | 2.311915 / 6.876477 (-4.564562) | 2.335645 / 2.142072 (0.193572) | 0.641067 / 4.805227 (-4.164160) | 0.142227 / 6.500664 (-6.358437) | 0.065303 / 0.075469 (-0.010166) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.283209 / 1.841788 (-0.558579) | 15.241809 / 8.074308 (7.167501) | 14.131471 / 10.191392 (3.940079) | 0.143921 / 0.680424 (-0.536503) | 0.017497 / 0.534201 (-0.516704) | 0.402236 / 0.579283 (-0.177047) | 0.418917 / 0.434364 (-0.015447) | 0.461745 / 0.540337 (-0.078593) | 0.560212 / 1.386936 (-0.826724) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7098922130cabfbfa6b8a3885ff2e6f032d6203d \"CML watermark\")\n"
] | 2023-05-27T10:55:44 | 2023-06-01T11:46:35 | 2023-06-01T11:39:36 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5907",
"html_url": "https://github.com/huggingface/datasets/pull/5907",
"diff_url": "https://github.com/huggingface/datasets/pull/5907.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5907.patch",
"merged_at": "2023-06-01T11:39:35"
} | Add `flatten_indices` to `DatasetDict` for convenience | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5907/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5906/comments | https://api.github.com/repos/huggingface/datasets/issues/5906/events | https://github.com/huggingface/datasets/issues/5906 | 1,728,171,113 | I_kwDODunzps5nAcxp | 5,906 | Could you unpin responses version? | {
"login": "kenimou",
"id": 47789026,
"node_id": "MDQ6VXNlcjQ3Nzg5MDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/47789026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kenimou",
"html_url": "https://github.com/kenimou",
"followers_url": "https://api.github.com/users/kenimou/followers",
"following_url": "https://api.github.com/users/kenimou/following{/other_user}",
"gists_url": "https://api.github.com/users/kenimou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kenimou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kenimou/subscriptions",
"organizations_url": "https://api.github.com/users/kenimou/orgs",
"repos_url": "https://api.github.com/users/kenimou/repos",
"events_url": "https://api.github.com/users/kenimou/events{/privacy}",
"received_events_url": "https://api.github.com/users/kenimou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-05-26T20:02:14 | 2023-05-30T17:53:31 | 2023-05-30T17:53:31 | NONE | null | null | null | ### Describe the bug
Could you unpin [this](https://github.com/huggingface/datasets/blob/main/setup.py#L139) or move it to the test requirements? `responses` is a testing library, and we use it for our own tests as well. We do not want to use a very outdated version.
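For illustration, a hypothetical sketch of what moving the pin into a test-only extra could look like in `setup.py` (the variable names and pin below are illustrative, not the actual file contents):

```python
# setup.py (illustrative sketch, not the real file)
install_requires = [
    # ... runtime dependencies, with no `responses` pin ...
]

extras_require = {
    "tests": [
        "responses",  # hypothetical: unpinned (or loosely pinned), only needed for the test suite
    ],
}
```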
### Steps to reproduce the bug
Could not install this library due to a dependency conflict.
### Expected behavior
Being able to install `datasets` without a dependency conflict.
### Environment info
linux 64 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5906/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5905/comments | https://api.github.com/repos/huggingface/datasets/issues/5905/events | https://github.com/huggingface/datasets/issues/5905 | 1,727,541,392 | I_kwDODunzps5m-DCQ | 5,905 | Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently | {
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"We plan to improve this eventually (see https://github.com/huggingface/datasets/issues/5454 and https://github.com/huggingface/datasets/issues/5380).\r\n\r\n> Is it possible to lazily load samples of a mapped dataset ? I'm used to [dataset scripts](https://huggingface.co/docs/datasets/dataset_script), maybe something can be done there.\r\nIf not, I could do it using a plain Pytorch dataset. Then I would need to convert it to a datasets' dataset to get all the features of datasets. Is it something possible ?\r\n\r\nYes, by creating a mapped dataset that stores audio URLs. Indexing a dataset in such format only downloads and decodes the bytes of the accessed samples (without storing them on disk).\r\n\r\nYou can do the following to create this dataset:\r\n```python\r\n\r\ndef gen():\r\n # Generator that yields (audio URL, text) pairs as dict\r\n ...\r\n yield {\"audio\": \"audio_url\", \"text\": \"some text\"}\r\n\r\nfeatures = Features({\"audio\": datasets.Audio(), \"text\": datasets.Value(\"string\")})\r\nds = Dataset.from_generator(gen, features=features)\r\nds[2:5] # downloads and decodes the samples each time they are accessed\r\n```"
] | 2023-05-26T12:33:02 | 2023-06-15T13:34:18 | null | CONTRIBUTOR | null | null | null | ### Feature request
I would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset.
### Motivation
I am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on disk, and also quite computationally intensive audio processing to do. As a result, I want to load data from remote storage when it is needed and perform all processing on the fly.
I am currently using the iterable dataset feature of _datasets_. It does everything I need, with one exception. My issue is that when resuming training at a step n, we have to download all the data and perform the processing of steps < n, just to get the iterable to the right step. In my case this takes almost as long as training for the same steps, which makes resuming training from a checkpoint useless in practice.
I understand that the nature of iterators probably makes it nearly impossible to resume quickly.
I thought about a possible solution nonetheless:
I could in fact index my large dataset and make it a mapped dataset. Then I could use `set_transform` to perform the processing on the fly. Finally, if I'm not mistaken, the _accelerate_ package allows [skipping steps efficiently](https://github.com/huggingface/accelerate/blob/a73898027a211c3f6dc4460351b0ec246aa824aa/src/accelerate/data_loader.py#L827) for a mapped dataset.
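As a rough illustration of that idea, a minimal sketch of on-the-fly processing on an indexed (mapped) dataset via `set_transform`; `load_and_process_audio` is a hypothetical user function, not part of `datasets`:

```python
from datasets import Dataset

# index the data as a mapped dataset and defer the heavy processing to access time
ds = Dataset.from_dict({
    "audio_url": ["https://example.com/a.wav", "https://example.com/b.wav"],
    "text": ["hello", "world"],
})

def load_and_process_audio(url):
    ...  # hypothetical: download bytes, decode, compute features

def on_the_fly(batch):
    # set_transform passes a batch (dict of lists) and uses the returned dict
    batch["input_features"] = [load_and_process_audio(u) for u in batch["audio_url"]]
    return batch

ds.set_transform(on_the_fly)  # nothing is precomputed or written to disk
sample = ds[0]  # download + processing happen here, at access time
```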
Is it possible to lazily load samples of a mapped dataset? I'm used to [dataset scripts](https://huggingface.co/docs/datasets/dataset_script); maybe something can be done there.
If not, I could do it using a plain _PyTorch_ dataset. Then I would need to convert it to a _datasets_ dataset to get all the features of _datasets_. Is that possible?
### Your contribution
I could provide a PR to allow lazy loading of mapped datasets, or the conversion of a mapped _PyTorch_ dataset into a _datasets_ dataset, if you think it is a useful new feature. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5905/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5904/comments | https://api.github.com/repos/huggingface/datasets/issues/5904/events | https://github.com/huggingface/datasets/pull/5904 | 1,727,415,626 | PR_kwDODunzps5Rbfks | 5,904 | Validate name parameter in make_file_instructions | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007401 / 0.011353 (-0.003952) | 0.005198 / 0.011008 (-0.005810) | 0.112317 / 0.038508 (0.073809) | 0.038406 / 0.023109 (0.015297) | 0.358008 / 0.275898 (0.082110) | 0.395350 / 0.323480 (0.071870) | 0.006201 / 0.007986 (-0.001785) | 0.004368 / 0.004328 (0.000039) | 0.087718 / 0.004250 (0.083467) | 0.055299 / 0.037052 (0.018247) | 0.350481 / 0.258489 (0.091992) | 0.419876 / 0.293841 (0.126035) | 0.032459 / 0.128546 (-0.096087) | 0.010635 / 0.075646 (-0.065011) | 0.383282 / 0.419271 (-0.035989) | 0.059241 / 0.043533 (0.015708) | 0.365101 / 0.255139 (0.109962) | 0.378144 / 0.283200 (0.094944) | 0.114287 / 0.141683 (-0.027396) | 1.680870 / 1.452155 (0.228715) | 1.788183 / 1.492716 (0.295467) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242919 / 0.018006 (0.224913) | 0.489850 / 0.000490 (0.489360) | 0.011408 / 0.000200 (0.011208) | 0.000444 / 0.000054 (0.000389) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030742 / 0.037411 (-0.006669) | 0.123092 / 0.014526 (0.108566) | 0.138246 / 0.176557 (-0.038311) | 0.207299 / 0.737135 (-0.529836) | 0.142647 / 0.296338 (-0.153691) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472553 / 0.215209 (0.257344) | 4.671763 / 2.077655 (2.594108) | 2.119986 / 1.504120 (0.615866) | 1.891851 / 1.541195 (0.350656) | 1.979094 / 1.468490 
(0.510604) | 0.617956 / 4.584777 (-3.966821) | 4.969418 / 3.745712 (1.223706) | 4.672083 / 5.269862 (-0.597779) | 2.119049 / 4.565676 (-2.446627) | 0.077466 / 0.424275 (-0.346809) | 0.014434 / 0.007607 (0.006827) | 0.580746 / 0.226044 (0.354701) | 5.805458 / 2.268929 (3.536530) | 2.622498 / 55.444624 (-52.822126) | 2.259499 / 6.876477 (-4.616978) | 2.362078 / 2.142072 (0.220006) | 0.719911 / 4.805227 (-4.085317) | 0.164939 / 6.500664 (-6.335725) | 0.074762 / 0.075469 (-0.000707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.496709 / 1.841788 (-0.345079) | 18.247499 / 8.074308 (10.173191) | 15.397075 / 10.191392 (5.205683) | 0.181163 / 0.680424 (-0.499261) | 0.022604 / 0.534201 (-0.511597) | 0.462791 / 0.579283 (-0.116492) | 0.504473 / 0.434364 (0.070109) | 0.582254 / 0.540337 (0.041917) | 0.673849 / 1.386936 (-0.713087) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007633 / 0.011353 (-0.003720) | 0.004859 / 0.011008 (-0.006149) | 0.091194 / 0.038508 (0.052686) | 0.038255 / 0.023109 (0.015146) | 0.460972 / 0.275898 (0.185074) | 0.470441 / 0.323480 (0.146961) | 0.006482 / 0.007986 (-0.001504) | 0.004500 / 0.004328 (0.000172) | 0.089998 / 0.004250 (0.085748) | 0.055470 / 0.037052 (0.018418) | 0.459188 / 0.258489 (0.200699) | 0.491255 / 0.293841 (0.197414) | 0.032200 / 0.128546 (-0.096346) | 0.010372 / 0.075646 (-0.065274) | 0.097429 / 0.419271 (-0.321843) | 0.052469 / 0.043533 (0.008936) | 0.452492 / 0.255139 (0.197353) | 0.475210 / 0.283200 (0.192010) | 0.116976 / 0.141683 (-0.024707) | 1.752742 / 1.452155 (0.300587) | 1.849535 / 1.492716 (0.356819) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229822 / 0.018006 (0.211816) | 0.472259 / 0.000490 (0.471770) | 0.000455 / 0.000200 (0.000255) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033796 / 0.037411 (-0.003615) | 0.136151 / 0.014526 (0.121625) | 0.144015 / 0.176557 (-0.032542) | 0.199337 / 0.737135 (-0.537798) | 0.150024 / 0.296338 (-0.146315) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522737 / 0.215209 (0.307528) | 5.165223 / 2.077655 (3.087568) | 2.630334 / 1.504120 (1.126214) | 2.392383 / 1.541195 (0.851188) | 2.488966 / 1.468490 (1.020476) | 0.608981 / 4.584777 (-3.975796) | 4.711545 / 3.745712 (0.965833) | 2.121537 / 5.269862 (-3.148325) | 1.205477 / 4.565676 (-3.360199) | 0.078277 / 0.424275 (-0.345998) | 0.014175 / 0.007607 (0.006568) | 0.640720 / 0.226044 (0.414675) | 6.391173 / 2.268929 (4.122245) | 3.265131 / 55.444624 (-52.179493) | 2.939188 / 6.876477 (-3.937289) | 2.919217 / 2.142072 (0.777145) | 0.745095 / 4.805227 (-4.060132) | 0.164065 / 6.500664 (-6.336599) | 0.076993 / 0.075469 (0.001524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.539971 / 1.841788 (-0.301817) | 18.597296 / 8.074308 (10.522988) | 16.899330 / 10.191392 (6.707938) | 0.169005 / 0.680424 (-0.511419) | 0.020447 / 0.534201 (-0.513754) | 0.465862 / 0.579283 (-0.113421) | 0.522819 / 0.434364 (0.088455) | 0.547111 / 0.540337 (0.006773) | 0.657777 / 1.386936 (-0.729159) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#56aff9ecb4e565eb95faad525558914648cc22f1 \"CML watermark\")\n"
] | 2023-05-26T11:12:46 | 2023-05-31T07:43:32 | 2023-05-31T07:34:57 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5904",
"html_url": "https://github.com/huggingface/datasets/pull/5904",
"diff_url": "https://github.com/huggingface/datasets/pull/5904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5904.patch",
"merged_at": "2023-05-31T07:34:57"
} | Validate `name` parameter in `make_file_instructions`.
This way users get more informative error messages, instead of:
```stacktrace
.../huggingface/datasets/src/datasets/arrow_reader.py in make_file_instructions(name, split_infos, instruction, filetype_suffix, prefix_path)
110 name2len = {info.name: info.num_examples for info in split_infos}
111 name2shard_lengths = {info.name: info.shard_lengths for info in split_infos}
--> 112 name2filenames = {
113 info.name: filenames_for_dataset_split(
114 path=prefix_path,
.../huggingface/datasets/src/datasets/arrow_reader.py in <dictcomp>(.0)
111 name2shard_lengths = {info.name: info.shard_lengths for info in split_infos}
112 name2filenames = {
--> 113 info.name: filenames_for_dataset_split(
114 path=prefix_path,
115 dataset_name=name,
.../huggingface/datasets/src/datasets/naming.py in filenames_for_dataset_split(path, dataset_name, split, filetype_suffix, shard_lengths)
68
69 def filenames_for_dataset_split(path, dataset_name, split, filetype_suffix=None, shard_lengths=None):
---> 70 prefix = filename_prefix_for_split(dataset_name, split)
71 prefix = os.path.join(path, prefix)
72
.../huggingface/datasets/src/datasets/naming.py in filename_prefix_for_split(name, split)
52
53 def filename_prefix_for_split(name, split):
---> 54 if os.path.basename(name) != name:
55 raise ValueError(f"Should be a dataset name, not a path: {name}")
56 if not re.match(_split_re, split):
.../lib/python3.9/posixpath.py in basename(p)
140 def basename(p):
141 """Returns the final component of a pathname"""
--> 142 p = os.fspath(p)
143 sep = _get_sep(p)
144 i = p.rfind(sep) + 1
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
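For illustration, a hypothetical sketch of the kind of early check this PR adds, so a missing dataset name fails with an informative error instead of the `TypeError` from `os.path.basename(None)` shown above (the exact message and signature in the merged code may differ):

```python
def make_file_instructions(name, split_infos, instruction, filetype_suffix=None, prefix_path=None):
    # hypothetical validation: fail fast with a clear message if `name` is not a str
    if not isinstance(name, str):
        raise TypeError(f"Expected str 'name', but got: {type(name).__name__}")
    ...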
Related to #5895. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5904/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5903/comments | https://api.github.com/repos/huggingface/datasets/issues/5903/events | https://github.com/huggingface/datasets/pull/5903 | 1,727,372,549 | PR_kwDODunzps5RbV82 | 5,903 | Relax `ci.yml` trigger for `pull_request` based on modified paths | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Also this could be extended to the rest of the GitHub Action `yml` files, so let me know whether you want me to have a look into it! 🤗",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5903). All of your documentation changes will be reflected on that endpoint."
] | 2023-05-26T10:46:52 | 2023-05-26T10:51:37 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5903",
"html_url": "https://github.com/huggingface/datasets/pull/5903",
"diff_url": "https://github.com/huggingface/datasets/pull/5903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5903.patch",
"merged_at": null
} | ## What's in this PR?
In a previous PR, #5902, I noticed that the CI was automatically triggered on any file change, in that case when modifying a Jupyter Notebook (.ipynb), which IMO could be skipped, as a modification to a Jupyter Notebook has no impact on the `ci.yml` outcome. So this PR restricts the paths that trigger `ci.yml`, to avoid wasting resources when it's not needed.
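As a rough sketch (the ignored paths below are illustrative, not necessarily the exact list in this PR), GitHub Actions supports `paths`/`paths-ignore` filters on the `pull_request` trigger:

```yaml
# .github/workflows/ci.yml (illustrative excerpt)
on:
  pull_request:
    paths-ignore:
      - "**.ipynb"  # e.g. skip CI for notebook-only changes
      - "docs/**"
```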
## What's pending in this PR?
I would like to confirm whether this should affect both `push` and `pull_request`: since modifications to just those files won't change the `ci.yml` outcome, it may be worth skipping them in the `push` trigger too. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5903/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5902/comments | https://api.github.com/repos/huggingface/datasets/issues/5902/events | https://github.com/huggingface/datasets/pull/5902 | 1,727,342,194 | PR_kwDODunzps5RbPS9 | 5,902 | Fix `Overview.ipynb` & detach Jupyter Notebooks from `datasets` repository | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Random fact: previous run was showing that the Hub was hosting 13336 datasets, while the most recent run shows 36662 👀🎉",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks! \r\n\r\nHowever, I think we should stop linking this notebook and use the notebook version of the Quickstart doc page instead of it for easier maintenance (we would have the \"Open in Colab\" button in the Quickstart doc as Transformers [does](https://huggingface.co/docs/transformers/quicktour)). \r\n\r\n@stevhliu should be able to help with this. If I'm not mistaken, this can be done by adding the `[[open in colab]]` marker to the doc page.\r\n\r\nAlso, if some useful info from the Overview notebook is not in the docs, feel free to add it so we don't lose it 🙂.",
"Cool, makes sense @mariosasko, then I'll check both notebooks and see whether there's something in `Overview.ipynb` worth including in the `docs/source/quickstart.mdx` and remove `Overview.ipynb` and update references in favour of `docs/source/quickstart.mdx`\r\n\r\nAre you OK if I do that @stevhliu @mariosasko? Thanks 🤗 ",
"For the moment I've just updated the `quickstart.mdx` to be more similar to [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx), but regarding the `Overview.ipynb` notebook I was planning to create a PR in https://github.com/huggingface/notebooks to add it there, does that make sense @stevhliu? And then to create a `README.md` in this repository in `notebooks/` as `transformers` does to point to the related notebooks hosted in https://github.com/huggingface/notebooks, WDYT? 🤗 ",
"Hi @stevhliu thanks for the feedback! Already applied your suggestions, I'll also add the pointers to both audio and image datasets in the \"What's next\" section.\r\n\r\nBesides that, let me know if I can help with the notebook being hosted in `huggingface/notebooks` instead, and I'll happily do so!",
"Thanks a lot for the detailed feedback @mariosasko, I'll apply the changes today!",
"> Besides that, let me know if I can help with the notebook being hosted in `huggingface/notebooks` instead, and I'll happily do so!\r\n\r\nAwesome! If you're up for it, I think you can go ahead and open a PR with the changes I've outlined [here](https://github.com/huggingface/datasets/pull/5902#pullrequestreview-1475236887) to add the notebook building workflow. ",
"Hi @stevhliu @mariosasko, sorry for the delay I had a busy week, I'll tackle this either today or tomorrow to ideally close it before the weekend, thanks again for the help and guidance 😄 ",
"Hi guys @stevhliu @mariosasko sorry for the delay! I've resolved all the comments and applied your reviews 👍🏻 Let me know if this works and we can finally close this PR, thanks for the help in the meantime!",
"> Thanks for iterating on this and wrapping it up! 🤗\r\n\r\nNo need to! Always a pleasure to collaborate with you guys 🤗 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009814 / 0.011353 (-0.001539) | 0.004632 / 0.011008 (-0.006376) | 0.103059 / 0.038508 (0.064551) | 0.090277 / 0.023109 (0.067167) | 0.389344 / 0.275898 (0.113446) | 0.464536 / 0.323480 (0.141056) | 0.008196 / 0.007986 (0.000210) | 0.003872 / 0.004328 (-0.000457) | 0.081912 / 0.004250 (0.077662) | 0.073197 / 0.037052 (0.036145) | 0.407545 / 0.258489 (0.149056) | 0.458035 / 0.293841 (0.164194) | 0.037485 / 0.128546 (-0.091061) | 0.010141 / 0.075646 (-0.065505) | 0.365998 / 0.419271 (-0.053273) | 0.065218 / 0.043533 (0.021685) | 0.414091 / 0.255139 (0.158952) | 0.435617 / 0.283200 (0.152417) | 0.028850 / 0.141683 (-0.112833) | 1.883510 / 1.452155 (0.431355) | 1.979986 / 1.492716 (0.487269) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236623 / 0.018006 (0.218616) | 0.467128 / 0.000490 (0.466638) | 0.008273 / 0.000200 (0.008074) | 0.000699 / 0.000054 (0.000645) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033061 / 0.037411 (-0.004350) | 0.101381 / 0.014526 (0.086856) | 0.110862 / 0.176557 (-0.065695) | 0.180982 / 0.737135 (-0.556154) | 0.113791 / 0.296338 (-0.182548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450805 / 0.215209 (0.235596) | 4.478374 / 2.077655 (2.400719) | 2.190814 / 1.504120 (0.686694) | 1.976726 / 1.541195 (0.435532) | 2.078527 / 1.468490 
(0.610037) | 0.569150 / 4.584777 (-4.015627) | 4.557790 / 3.745712 (0.812078) | 3.794964 / 5.269862 (-1.474898) | 2.555689 / 4.565676 (-2.009987) | 0.067380 / 0.424275 (-0.356896) | 0.008741 / 0.007607 (0.001134) | 0.536913 / 0.226044 (0.310868) | 5.364588 / 2.268929 (3.095659) | 2.725602 / 55.444624 (-52.719022) | 2.332012 / 6.876477 (-4.544465) | 2.560550 / 2.142072 (0.418477) | 0.672490 / 4.805227 (-4.132738) | 0.153629 / 6.500664 (-6.347035) | 0.070583 / 0.075469 (-0.004886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.620083 / 1.841788 (-0.221704) | 23.094248 / 8.074308 (15.019939) | 17.797625 / 10.191392 (7.606233) | 0.167993 / 0.680424 (-0.512430) | 0.021151 / 0.534201 (-0.513050) | 0.470216 / 0.579283 (-0.109067) | 0.515492 / 0.434364 (0.081128) | 0.666359 / 0.540337 (0.126021) | 0.772928 / 1.386936 (-0.614008) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007853 / 0.011353 (-0.003500) | 0.004627 / 0.011008 (-0.006381) | 0.079803 / 0.038508 (0.041295) | 0.091562 / 0.023109 (0.068453) | 0.488537 / 0.275898 (0.212639) | 0.579207 / 0.323480 (0.255728) | 0.006579 / 0.007986 (-0.001406) | 0.003946 / 0.004328 (-0.000382) | 0.080224 / 0.004250 (0.075973) | 0.074499 / 0.037052 (0.037446) | 0.488292 / 0.258489 (0.229803) | 0.569246 / 0.293841 (0.275405) | 0.039994 / 0.128546 (-0.088553) | 0.012867 / 0.075646 (-0.062780) | 0.092563 / 0.419271 (-0.326709) | 0.061656 / 0.043533 (0.018124) | 0.488271 / 0.255139 (0.233132) | 0.550651 / 0.283200 (0.267451) | 0.032078 / 0.141683 (-0.109605) | 1.874440 / 1.452155 (0.422286) | 1.973480 / 1.492716 (0.480763) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238789 / 0.018006 (0.220782) | 0.460237 / 0.000490 (0.459748) | 0.000500 / 0.000200 (0.000300) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034961 / 0.037411 (-0.002450) | 0.102696 / 0.014526 (0.088170) | 0.117772 / 0.176557 (-0.058784) | 0.183865 / 0.737135 (-0.553270) | 0.119216 / 0.296338 (-0.177122) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.528894 / 0.215209 (0.313685) | 5.303954 / 2.077655 (3.226300) | 2.897505 / 1.504120 (1.393385) | 2.475898 / 1.541195 (0.934703) | 2.553479 / 1.468490 (1.084988) | 0.625847 / 4.584777 (-3.958930) | 4.656595 / 3.745712 (0.910882) | 3.745170 / 5.269862 (-1.524691) | 2.470922 / 4.565676 (-2.094755) | 0.066908 / 0.424275 (-0.357367) | 0.009172 / 0.007607 (0.001565) | 0.572695 / 0.226044 (0.346650) | 5.753428 / 2.268929 (3.484499) | 3.033226 / 55.444624 (-52.411398) | 2.677280 / 6.876477 (-4.199197) | 2.908857 / 2.142072 (0.766785) | 0.681595 / 4.805227 (-4.123632) | 0.154602 / 6.500664 (-6.346062) | 0.072608 / 0.075469 (-0.002861) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.738550 / 1.841788 (-0.103237) | 25.090637 / 8.074308 (17.016329) | 18.371478 / 10.191392 (8.180086) | 0.207357 / 0.680424 (-0.473067) | 0.023396 / 0.534201 (-0.510805) | 0.505663 / 0.579283 (-0.073620) | 0.503137 / 0.434364 (0.068773) | 0.598015 / 0.540337 (0.057678) | 0.714122 / 1.386936 (-0.672814) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#971e33ec81b1013654e845b1c2e33cb43cda5558 \"CML watermark\")\n",
"Just as a heads up @mariosasko, the `quickstart.ipynb` Jupyter Notebook has been built at https://github.com/huggingface/notebooks/blob/main/datasets_doc/en/quickstart.ipynb, while the URLs in here point to https://github.com/huggingface/notebooks/blob/main/datasets_doc/quickstart.ipynb instead, should we update that?"
] | 2023-05-26T10:25:01 | 2023-07-25T13:50:06 | 2023-07-25T13:38:33 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5902",
"html_url": "https://github.com/huggingface/datasets/pull/5902",
"diff_url": "https://github.com/huggingface/datasets/pull/5902.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5902.patch",
"merged_at": "2023-07-25T13:38:33"
} | ## What's in this PR?
This PR solves #5887, where there was a mismatch between the tokenizer and the model used: the tokenizer was `bert-base-cased` while the model was `distilbert-base-cased`, both for the PyTorch and TensorFlow alternatives. Since DistilBERT doesn't use/need `token_type_ids`, `**batch` was failing, as the batch contained `input_ids`, `attention_mask`, `token_type_ids`, `start_positions`, and `end_positions`, and `token_type_ids` was not expected by the model.
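As a hedged sketch of the fix described (the checkpoint name and inputs below are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# use the same checkpoint for tokenizer and model, so the tokenizer only
# produces inputs the model accepts (DistilBERT tokenizers emit no token_type_ids)
checkpoint = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

batch = tokenizer("Who wrote it?", "It was written by Jane.", return_tensors="pt")
outputs = model(**batch)  # no unexpected token_type_ids in `batch`
```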
Besides that, `seqeval` was being used at the end to evaluate the model predictions, but only `evaluate` was being installed, so I've also included the `seqeval` installation.
Finally, I've re-run everything in Google Colab, and every cell executed successfully!
## What was done on top of the original PR?
Based on the comments from @mariosasko and @stevhliu, I've updated the contents of this PR to also review `quickstart.mdx` and update what was needed. Besides that, we may eventually move the `Overview.ipynb` notebook to `huggingface/notebooks`, following @stevhliu's suggestions. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5902/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5902/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5901/comments | https://api.github.com/repos/huggingface/datasets/issues/5901/events | https://github.com/huggingface/datasets/pull/5901 | 1,727,179,016 | PR_kwDODunzps5Rarux | 5,901 | Make prepare_split more robust if errors in metadata dataset_info splits | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008809 / 0.011353 (-0.002544) | 0.005641 / 0.011008 (-0.005367) | 0.124986 / 0.038508 (0.086477) | 0.037311 / 0.023109 (0.014202) | 0.388915 / 0.275898 (0.113017) | 0.430123 / 0.323480 (0.106643) | 0.007447 / 0.007986 (-0.000538) | 0.009593 / 0.004328 (0.005264) | 0.099148 / 0.004250 (0.094898) | 0.052393 / 0.037052 (0.015341) | 0.399779 / 0.258489 (0.141290) | 0.439109 / 0.293841 (0.145268) | 0.043409 / 0.128546 (-0.085137) | 0.016286 / 0.075646 (-0.059360) | 0.431198 / 0.419271 (0.011927) | 0.064932 / 0.043533 (0.021400) | 0.390650 / 0.255139 (0.135511) | 0.432883 / 0.283200 (0.149684) | 0.110978 / 0.141683 (-0.030705) | 1.796121 / 1.452155 (0.343967) | 1.960097 / 1.492716 (0.467381) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286292 / 0.018006 (0.268286) | 0.659495 / 0.000490 (0.659005) | 0.008294 / 0.000200 (0.008094) | 0.000485 / 0.000054 (0.000431) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029325 / 0.037411 (-0.008086) | 0.125454 / 0.014526 (0.110928) | 0.136459 / 0.176557 (-0.040097) | 0.221075 / 0.737135 (-0.516060) | 0.140281 / 0.296338 (-0.156058) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602401 / 0.215209 (0.387192) | 6.124553 / 2.077655 (4.046898) | 2.453141 / 1.504120 (0.949021) | 2.038611 / 1.541195 (0.497416) | 2.073611 / 1.468490 
(0.605121) | 0.938040 / 4.584777 (-3.646737) | 5.755972 / 3.745712 (2.010260) | 4.450935 / 5.269862 (-0.818926) | 2.337219 / 4.565676 (-2.228457) | 0.107118 / 0.424275 (-0.317157) | 0.015201 / 0.007607 (0.007594) | 0.785833 / 0.226044 (0.559788) | 7.732984 / 2.268929 (5.464055) | 3.236892 / 55.444624 (-52.207733) | 2.696402 / 6.876477 (-4.180074) | 2.805036 / 2.142072 (0.662964) | 1.108612 / 4.805227 (-3.696616) | 0.221067 / 6.500664 (-6.279597) | 0.085538 / 0.075469 (0.010068) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.600311 / 1.841788 (-0.241476) | 18.528118 / 8.074308 (10.453810) | 21.107199 / 10.191392 (10.915807) | 0.219489 / 0.680424 (-0.460934) | 0.028927 / 0.534201 (-0.505274) | 0.503446 / 0.579283 (-0.075837) | 0.619833 / 0.434364 (0.185469) | 0.582454 / 0.540337 (0.042117) | 0.709154 / 1.386936 (-0.677782) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008516 / 0.011353 (-0.002837) | 0.006090 / 0.011008 (-0.004918) | 0.104574 / 0.038508 (0.066066) | 0.042676 / 0.023109 (0.019566) | 0.458623 / 0.275898 (0.182725) | 0.568479 / 0.323480 (0.244999) | 0.008374 / 0.007986 (0.000389) | 0.004677 / 0.004328 (0.000349) | 0.105946 / 0.004250 (0.101695) | 0.055256 / 0.037052 (0.018204) | 0.511036 / 0.258489 (0.252547) | 0.598383 / 0.293841 (0.304542) | 0.043612 / 0.128546 (-0.084934) | 0.014707 / 0.075646 (-0.060940) | 0.116350 / 0.419271 (-0.302921) | 0.061413 / 0.043533 (0.017880) | 0.477785 / 0.255139 (0.222646) | 0.542643 / 0.283200 (0.259443) | 0.120431 / 0.141683 (-0.021252) | 1.994083 / 1.452155 (0.541928) | 2.100600 / 1.492716 (0.607883) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298480 / 0.018006 (0.280474) | 0.601921 / 0.000490 (0.601432) | 0.000445 / 0.000200 (0.000245) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034784 / 0.037411 (-0.002627) | 0.133555 / 0.014526 (0.119029) | 0.138541 / 0.176557 (-0.038015) | 0.203114 / 0.737135 (-0.534021) | 0.153477 / 0.296338 (-0.142861) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.780484 / 0.215209 (0.565275) | 7.150876 / 2.077655 (5.073222) | 3.168590 / 1.504120 (1.664470) | 2.698746 / 1.541195 (1.157552) | 2.695678 / 1.468490 (1.227188) | 1.037706 / 4.584777 (-3.547071) | 5.672631 / 3.745712 (1.926918) | 2.798137 / 5.269862 (-2.471725) | 1.738588 / 4.565676 (-2.827088) | 0.111160 / 0.424275 (-0.313115) | 0.013878 / 0.007607 (0.006271) | 0.800191 / 0.226044 (0.574146) | 8.546676 / 2.268929 (6.277748) | 4.116852 / 55.444624 (-51.327773) | 3.331271 / 6.876477 (-3.545206) | 3.307410 / 2.142072 (1.165337) | 1.191019 / 4.805227 (-3.614208) | 0.248953 / 6.500664 (-6.251711) | 0.086632 / 0.075469 (0.011162) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.795057 / 1.841788 (-0.046730) | 18.038785 / 8.074308 (9.964476) | 21.865566 / 10.191392 (11.674174) | 0.211058 / 0.680424 (-0.469366) | 0.026956 / 0.534201 (-0.507245) | 0.518855 / 0.579283 (-0.060428) | 0.618105 / 0.434364 (0.183741) | 0.569227 / 0.540337 (0.028889) | 0.705431 / 1.386936 (-0.681505) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#074925b9b7c1dfd33b8675aa99c07cc26375665c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008900 / 0.011353 (-0.002453) | 0.005726 / 0.011008 (-0.005283) | 0.131747 / 0.038508 (0.093239) | 0.040585 / 0.023109 (0.017476) | 0.420531 / 0.275898 (0.144633) | 0.459430 / 0.323480 (0.135950) | 0.007642 / 0.007986 (-0.000344) | 0.006750 / 0.004328 (0.002421) | 0.099147 / 0.004250 (0.094897) | 0.055852 / 0.037052 (0.018799) | 0.423653 / 0.258489 (0.165164) | 0.453304 / 0.293841 (0.159463) | 0.045247 / 0.128546 (-0.083300) | 0.016034 / 0.075646 (-0.059612) | 0.443115 / 0.419271 (0.023843) | 0.078853 / 0.043533 (0.035320) | 0.417508 / 0.255139 (0.162369) | 0.440936 / 0.283200 (0.157736) | 0.115603 / 0.141683 (-0.026080) | 1.844610 / 1.452155 (0.392456) | 1.998497 / 1.492716 (0.505781) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272622 / 0.018006 (0.254616) | 0.598045 / 0.000490 (0.597556) | 0.007088 / 0.000200 (0.006888) | 0.000159 / 0.000054 (0.000105) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032976 / 0.037411 (-0.004436) | 0.143970 / 0.014526 (0.129444) | 0.142172 / 0.176557 (-0.034384) | 0.216747 / 0.737135 (-0.520389) | 0.146004 / 0.296338 (-0.150334) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.687507 / 0.215209 (0.472298) | 6.549524 / 2.077655 (4.471870) | 2.924142 / 1.504120 (1.420022) | 2.504471 / 1.541195 (0.963277) | 2.496280 / 1.468490 
(1.027790) | 0.959054 / 4.584777 (-3.625723) | 5.851742 / 3.745712 (2.106030) | 4.983357 / 5.269862 (-0.286504) | 2.627403 / 4.565676 (-1.938274) | 0.112955 / 0.424275 (-0.311320) | 0.016206 / 0.007607 (0.008599) | 0.819158 / 0.226044 (0.593114) | 8.416949 / 2.268929 (6.148020) | 3.776765 / 55.444624 (-51.667859) | 3.002397 / 6.876477 (-3.874080) | 3.158852 / 2.142072 (1.016779) | 1.197099 / 4.805227 (-3.608129) | 0.280654 / 6.500664 (-6.220010) | 0.099471 / 0.075469 (0.024002) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.687007 / 1.841788 (-0.154781) | 19.411976 / 8.074308 (11.337668) | 22.053482 / 10.191392 (11.862090) | 0.228038 / 0.680424 (-0.452386) | 0.028226 / 0.534201 (-0.505975) | 0.527695 / 0.579283 (-0.051588) | 0.635911 / 0.434364 (0.201547) | 0.618205 / 0.540337 (0.077868) | 0.735164 / 1.386936 (-0.651772) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009450 / 0.011353 (-0.001903) | 0.006566 / 0.011008 (-0.004442) | 0.108919 / 0.038508 (0.070411) | 0.050010 / 0.023109 (0.026900) | 0.505168 / 0.275898 (0.229270) | 0.552190 / 0.323480 (0.228710) | 0.007569 / 0.007986 (-0.000417) | 0.006807 / 0.004328 (0.002478) | 0.116621 / 0.004250 (0.112371) | 0.060374 / 0.037052 (0.023321) | 0.515165 / 0.258489 (0.256676) | 0.572125 / 0.293841 (0.278284) | 0.046561 / 0.128546 (-0.081986) | 0.016159 / 0.075646 (-0.059487) | 0.114568 / 0.419271 (-0.304704) | 0.064689 / 0.043533 (0.021157) | 0.497870 / 0.255139 (0.242731) | 0.567332 / 0.283200 (0.284132) | 0.126254 / 0.141683 (-0.015429) | 1.954074 / 1.452155 (0.501919) | 2.057682 / 1.492716 (0.564966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.013857 / 0.018006 (-0.004149) | 0.601561 / 0.000490 (0.601071) | 0.002897 / 0.000200 (0.002697) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038480 / 0.037411 (0.001069) | 0.142480 / 0.014526 (0.127954) | 0.160479 / 0.176557 (-0.016077) | 0.217942 / 0.737135 (-0.519194) | 0.159908 / 0.296338 (-0.136431) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.697926 / 0.215209 (0.482717) | 6.869754 / 2.077655 (4.792100) | 3.125463 / 1.504120 (1.621343) | 2.729123 / 1.541195 (1.187928) | 2.855747 / 1.468490 (1.387257) | 1.015345 / 4.584777 (-3.569432) | 5.839176 / 3.745712 (2.093463) | 5.019678 / 5.269862 (-0.250184) | 2.080489 / 4.565676 (-2.485187) | 0.118884 / 0.424275 (-0.305391) | 0.021381 / 0.007607 (0.013774) | 0.877847 / 0.226044 (0.651803) | 8.714561 / 2.268929 (6.445633) | 3.933399 / 55.444624 (-51.511226) | 3.281809 / 6.876477 (-3.594668) | 3.330342 / 2.142072 (1.188269) | 1.235005 / 4.805227 (-3.570222) | 0.239686 / 6.500664 (-6.260978) | 0.093546 / 0.075469 (0.018077) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.787916 / 1.841788 (-0.053872) | 20.094828 / 8.074308 (12.020520) | 22.902101 / 10.191392 (12.710709) | 0.249315 / 0.680424 (-0.431109) | 0.028058 / 0.534201 (-0.506143) | 0.524960 / 0.579283 (-0.054323) | 0.643881 / 0.434364 (0.209517) | 0.621203 / 0.540337 (0.080866) | 0.723337 / 1.386936 (-0.663599) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#074925b9b7c1dfd33b8675aa99c07cc26375665c \"CML watermark\")\n"
] | 2023-05-26T08:48:22 | 2023-06-02T06:06:38 | 2023-06-01T13:39:40 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5901",
"html_url": "https://github.com/huggingface/datasets/pull/5901",
"diff_url": "https://github.com/huggingface/datasets/pull/5901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5901.patch",
"merged_at": "2023-06-01T13:39:39"
} | This PR uses `split_generator.split_info` as the default value for `split_info` if any exception is raised while trying to get the `split_generator.name` entry from `self.info.splits` (this may happen if there is any error in the metadata dataset_info splits).
Please note that `split_info` is only used by the logger.
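Schematically, the change amounts to something like the following (a hedged sketch of the fallback, not the exact diff; the helper name and arguments are illustrative):
```python
def _pick_split_info(builder_info_splits, split_generator):
    """Hypothetical helper sketching the fallback, not the actual datasets code."""
    try:
        # May raise if the metadata dataset_info splits are malformed.
        return builder_info_splits[split_generator.name]
    except Exception:
        # Fall back to the split info carried by the generator itself;
        # this value is only used for logging, per the note above.
        return split_generator.split_info
```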
This fixes #5895 when `verification_mode="no_checks"` is passed:
```python
ds = load_dataset(
"ArmelR/stack-exchange-instruction",
data_dir="data/finetune",
split="train",
verification_mode="no_checks",
revision="c609f1caade5cfbf3b9fe9cfa17d7cb000b457bd",
)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5901/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5900/comments | https://api.github.com/repos/huggingface/datasets/issues/5900/events | https://github.com/huggingface/datasets/pull/5900 | 1,727,129,617 | PR_kwDODunzps5RahTR | 5,900 | Fix minor typo in docs loading.mdx | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006763 / 0.011353 (-0.004589) | 0.004548 / 0.011008 (-0.006460) | 0.095631 / 0.038508 (0.057123) | 0.034046 / 0.023109 (0.010936) | 0.298064 / 0.275898 (0.022166) | 0.330391 / 0.323480 (0.006911) | 0.006058 / 0.007986 (-0.001928) | 0.004163 / 0.004328 (-0.000165) | 0.073260 / 0.004250 (0.069010) | 0.048885 / 0.037052 (0.011832) | 0.304651 / 0.258489 (0.046162) | 0.345882 / 0.293841 (0.052042) | 0.028061 / 0.128546 (-0.100485) | 0.008823 / 0.075646 (-0.066823) | 0.325620 / 0.419271 (-0.093651) | 0.064480 / 0.043533 (0.020948) | 0.303373 / 0.255139 (0.048234) | 0.321672 / 0.283200 (0.038472) | 0.116353 / 0.141683 (-0.025330) | 1.442327 / 1.452155 (-0.009827) | 1.567553 / 1.492716 (0.074837) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213042 / 0.018006 (0.195035) | 0.457646 / 0.000490 (0.457156) | 0.003989 / 0.000200 (0.003789) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028068 / 0.037411 (-0.009344) | 0.114791 / 0.014526 (0.100265) | 0.120870 / 0.176557 (-0.055686) | 0.183006 / 0.737135 (-0.554130) | 0.126772 / 0.296338 (-0.169567) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406438 / 0.215209 (0.191229) | 4.041890 / 2.077655 (1.964235) | 1.839967 / 1.504120 (0.335847) | 1.646857 / 1.541195 (0.105662) | 1.729372 / 1.468490 
(0.260882) | 0.525540 / 4.584777 (-4.059237) | 3.809996 / 3.745712 (0.064284) | 1.842598 / 5.269862 (-3.427263) | 1.062815 / 4.565676 (-3.502862) | 0.065301 / 0.424275 (-0.358974) | 0.012027 / 0.007607 (0.004420) | 0.505459 / 0.226044 (0.279415) | 5.051177 / 2.268929 (2.782248) | 2.354368 / 55.444624 (-53.090256) | 2.035482 / 6.876477 (-4.840995) | 2.120493 / 2.142072 (-0.021579) | 0.642233 / 4.805227 (-4.162994) | 0.141690 / 6.500664 (-6.358974) | 0.063933 / 0.075469 (-0.011536) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.186261 / 1.841788 (-0.655527) | 14.919653 / 8.074308 (6.845345) | 14.534003 / 10.191392 (4.342611) | 0.183165 / 0.680424 (-0.497259) | 0.017581 / 0.534201 (-0.516620) | 0.397284 / 0.579283 (-0.181999) | 0.431363 / 0.434364 (-0.003001) | 0.510774 / 0.540337 (-0.029564) | 0.614421 / 1.386936 (-0.772516) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006682 / 0.011353 (-0.004671) | 0.004558 / 0.011008 (-0.006450) | 0.076272 / 0.038508 (0.037764) | 0.034285 / 0.023109 (0.011176) | 0.395594 / 0.275898 (0.119696) | 0.402702 / 0.323480 (0.079222) | 0.006093 / 0.007986 (-0.001893) | 0.005538 / 0.004328 (0.001209) | 0.075797 / 0.004250 (0.071547) | 0.051638 / 0.037052 (0.014585) | 0.396071 / 0.258489 (0.137582) | 0.409282 / 0.293841 (0.115441) | 0.028193 / 0.128546 (-0.100354) | 0.008827 / 0.075646 (-0.066819) | 0.083182 / 0.419271 (-0.336089) | 0.047605 / 0.043533 (0.004072) | 0.391148 / 0.255139 (0.136009) | 0.386784 / 0.283200 (0.103584) | 0.115303 / 0.141683 (-0.026380) | 1.463666 / 1.452155 (0.011512) | 1.566147 / 1.492716 (0.073431) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213846 / 0.018006 (0.195839) | 0.454769 / 0.000490 (0.454279) | 0.004767 / 0.000200 (0.004567) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030369 / 0.037411 (-0.007042) | 0.115585 / 0.014526 (0.101059) | 0.125181 / 0.176557 (-0.051376) | 0.179247 / 0.737135 (-0.557888) | 0.129336 / 0.296338 (-0.167003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446040 / 0.215209 (0.230831) | 4.462644 / 2.077655 (2.384989) | 2.254511 / 1.504120 (0.750392) | 2.062679 / 1.541195 (0.521484) | 2.180766 / 1.468490 (0.712276) | 0.530928 / 4.584777 (-4.053849) | 3.781392 / 3.745712 (0.035680) | 3.522539 / 5.269862 (-1.747322) | 1.506960 / 4.565676 (-3.058717) | 0.067101 / 0.424275 (-0.357174) | 0.012011 / 0.007607 (0.004404) | 0.546407 / 0.226044 (0.320362) | 5.429894 / 2.268929 (3.160965) | 2.702244 / 55.444624 (-52.742381) | 2.367559 / 6.876477 (-4.508917) | 2.556032 / 2.142072 (0.413960) | 0.639690 / 4.805227 (-4.165538) | 0.144538 / 6.500664 (-6.356126) | 0.067822 / 0.075469 (-0.007647) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284977 / 1.841788 (-0.556811) | 15.546489 / 8.074308 (7.472181) | 14.747519 / 10.191392 (4.556127) | 0.160044 / 0.680424 (-0.520380) | 0.017746 / 0.534201 (-0.516454) | 0.390140 / 0.579283 (-0.189143) | 0.420342 / 0.434364 (-0.014021) | 0.459788 / 0.540337 (-0.080549) | 0.556360 / 1.386936 (-0.830576) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d646afbac7ea3dc0996fa2cb6ffd8a98e158e742 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006493 / 0.011353 (-0.004860) | 0.004532 / 0.011008 (-0.006476) | 0.096509 / 0.038508 (0.058001) | 0.033084 / 0.023109 (0.009974) | 0.297802 / 0.275898 (0.021904) | 0.345880 / 0.323480 (0.022400) | 0.005461 / 0.007986 (-0.002525) | 0.005282 / 0.004328 (0.000954) | 0.073719 / 0.004250 (0.069469) | 0.045035 / 0.037052 (0.007983) | 0.295504 / 0.258489 (0.037015) | 0.345400 / 0.293841 (0.051559) | 0.027880 / 0.128546 (-0.100666) | 0.008804 / 0.075646 (-0.066842) | 0.328017 / 0.419271 (-0.091255) | 0.050169 / 0.043533 (0.006637) | 0.299642 / 0.255139 (0.044503) | 0.313573 / 0.283200 (0.030374) | 0.103359 / 0.141683 (-0.038323) | 1.482145 / 1.452155 (0.029990) | 1.554584 / 1.492716 (0.061867) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212860 / 0.018006 (0.194853) | 0.444823 / 0.000490 (0.444334) | 0.003014 / 0.000200 (0.002815) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026906 / 0.037411 (-0.010506) | 0.108056 / 0.014526 (0.093530) | 0.118721 / 0.176557 (-0.057835) | 0.176646 / 0.737135 (-0.560489) | 0.123285 / 0.296338 (-0.173053) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430157 / 0.215209 (0.214948) | 4.279362 / 2.077655 (2.201707) | 1.999732 / 1.504120 (0.495612) | 1.803787 / 1.541195 (0.262592) | 1.868322 / 1.468490 
(0.399832) | 0.529314 / 4.584777 (-4.055463) | 3.785101 / 3.745712 (0.039389) | 2.812608 / 5.269862 (-2.457254) | 1.373460 / 4.565676 (-3.192216) | 0.066208 / 0.424275 (-0.358067) | 0.012173 / 0.007607 (0.004566) | 0.528716 / 0.226044 (0.302672) | 5.295003 / 2.268929 (3.026074) | 2.450188 / 55.444624 (-52.994437) | 2.114560 / 6.876477 (-4.761917) | 2.268468 / 2.142072 (0.126395) | 0.651706 / 4.805227 (-4.153521) | 0.142185 / 6.500664 (-6.358479) | 0.064862 / 0.075469 (-0.010607) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.184933 / 1.841788 (-0.656854) | 14.503903 / 8.074308 (6.429595) | 13.928965 / 10.191392 (3.737573) | 0.156788 / 0.680424 (-0.523636) | 0.017320 / 0.534201 (-0.516881) | 0.391366 / 0.579283 (-0.187918) | 0.416261 / 0.434364 (-0.018103) | 0.461951 / 0.540337 (-0.078387) | 0.553496 / 1.386936 (-0.833440) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006623 / 0.011353 (-0.004730) | 0.004617 / 0.011008 (-0.006392) | 0.075579 / 0.038508 (0.037071) | 0.033863 / 0.023109 (0.010754) | 0.357097 / 0.275898 (0.081199) | 0.396177 / 0.323480 (0.072697) | 0.005712 / 0.007986 (-0.002274) | 0.004232 / 0.004328 (-0.000097) | 0.074669 / 0.004250 (0.070418) | 0.048253 / 0.037052 (0.011201) | 0.362453 / 0.258489 (0.103964) | 0.405423 / 0.293841 (0.111582) | 0.028709 / 0.128546 (-0.099837) | 0.008884 / 0.075646 (-0.066763) | 0.083042 / 0.419271 (-0.336230) | 0.048074 / 0.043533 (0.004541) | 0.355314 / 0.255139 (0.100175) | 0.372536 / 0.283200 (0.089336) | 0.111548 / 0.141683 (-0.030135) | 1.466353 / 1.452155 (0.014198) | 1.555077 / 1.492716 (0.062361) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217016 / 0.018006 (0.199010) | 0.450145 / 0.000490 (0.449655) | 0.001910 / 0.000200 (0.001711) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029787 / 0.037411 (-0.007624) | 0.115282 / 0.014526 (0.100756) | 0.121962 / 0.176557 (-0.054595) | 0.173424 / 0.737135 (-0.563711) | 0.127519 / 0.296338 (-0.168819) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438211 / 0.215209 (0.223002) | 4.346352 / 2.077655 (2.268697) | 2.140197 / 1.504120 (0.636077) | 1.957890 / 1.541195 (0.416696) | 2.044300 / 1.468490 (0.575810) | 0.527958 / 4.584777 (-4.056819) | 3.805079 / 3.745712 (0.059367) | 2.601763 / 5.269862 (-2.668098) | 1.359469 / 4.565676 (-3.206208) | 0.065358 / 0.424275 (-0.358917) | 0.011571 / 0.007607 (0.003964) | 0.538513 / 0.226044 (0.312469) | 5.363508 / 2.268929 (3.094580) | 2.640495 / 55.444624 (-52.804129) | 2.335930 / 6.876477 (-4.540547) | 2.407782 / 2.142072 (0.265710) | 0.641637 / 4.805227 (-4.163590) | 0.142196 / 6.500664 (-6.358468) | 0.065041 / 0.075469 (-0.010428) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296031 / 1.841788 (-0.545757) | 14.950424 / 8.074308 (6.876115) | 14.371304 / 10.191392 (4.179912) | 0.148157 / 0.680424 (-0.532267) | 0.017506 / 0.534201 (-0.516695) | 0.392037 / 0.579283 (-0.187246) | 0.423238 / 0.434364 (-0.011126) | 0.464608 / 0.540337 (-0.075730) | 0.563876 / 1.386936 (-0.823060) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#04b1d0371408beb0c7bc587a69c382bd8d0bec36 \"CML watermark\")\n"
] | 2023-05-26T08:10:54 | 2023-05-26T09:34:15 | 2023-05-26T09:25:12 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5900",
"html_url": "https://github.com/huggingface/datasets/pull/5900",
"diff_url": "https://github.com/huggingface/datasets/pull/5900.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5900.patch",
"merged_at": "2023-05-26T09:25:12"
} | Minor fix. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5900/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5899/comments | https://api.github.com/repos/huggingface/datasets/issues/5899/events | https://github.com/huggingface/datasets/pull/5899 | 1,726,279,011 | PR_kwDODunzps5RXods | 5,899 | canonicalize data dir in config ID hash | {
"login": "kylrth",
"id": 5044802,
"node_id": "MDQ6VXNlcjUwNDQ4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylrth",
"html_url": "https://github.com/kylrth",
"followers_url": "https://api.github.com/users/kylrth/followers",
"following_url": "https://api.github.com/users/kylrth/following{/other_user}",
"gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylrth/subscriptions",
"organizations_url": "https://api.github.com/users/kylrth/orgs",
"repos_url": "https://api.github.com/users/kylrth/repos",
"events_url": "https://api.github.com/users/kylrth/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylrth/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009137 / 0.011353 (-0.002216) | 0.006119 / 0.011008 (-0.004889) | 0.136530 / 0.038508 (0.098022) | 0.038434 / 0.023109 (0.015325) | 0.427900 / 0.275898 (0.152002) | 0.449757 / 0.323480 (0.126277) | 0.007673 / 0.007986 (-0.000313) | 0.007147 / 0.004328 (0.002818) | 0.108029 / 0.004250 (0.103778) | 0.055072 / 0.037052 (0.018020) | 0.439245 / 0.258489 (0.180756) | 0.477285 / 0.293841 (0.183444) | 0.044838 / 0.128546 (-0.083708) | 0.020814 / 0.075646 (-0.054832) | 0.436098 / 0.419271 (0.016826) | 0.067459 / 0.043533 (0.023926) | 0.427470 / 0.255139 (0.172331) | 0.443260 / 0.283200 (0.160060) | 0.125466 / 0.141683 (-0.016216) | 1.996756 / 1.452155 (0.544601) | 2.100679 / 1.492716 (0.607962) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278407 / 0.018006 (0.260401) | 0.625855 / 0.000490 (0.625365) | 0.005544 / 0.000200 (0.005344) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033495 / 0.037411 (-0.003916) | 0.134718 / 0.014526 (0.120192) | 0.150151 / 0.176557 (-0.026406) | 0.221385 / 0.737135 (-0.515751) | 0.150932 / 0.296338 (-0.145406) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668845 / 0.215209 (0.453636) | 6.678436 / 2.077655 (4.600781) | 2.714074 / 1.504120 (1.209954) | 2.275784 / 1.541195 (0.734589) | 2.332852 / 1.468490 
(0.864361) | 1.014877 / 4.584777 (-3.569900) | 6.086455 / 3.745712 (2.340743) | 2.990029 / 5.269862 (-2.279832) | 1.862236 / 4.565676 (-2.703441) | 0.122179 / 0.424275 (-0.302096) | 0.015706 / 0.007607 (0.008099) | 0.873473 / 0.226044 (0.647429) | 8.580109 / 2.268929 (6.311180) | 3.458360 / 55.444624 (-51.986264) | 2.738801 / 6.876477 (-4.137676) | 2.918428 / 2.142072 (0.776356) | 1.224910 / 4.805227 (-3.580317) | 0.243006 / 6.500664 (-6.257658) | 0.087121 / 0.075469 (0.011652) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.757802 / 1.841788 (-0.083986) | 19.447999 / 8.074308 (11.373691) | 24.518157 / 10.191392 (14.326765) | 0.245013 / 0.680424 (-0.435411) | 0.032290 / 0.534201 (-0.501911) | 0.542043 / 0.579283 (-0.037240) | 0.708154 / 0.434364 (0.273790) | 0.660584 / 0.540337 (0.120247) | 0.794868 / 1.386936 (-0.592068) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009496 / 0.011353 (-0.001857) | 0.005842 / 0.011008 (-0.005166) | 0.112813 / 0.038508 (0.074305) | 0.039120 / 0.023109 (0.016011) | 0.489717 / 0.275898 (0.213819) | 0.532586 / 0.323480 (0.209107) | 0.007681 / 0.007986 (-0.000304) | 0.005337 / 0.004328 (0.001009) | 0.107244 / 0.004250 (0.102994) | 0.056847 / 0.037052 (0.019794) | 0.499447 / 0.258489 (0.240958) | 0.548995 / 0.293841 (0.255154) | 0.058047 / 0.128546 (-0.070499) | 0.015468 / 0.075646 (-0.060179) | 0.124600 / 0.419271 (-0.294671) | 0.060940 / 0.043533 (0.017407) | 0.488370 / 0.255139 (0.233231) | 0.518540 / 0.283200 (0.235341) | 0.124147 / 0.141683 (-0.017536) | 1.902922 / 1.452155 (0.450767) | 2.033519 / 1.492716 (0.540803) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.319527 / 0.018006 (0.301521) | 0.629641 / 0.000490 (0.629152) | 0.000721 / 0.000200 (0.000521) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033150 / 0.037411 (-0.004262) | 0.134250 / 0.014526 (0.119724) | 0.161273 / 0.176557 (-0.015283) | 0.211471 / 0.737135 (-0.525664) | 0.155326 / 0.296338 (-0.141012) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.705244 / 0.215209 (0.490035) | 7.043040 / 2.077655 (4.965386) | 3.308948 / 1.504120 (1.804828) | 2.885050 / 1.541195 (1.343855) | 2.810260 / 1.468490 (1.341770) | 1.027095 / 4.584777 (-3.557682) | 6.111398 / 3.745712 (2.365686) | 5.385545 / 5.269862 (0.115684) | 2.521668 / 4.565676 (-2.044009) | 0.122419 / 0.424275 (-0.301856) | 0.016376 / 0.007607 (0.008768) | 0.830856 / 0.226044 (0.604811) | 8.952199 / 2.268929 (6.683271) | 4.207875 / 55.444624 (-51.236749) | 3.346624 / 6.876477 (-3.529853) | 3.395316 / 2.142072 (1.253244) | 1.351816 / 4.805227 (-3.453411) | 0.303056 / 6.500664 (-6.197608) | 0.098713 / 0.075469 (0.023244) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.841903 / 1.841788 (0.000116) | 20.472125 / 8.074308 (12.397817) | 23.433200 / 10.191392 (13.241808) | 0.242599 / 0.680424 (-0.437825) | 0.030701 / 0.534201 (-0.503500) | 0.541614 / 0.579283 (-0.037669) | 0.657827 / 0.434364 (0.223463) | 0.652448 / 0.540337 (0.112111) | 0.773743 / 1.386936 (-0.613193) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#02ee418831aba68d0be93227bce8b3f42ef8980f \"CML watermark\")\n"
] | 2023-05-25T18:17:10 | 2023-06-02T16:02:15 | 2023-06-02T15:52:04 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5899",
"html_url": "https://github.com/huggingface/datasets/pull/5899",
"diff_url": "https://github.com/huggingface/datasets/pull/5899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5899.patch",
"merged_at": "2023-06-02T15:52:04"
} | Fixes #5871.
The second commit is optional but improves readability.
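A minimal sketch of the idea (the helper below is hypothetical, not the PR's actual code): canonicalize a user-supplied `data_dir` before it is folded into the config ID hash, so that equivalent spellings of the same path resolve to the same cache entry.
```python
import os

def canonicalize_data_dir(data_dir: str) -> str:
    # Hypothetical helper: expanduser resolves "~", realpath resolves
    # symlinks and "..", so "~/data" and "/home/user/data/../data"
    # produce identical strings and therefore identical config hashes.
    return os.path.realpath(os.path.expanduser(data_dir))
``` | {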
"url": "https://api.github.com/repos/huggingface/datasets/issues/5899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5899/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5898/comments | https://api.github.com/repos/huggingface/datasets/issues/5898/events | https://github.com/huggingface/datasets/issues/5898 | 1,726,190,481 | I_kwDODunzps5m45OR | 5,898 | Loading The flores data set for specific language | {
"login": "106AbdulBasit",
"id": 36159918,
"node_id": "MDQ6VXNlcjM2MTU5OTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/36159918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/106AbdulBasit",
"html_url": "https://github.com/106AbdulBasit",
"followers_url": "https://api.github.com/users/106AbdulBasit/followers",
"following_url": "https://api.github.com/users/106AbdulBasit/following{/other_user}",
"gists_url": "https://api.github.com/users/106AbdulBasit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/106AbdulBasit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/106AbdulBasit/subscriptions",
"organizations_url": "https://api.github.com/users/106AbdulBasit/orgs",
"repos_url": "https://api.github.com/users/106AbdulBasit/repos",
"events_url": "https://api.github.com/users/106AbdulBasit/events{/privacy}",
"received_events_url": "https://api.github.com/users/106AbdulBasit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"got that the syntax is like this\r\n\r\ndataset = load_dataset(\"facebook/flores\", \"ace_Arab\")"
] | 2023-05-25T17:08:55 | 2023-05-25T17:21:38 | 2023-05-25T17:21:37 | NONE | null | null | null | ### Describe the bug
I am trying to load the Flores dataset.
The code that is given is:
```
from datasets import load_dataset
dataset = load_dataset("facebook/flores")
```
This gives a config-name error:
"ValueError: Config name is missing"
Now if I add some config, it gives me this error:
"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''."
How can I load the data for a specific language? I couldn't find any tutorial. Can anyone help me out?
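For reference, the resolution given in the comment above is to pass the language code as the config name (the second positional argument) rather than appending it to the repo id. A minimal sketch:
```
from datasets import load_dataset

# Pass the language code as the config name (second positional argument),
# not as part of the repo id string.
dataset = load_dataset("facebook/flores", "ace_Arab")
```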
### Steps to reproduce the bug
Step one: load the dataset
`from datasets import load_dataset
dataset = load_dataset("facebook/flores")`
This gives the config error.
Once a config is given, it gives this error:
"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''."
### Expected behavior
The dataset should load, but instead I am receiving an error.
### Environment info
datasets, Python | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5898/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5897/comments | https://api.github.com/repos/huggingface/datasets/issues/5897/events | https://github.com/huggingface/datasets/pull/5897 | 1,726,135,494 | PR_kwDODunzps5RXJaY | 5,897 | Fix `FixedSizeListArray` casting | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006213 / 0.011353 (-0.005140) | 0.004230 / 0.011008 (-0.006778) | 0.098014 / 0.038508 (0.059506) | 0.028659 / 0.023109 (0.005550) | 0.303272 / 0.275898 (0.027374) | 0.337186 / 0.323480 (0.013706) | 0.005126 / 0.007986 (-0.002860) | 0.003563 / 0.004328 (-0.000765) | 0.075295 / 0.004250 (0.071045) | 0.036836 / 0.037052 (-0.000216) | 0.309612 / 0.258489 (0.051123) | 0.346484 / 0.293841 (0.052643) | 0.025714 / 0.128546 (-0.102832) | 0.008562 / 0.075646 (-0.067085) | 0.323475 / 0.419271 (-0.095796) | 0.044072 / 0.043533 (0.000539) | 0.308261 / 0.255139 (0.053122) | 0.330903 / 0.283200 (0.047703) | 0.091805 / 0.141683 (-0.049878) | 1.517011 / 1.452155 (0.064856) | 1.570815 / 1.492716 (0.078099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211265 / 0.018006 (0.193259) | 0.438860 / 0.000490 (0.438370) | 0.001127 / 0.000200 (0.000927) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023337 / 0.037411 (-0.014074) | 0.096243 / 0.014526 (0.081717) | 0.103529 / 0.176557 (-0.073028) | 0.161171 / 0.737135 (-0.575964) | 0.105904 / 0.296338 (-0.190435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417042 / 0.215209 (0.201833) | 4.155067 / 2.077655 (2.077412) | 1.879657 / 1.504120 (0.375537) | 1.669341 / 1.541195 (0.128146) | 1.717623 / 1.468490 
(0.249133) | 0.556246 / 4.584777 (-4.028531) | 3.484535 / 3.745712 (-0.261177) | 1.728845 / 5.269862 (-3.541017) | 0.997477 / 4.565676 (-3.568199) | 0.068355 / 0.424275 (-0.355920) | 0.012445 / 0.007607 (0.004837) | 0.519023 / 0.226044 (0.292978) | 5.173506 / 2.268929 (2.904577) | 2.332435 / 55.444624 (-53.112190) | 1.986348 / 6.876477 (-4.890129) | 2.076885 / 2.142072 (-0.065187) | 0.656738 / 4.805227 (-4.148489) | 0.135308 / 6.500664 (-6.365356) | 0.065486 / 0.075469 (-0.009984) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208874 / 1.841788 (-0.632914) | 13.994200 / 8.074308 (5.919892) | 14.160978 / 10.191392 (3.969586) | 0.146009 / 0.680424 (-0.534415) | 0.016573 / 0.534201 (-0.517628) | 0.356082 / 0.579283 (-0.223202) | 0.387766 / 0.434364 (-0.046598) | 0.419130 / 0.540337 (-0.121208) | 0.508634 / 1.386936 (-0.878302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006238 / 0.011353 (-0.005115) | 0.004221 / 0.011008 (-0.006788) | 0.075155 / 0.038508 (0.036646) | 0.028491 / 0.023109 (0.005382) | 0.355606 / 0.275898 (0.079708) | 0.388986 / 0.323480 (0.065506) | 0.005941 / 0.007986 (-0.002044) | 0.003510 / 0.004328 (-0.000819) | 0.074905 / 0.004250 (0.070655) | 0.039111 / 0.037052 (0.002059) | 0.358492 / 0.258489 (0.100003) | 0.398763 / 0.293841 (0.104922) | 0.025535 / 0.128546 (-0.103012) | 0.008580 / 0.075646 (-0.067067) | 0.080461 / 0.419271 (-0.338811) | 0.041381 / 0.043533 (-0.002152) | 0.355498 / 0.255139 (0.100359) | 0.379163 / 0.283200 (0.095963) | 0.096450 / 0.141683 (-0.045233) | 1.503248 / 1.452155 (0.051093) | 1.595616 / 1.492716 (0.102900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238065 / 0.018006 (0.220058) | 0.422800 / 0.000490 (0.422311) | 0.002274 / 0.000200 (0.002074) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025746 / 0.037411 (-0.011665) | 0.103319 / 0.014526 (0.088793) | 0.112155 / 0.176557 (-0.064401) | 0.163034 / 0.737135 (-0.574101) | 0.113377 / 0.296338 (-0.182962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440522 / 0.215209 (0.225313) | 4.398123 / 2.077655 (2.320468) | 2.143538 / 1.504120 (0.639418) | 1.946084 / 1.541195 (0.404890) | 1.996556 / 1.468490 (0.528066) | 0.550108 / 4.584777 (-4.034669) | 3.455774 / 3.745712 (-0.289938) | 2.862474 / 5.269862 (-2.407387) | 1.213446 / 4.565676 (-3.352230) | 0.067987 / 0.424275 (-0.356288) | 0.012413 / 0.007607 (0.004806) | 0.543990 / 0.226044 (0.317945) | 5.454807 / 2.268929 (3.185879) | 2.669195 / 55.444624 (-52.775429) | 2.332948 / 6.876477 (-4.543528) | 2.383870 / 2.142072 (0.241797) | 0.652017 / 4.805227 (-4.153210) | 0.135508 / 6.500664 (-6.365156) | 0.068238 / 0.075469 (-0.007231) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322669 / 1.841788 (-0.519118) | 14.368136 / 8.074308 (6.293828) | 14.167431 / 10.191392 (3.976039) | 0.159371 / 0.680424 (-0.521052) | 0.016638 / 0.534201 (-0.517563) | 0.357106 / 0.579283 (-0.222177) | 0.392491 / 0.434364 (-0.041873) | 0.419458 / 0.540337 (-0.120880) | 0.504662 / 1.386936 (-0.882274) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bf764819ba6754cb7edf15899db517be0548676f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006296 / 0.011353 (-0.005057) | 0.004185 / 0.011008 (-0.006823) | 0.096170 / 0.038508 (0.057662) | 0.029212 / 0.023109 (0.006102) | 0.315356 / 0.275898 (0.039458) | 0.335214 / 0.323480 (0.011734) | 0.005108 / 0.007986 (-0.002877) | 0.003634 / 0.004328 (-0.000694) | 0.074186 / 0.004250 (0.069936) | 0.038716 / 0.037052 (0.001663) | 0.311041 / 0.258489 (0.052551) | 0.341202 / 0.293841 (0.047361) | 0.025584 / 0.128546 (-0.102962) | 0.008499 / 0.075646 (-0.067148) | 0.318660 / 0.419271 (-0.100611) | 0.043745 / 0.043533 (0.000212) | 0.314824 / 0.255139 (0.059685) | 0.328117 / 0.283200 (0.044917) | 0.093425 / 0.141683 (-0.048258) | 1.478732 / 1.452155 (0.026578) | 1.531743 / 1.492716 (0.039027) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203484 / 0.018006 (0.185478) | 0.416131 / 0.000490 (0.415641) | 0.007352 / 0.000200 (0.007152) | 0.000211 / 0.000054 (0.000156) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022908 / 0.037411 (-0.014503) | 0.098641 / 0.014526 (0.084115) | 0.103426 / 0.176557 (-0.073131) | 0.161658 / 0.737135 (-0.575477) | 0.106506 / 0.296338 (-0.189832) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430781 / 0.215209 (0.215572) | 4.315677 / 2.077655 (2.238022) | 2.022302 / 1.504120 (0.518182) | 1.832043 / 1.541195 (0.290849) | 1.789302 / 1.468490 
(0.320812) | 0.560484 / 4.584777 (-4.024293) | 3.448204 / 3.745712 (-0.297508) | 1.725016 / 5.269862 (-3.544846) | 1.002649 / 4.565676 (-3.563027) | 0.068480 / 0.424275 (-0.355795) | 0.012617 / 0.007607 (0.005010) | 0.532291 / 0.226044 (0.306246) | 5.319352 / 2.268929 (3.050423) | 2.520730 / 55.444624 (-52.923894) | 2.213881 / 6.876477 (-4.662596) | 2.352477 / 2.142072 (0.210404) | 0.662516 / 4.805227 (-4.142711) | 0.136481 / 6.500664 (-6.364183) | 0.066597 / 0.075469 (-0.008872) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224537 / 1.841788 (-0.617251) | 13.849920 / 8.074308 (5.775612) | 14.026358 / 10.191392 (3.834966) | 0.131018 / 0.680424 (-0.549405) | 0.016756 / 0.534201 (-0.517445) | 0.358091 / 0.579283 (-0.221192) | 0.397709 / 0.434364 (-0.036655) | 0.450024 / 0.540337 (-0.090314) | 0.542609 / 1.386936 (-0.844327) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006179 / 0.011353 (-0.005174) | 0.004145 / 0.011008 (-0.006863) | 0.077482 / 0.038508 (0.038974) | 0.028005 / 0.023109 (0.004896) | 0.400010 / 0.275898 (0.124112) | 0.408206 / 0.323480 (0.084726) | 0.005049 / 0.007986 (-0.002937) | 0.003608 / 0.004328 (-0.000721) | 0.076841 / 0.004250 (0.072590) | 0.036714 / 0.037052 (-0.000338) | 0.406020 / 0.258489 (0.147531) | 0.412392 / 0.293841 (0.118551) | 0.025626 / 0.128546 (-0.102920) | 0.008560 / 0.075646 (-0.067087) | 0.084088 / 0.419271 (-0.335183) | 0.039707 / 0.043533 (-0.003826) | 0.396909 / 0.255139 (0.141770) | 0.403623 / 0.283200 (0.120424) | 0.095137 / 0.141683 (-0.046546) | 1.515670 / 1.452155 (0.063515) | 1.568379 / 1.492716 (0.075662) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.181802 / 0.018006 (0.163795) | 0.408778 / 0.000490 (0.408289) | 0.000393 / 0.000200 (0.000193) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025940 / 0.037411 (-0.011471) | 0.099992 / 0.014526 (0.085466) | 0.106280 / 0.176557 (-0.070276) | 0.161729 / 0.737135 (-0.575406) | 0.108625 / 0.296338 (-0.187713) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459802 / 0.215209 (0.244593) | 4.603002 / 2.077655 (2.525347) | 2.406851 / 1.504120 (0.902732) | 2.265422 / 1.541195 (0.724227) | 2.306305 / 1.468490 (0.837815) | 0.553903 / 4.584777 (-4.030874) | 3.482052 / 3.745712 (-0.263660) | 2.969855 / 5.269862 (-2.300007) | 1.309285 / 4.565676 (-3.256391) | 0.068130 / 0.424275 (-0.356145) | 0.012189 / 0.007607 (0.004582) | 0.571299 / 0.226044 (0.345254) | 5.711420 / 2.268929 (3.442492) | 2.716748 / 55.444624 (-52.727876) | 2.369869 / 6.876477 (-4.506608) | 2.544240 / 2.142072 (0.402167) | 0.659955 / 4.805227 (-4.145272) | 0.136684 / 6.500664 (-6.363980) | 0.068962 / 0.075469 (-0.006507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.297659 / 1.841788 (-0.544129) | 14.012758 / 8.074308 (5.938449) | 14.324644 / 10.191392 (4.133252) | 0.144894 / 0.680424 (-0.535530) | 0.016751 / 0.534201 (-0.517450) | 0.361547 / 0.579283 (-0.217736) | 0.396595 / 0.434364 (-0.037769) | 0.422375 / 0.540337 (-0.117962) | 0.508209 / 1.386936 (-0.878727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ba5f81357b53099b1bedfbb277211dba3952257b \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006303 / 0.011353 (-0.005050) | 0.004043 / 0.011008 (-0.006965) | 0.096239 / 0.038508 (0.057731) | 0.029608 / 0.023109 (0.006498) | 0.321058 / 0.275898 (0.045160) | 0.367066 / 0.323480 (0.043587) | 0.005236 / 0.007986 (-0.002749) | 0.003342 / 0.004328 (-0.000987) | 0.074407 / 0.004250 (0.070157) | 0.038810 / 0.037052 (0.001757) | 0.332597 / 0.258489 (0.074108) | 0.363562 / 0.293841 (0.069721) | 0.025460 / 0.128546 (-0.103086) | 0.008426 / 0.075646 (-0.067221) | 0.316998 / 0.419271 (-0.102273) | 0.043621 / 0.043533 (0.000088) | 0.338043 / 0.255139 (0.082904) | 0.366441 / 0.283200 (0.083241) | 0.092061 / 0.141683 (-0.049622) | 1.461531 / 1.452155 (0.009376) | 1.538047 / 1.492716 (0.045331) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206796 / 0.018006 (0.188790) | 0.517959 / 0.000490 (0.517469) | 0.002745 / 0.000200 (0.002545) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022902 / 0.037411 (-0.014510) | 0.097901 / 0.014526 (0.083375) | 0.103664 / 0.176557 (-0.072893) | 0.163516 / 0.737135 (-0.573619) | 0.108561 / 0.296338 (-0.187778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418964 / 0.215209 (0.203755) | 4.159113 / 2.077655 (2.081458) | 1.843946 / 1.504120 (0.339827) | 1.641083 / 1.541195 (0.099888) | 1.686848 / 1.468490 
(0.218358) | 0.554583 / 4.584777 (-4.030194) | 3.409862 / 3.745712 (-0.335850) | 2.647904 / 5.269862 (-2.621958) | 1.355424 / 4.565676 (-3.210253) | 0.068229 / 0.424275 (-0.356046) | 0.012217 / 0.007607 (0.004610) | 0.515895 / 0.226044 (0.289851) | 5.144920 / 2.268929 (2.875991) | 2.298046 / 55.444624 (-53.146579) | 1.964735 / 6.876477 (-4.911741) | 2.075580 / 2.142072 (-0.066492) | 0.657104 / 4.805227 (-4.148123) | 0.134759 / 6.500664 (-6.365905) | 0.067545 / 0.075469 (-0.007924) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.233075 / 1.841788 (-0.608713) | 13.896762 / 8.074308 (5.822454) | 14.055143 / 10.191392 (3.863751) | 0.145507 / 0.680424 (-0.534917) | 0.016702 / 0.534201 (-0.517499) | 0.365157 / 0.579283 (-0.214126) | 0.385842 / 0.434364 (-0.048522) | 0.459993 / 0.540337 (-0.080344) | 0.547115 / 1.386936 (-0.839821) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006174 / 0.011353 (-0.005179) | 0.004191 / 0.011008 (-0.006817) | 0.078311 / 0.038508 (0.039803) | 0.028038 / 0.023109 (0.004928) | 0.360056 / 0.275898 (0.084158) | 0.398081 / 0.323480 (0.074602) | 0.005069 / 0.007986 (-0.002916) | 0.003464 / 0.004328 (-0.000864) | 0.077858 / 0.004250 (0.073608) | 0.039420 / 0.037052 (0.002367) | 0.361743 / 0.258489 (0.103254) | 0.404829 / 0.293841 (0.110988) | 0.025604 / 0.128546 (-0.102943) | 0.008573 / 0.075646 (-0.067074) | 0.084944 / 0.419271 (-0.334328) | 0.042652 / 0.043533 (-0.000881) | 0.368549 / 0.255139 (0.113410) | 0.385682 / 0.283200 (0.102482) | 0.099085 / 0.141683 (-0.042598) | 1.495815 / 1.452155 (0.043661) | 1.548168 / 1.492716 (0.055452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193737 / 0.018006 (0.175730) | 0.421871 / 0.000490 (0.421381) | 0.002306 / 0.000200 (0.002106) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025928 / 0.037411 (-0.011483) | 0.103410 / 0.014526 (0.088885) | 0.107931 / 0.176557 (-0.068626) | 0.157127 / 0.737135 (-0.580008) | 0.111892 / 0.296338 (-0.184446) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477562 / 0.215209 (0.262353) | 4.772711 / 2.077655 (2.695056) | 2.458725 / 1.504120 (0.954605) | 2.269871 / 1.541195 (0.728676) | 2.365502 / 1.468490 (0.897012) | 0.556182 / 4.584777 (-4.028595) | 3.408016 / 3.745712 (-0.337697) | 1.730639 / 5.269862 (-3.539222) | 1.000973 / 4.565676 (-3.564704) | 0.068293 / 0.424275 (-0.355982) | 0.012119 / 0.007607 (0.004512) | 0.581281 / 0.226044 (0.355236) | 5.811930 / 2.268929 (3.543001) | 2.890337 / 55.444624 (-52.554288) | 2.592156 / 6.876477 (-4.284321) | 2.687764 / 2.142072 (0.545691) | 0.664282 / 4.805227 (-4.140946) | 0.136029 / 6.500664 (-6.364635) | 0.067493 / 0.075469 (-0.007976) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330723 / 1.841788 (-0.511064) | 14.379172 / 8.074308 (6.304864) | 14.153286 / 10.191392 (3.961894) | 0.142942 / 0.680424 (-0.537482) | 0.016698 / 0.534201 (-0.517503) | 0.361044 / 0.579283 (-0.218239) | 0.393174 / 0.434364 (-0.041190) | 0.423107 / 0.540337 (-0.117231) | 0.514299 / 1.386936 (-0.872637) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1cb02285358ab4be6386e0a2aae40d267ff561fc \"CML watermark\")\n"
] | 2023-05-25T16:26:33 | 2023-05-26T12:22:04 | 2023-05-26T11:57:16 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5897",
"html_url": "https://github.com/huggingface/datasets/pull/5897",
"diff_url": "https://github.com/huggingface/datasets/pull/5897.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5897.patch",
"merged_at": "2023-05-26T11:57:16"
} | Fix cast on sliced `FixedSizeListArray`s.
Fix #5866 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5897/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5896/comments | https://api.github.com/repos/huggingface/datasets/issues/5896/events | https://github.com/huggingface/datasets/issues/5896 | 1,726,022,500 | I_kwDODunzps5m4QNk | 5,896 | HuggingFace does not cache downloaded files aggressively/early enough | {
"login": "geajack",
"id": 2124157,
"node_id": "MDQ6VXNlcjIxMjQxNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/geajack",
"html_url": "https://github.com/geajack",
"followers_url": "https://api.github.com/users/geajack/followers",
"following_url": "https://api.github.com/users/geajack/following{/other_user}",
"gists_url": "https://api.github.com/users/geajack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/geajack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/geajack/subscriptions",
"organizations_url": "https://api.github.com/users/geajack/orgs",
"repos_url": "https://api.github.com/users/geajack/repos",
"events_url": "https://api.github.com/users/geajack/events{/privacy}",
"received_events_url": "https://api.github.com/users/geajack/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-05-25T15:14:36 | 2023-05-25T15:14:36 | null | NONE | null | null | null | ### Describe the bug
I wrote the following script:
```
import datasets
dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
```
I ran it and spent 90 minutes downloading a 20GB file. Then I saw:
```
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20.3G/20.3G [1:30:29<00:00, 3.73MB/s]
Traceback (most recent call last):
File "/home/jack/Code/Projects/Transformers/Codebase/main.py", line 5, in <module>
dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
File "/home/jack/.local/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
builder_instance.download_and_prepare(
File "/home/jack/.local/lib/python3.10/site-packages/datasets/builder.py", line 883, in download_and_prepare
self._save_info()
File "/home/jack/.local/lib/python3.10/site-packages/datasets/builder.py", line 2037, in _save_info
import apache_beam as beam
ModuleNotFoundError: No module named 'apache_beam'
```
And the 20GB of data was seemingly instantly gone forever, because when I ran the script again, it had to do the download again.
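A plausible mitigation, assuming the missing optional dependency in the traceback is the only failure point (an assumption, not a confirmed fix): install `apache_beam` before the first run so the post-download step does not crash and discard the cached download.
```
# Assumption (not a confirmed fix): the crash comes only from the missing
# optional dependency shown in the traceback, so installing it up front
# should keep the post-download step from raising and losing the cache:
#   pip install apache-beam
import datasets

dataset = datasets.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
```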
### Steps to reproduce the bug
See above
### Expected behavior
See above
### Environment info
datasets 2.10.1
Python 3.10 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5896/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5895/comments | https://api.github.com/repos/huggingface/datasets/issues/5895/events | https://github.com/huggingface/datasets/issues/5895 | 1,725,467,252 | I_kwDODunzps5m2Ip0 | 5,895 | The dir name and split strings are confused when loading ArmelR/stack-exchange-instruction dataset | {
"login": "DongHande",
"id": 45357817,
"node_id": "MDQ6VXNlcjQ1MzU3ODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/45357817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DongHande",
"html_url": "https://github.com/DongHande",
"followers_url": "https://api.github.com/users/DongHande/followers",
"following_url": "https://api.github.com/users/DongHande/following{/other_user}",
"gists_url": "https://api.github.com/users/DongHande/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DongHande/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DongHande/subscriptions",
"organizations_url": "https://api.github.com/users/DongHande/orgs",
"repos_url": "https://api.github.com/users/DongHande/repos",
"events_url": "https://api.github.com/users/DongHande/events{/privacy}",
"received_events_url": "https://api.github.com/users/DongHande/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @DongHande.\r\n\r\nI think the issue is caused by the metadata in the dataset card: in the header of the `README.md`, they state that the dataset has 4 splits (\"finetune\", \"reward\", \"rl\", \"evaluation\"). \r\n```yaml\r\n splits:\r\n - name: finetune\r\n num_bytes: 6674567576\r\n num_examples: 3000000\r\n - name: reward\r\n num_bytes: 6674341521\r\n num_examples: 3000000\r\n - name: rl\r\n num_bytes: 6679279968\r\n num_examples: 3000000\r\n - name: evaluation\r\n num_bytes: 4022714493\r\n num_examples: 1807695\r\n```\r\n\r\n\r\nI guess the user wanted to define these as configs, instead of splits. This is not yet supported for no-script datasets, but will be soon supported. See:\r\n- #5331\r\n\r\nI think we should contact the dataset author to inform about the issue with the split names, as you already did: https://huggingface.co/datasets/ArmelR/stack-exchange-instruction/discussions/1\r\nLet's continue the discussion there!",
"Thank you! It has been fixed. "
] | 2023-05-25T09:39:06 | 2023-05-29T02:32:12 | 2023-05-29T02:32:12 | NONE | null | null | null | ### Describe the bug
When I load the ArmelR/stack-exchange-instruction dataset, I encounter a bug that appears to be caused by confusion between the data_dir string and the split string of the dataset.
When I use the script `datasets.load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)`, it fails, but it succeeds when I add the `streaming=True` parameter.
The website of the dataset is https://huggingface.co/datasets/ArmelR/stack-exchange-instruction/ .
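For reference, a minimal sketch of the streaming invocation that the report says succeeds (arguments copied from the failing call above):
```
from datasets import load_dataset

# Identical arguments to the failing call, plus streaming=True,
# which the report says avoids the crash.
ds = load_dataset(
    "ArmelR/stack-exchange-instruction",
    data_dir="data/finetune",
    split="train",
    streaming=True,
    use_auth_token=True,
)
```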
The traceback logs are as below:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 985, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 1706, in _prepare_split
split_info = self.info.splits[split_generator.name]
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/splits.py", line 530, in __getitem__
instructions = make_file_instructions(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 112, in make_file_instructions
name2filenames = {
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 113, in <dictcomp>
info.name: filenames_for_dataset_split(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 70, in filenames_for_dataset_split
prefix = filename_prefix_for_split(dataset_name, split)
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 54, in filename_prefix_for_split
if os.path.basename(name) != name:
File "/home/xxx/miniconda3/envs/code/lib/python3.9/posixpath.py", line 142, in basename
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
### Steps to reproduce the bug
1. Import the library function: ```from datasets import load_dataset```
2. Load the dataset: ```ds = load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)```
### Expected behavior
The dataset can be loaded successfully without the streaming setting.
### Environment info
Linux,
python=3.9
datasets=2.12.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5895/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5894/comments | https://api.github.com/repos/huggingface/datasets/issues/5894/events | https://github.com/huggingface/datasets/pull/5894 | 1,724,774,910 | PR_kwDODunzps5RSjot | 5,894 | Force overwrite existing filesystem protocol | {
"login": "baskrahmer",
"id": 24520725,
"node_id": "MDQ6VXNlcjI0NTIwNzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/24520725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baskrahmer",
"html_url": "https://github.com/baskrahmer",
"followers_url": "https://api.github.com/users/baskrahmer/followers",
"following_url": "https://api.github.com/users/baskrahmer/following{/other_user}",
"gists_url": "https://api.github.com/users/baskrahmer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baskrahmer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baskrahmer/subscriptions",
"organizations_url": "https://api.github.com/users/baskrahmer/orgs",
"repos_url": "https://api.github.com/users/baskrahmer/repos",
"events_url": "https://api.github.com/users/baskrahmer/events{/privacy}",
"received_events_url": "https://api.github.com/users/baskrahmer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009139 / 0.011353 (-0.002214) | 0.005634 / 0.011008 (-0.005374) | 0.129587 / 0.038508 (0.091079) | 0.038298 / 0.023109 (0.015189) | 0.428149 / 0.275898 (0.152251) | 0.443744 / 0.323480 (0.120264) | 0.007501 / 0.007986 (-0.000485) | 0.005999 / 0.004328 (0.001671) | 0.100796 / 0.004250 (0.096546) | 0.053236 / 0.037052 (0.016184) | 0.423868 / 0.258489 (0.165379) | 0.460110 / 0.293841 (0.166269) | 0.041255 / 0.128546 (-0.087291) | 0.013790 / 0.075646 (-0.061856) | 0.438398 / 0.419271 (0.019127) | 0.063086 / 0.043533 (0.019553) | 0.414826 / 0.255139 (0.159687) | 0.460652 / 0.283200 (0.177453) | 0.121223 / 0.141683 (-0.020460) | 1.754430 / 1.452155 (0.302275) | 1.900037 / 1.492716 (0.407320) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.027222 / 0.018006 (0.009216) | 0.617666 / 0.000490 (0.617176) | 0.022443 / 0.000200 (0.022243) | 0.000820 / 0.000054 (0.000766) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030397 / 0.037411 (-0.007014) | 0.125732 / 0.014526 (0.111206) | 0.149805 / 0.176557 (-0.026752) | 0.234048 / 0.737135 (-0.503087) | 0.143108 / 0.296338 (-0.153231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.631189 / 0.215209 (0.415980) | 6.182871 / 2.077655 (4.105216) | 2.635730 / 1.504120 (1.131610) | 2.231429 / 1.541195 (0.690235) | 2.438360 / 1.468490 
(0.969870) | 0.861170 / 4.584777 (-3.723607) | 5.785984 / 3.745712 (2.040272) | 2.758358 / 5.269862 (-2.511504) | 1.678095 / 4.565676 (-2.887582) | 0.105961 / 0.424275 (-0.318314) | 0.013659 / 0.007607 (0.006052) | 0.762943 / 0.226044 (0.536898) | 7.774399 / 2.268929 (5.505471) | 3.319027 / 55.444624 (-52.125598) | 2.700248 / 6.876477 (-4.176229) | 3.008581 / 2.142072 (0.866509) | 1.122522 / 4.805227 (-3.682705) | 0.214832 / 6.500664 (-6.285832) | 0.085281 / 0.075469 (0.009811) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.647610 / 1.841788 (-0.194177) | 18.178316 / 8.074308 (10.104008) | 21.199177 / 10.191392 (11.007785) | 0.247063 / 0.680424 (-0.433361) | 0.030443 / 0.534201 (-0.503758) | 0.512527 / 0.579283 (-0.066757) | 0.640758 / 0.434364 (0.206394) | 0.639986 / 0.540337 (0.099649) | 0.760113 / 1.386936 (-0.626823) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008293 / 0.011353 (-0.003060) | 0.005360 / 0.011008 (-0.005648) | 0.102932 / 0.038508 (0.064424) | 0.037457 / 0.023109 (0.014347) | 0.444114 / 0.275898 (0.168216) | 0.512855 / 0.323480 (0.189375) | 0.007030 / 0.007986 (-0.000956) | 0.004954 / 0.004328 (0.000625) | 0.095757 / 0.004250 (0.091507) | 0.051239 / 0.037052 (0.014187) | 0.471118 / 0.258489 (0.212629) | 0.517764 / 0.293841 (0.223923) | 0.041953 / 0.128546 (-0.086593) | 0.013748 / 0.075646 (-0.061898) | 0.118089 / 0.419271 (-0.301182) | 0.060159 / 0.043533 (0.016626) | 0.466011 / 0.255139 (0.210872) | 0.489180 / 0.283200 (0.205980) | 0.123250 / 0.141683 (-0.018433) | 1.714738 / 1.452155 (0.262584) | 1.838571 / 1.492716 (0.345855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267792 / 0.018006 (0.249785) | 0.624313 / 0.000490 (0.623824) | 0.007315 / 0.000200 (0.007115) | 0.000136 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033751 / 0.037411 (-0.003661) | 0.122819 / 0.014526 (0.108293) | 0.148270 / 0.176557 (-0.028286) | 0.198581 / 0.737135 (-0.538554) | 0.144845 / 0.296338 (-0.151494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620631 / 0.215209 (0.405422) | 6.224665 / 2.077655 (4.147010) | 2.856592 / 1.504120 (1.352473) | 2.525089 / 1.541195 (0.983894) | 2.600198 / 1.468490 (1.131708) | 0.872038 / 4.584777 (-3.712739) | 5.571650 / 3.745712 (1.825937) | 5.907643 / 5.269862 (0.637782) | 2.348770 / 4.565676 (-2.216906) | 0.111665 / 0.424275 (-0.312610) | 0.013886 / 0.007607 (0.006278) | 0.762154 / 0.226044 (0.536109) | 7.792686 / 2.268929 (5.523758) | 3.601122 / 55.444624 (-51.843503) | 2.939412 / 6.876477 (-3.937064) | 2.973430 / 2.142072 (0.831358) | 1.065016 / 4.805227 (-3.740211) | 0.221701 / 6.500664 (-6.278963) | 0.088157 / 0.075469 (0.012688) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.771061 / 1.841788 (-0.070727) | 18.826926 / 8.074308 (10.752618) | 21.283830 / 10.191392 (11.092438) | 0.239233 / 0.680424 (-0.441191) | 0.026159 / 0.534201 (-0.508042) | 0.487074 / 0.579283 (-0.092209) | 0.623241 / 0.434364 (0.188877) | 0.600506 / 0.540337 (0.060169) | 0.691271 / 1.386936 (-0.695665) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1bbe2c3496498a6415765b517ac4bc600a02ad06 \"CML watermark\")\n"
] | 2023-05-24T21:41:53 | 2023-05-25T06:52:08 | 2023-05-25T06:42:33 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5894",
"html_url": "https://github.com/huggingface/datasets/pull/5894",
"diff_url": "https://github.com/huggingface/datasets/pull/5894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5894.patch",
"merged_at": "2023-05-25T06:42:33"
} | Fix #5876 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5894/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5893/comments | https://api.github.com/repos/huggingface/datasets/issues/5893/events | https://github.com/huggingface/datasets/pull/5893 | 1,722,519,056 | PR_kwDODunzps5RK40K | 5,893 | Load cached dataset as iterable | {
"login": "mariusz-jachimowicz-83",
"id": 10278877,
"node_id": "MDQ6VXNlcjEwMjc4ODc3",
"avatar_url": "https://avatars.githubusercontent.com/u/10278877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariusz-jachimowicz-83",
"html_url": "https://github.com/mariusz-jachimowicz-83",
"followers_url": "https://api.github.com/users/mariusz-jachimowicz-83/followers",
"following_url": "https://api.github.com/users/mariusz-jachimowicz-83/following{/other_user}",
"gists_url": "https://api.github.com/users/mariusz-jachimowicz-83/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariusz-jachimowicz-83/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariusz-jachimowicz-83/subscriptions",
"organizations_url": "https://api.github.com/users/mariusz-jachimowicz-83/orgs",
"repos_url": "https://api.github.com/users/mariusz-jachimowicz-83/repos",
"events_url": "https://api.github.com/users/mariusz-jachimowicz-83/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariusz-jachimowicz-83/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Could you please look into that and review?",
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq I refactored the code. Could you please check is it what you requested?",
"@lhoestq Thanks for a review. Excellent tips. All tips applied. ",
"I think there is just PythonFormatter that needs to be imported in the test file and we should be good to merge",
"@lhoestq that is weird. I have linter error when I do it.",
"@lhoestq Now it should work properly.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006152 / 0.011353 (-0.005201) | 0.004169 / 0.011008 (-0.006839) | 0.097968 / 0.038508 (0.059460) | 0.028325 / 0.023109 (0.005216) | 0.308958 / 0.275898 (0.033060) | 0.341832 / 0.323480 (0.018352) | 0.005098 / 0.007986 (-0.002887) | 0.004721 / 0.004328 (0.000393) | 0.075067 / 0.004250 (0.070817) | 0.040514 / 0.037052 (0.003462) | 0.308355 / 0.258489 (0.049866) | 0.351063 / 0.293841 (0.057222) | 0.025261 / 0.128546 (-0.103285) | 0.008483 / 0.075646 (-0.067163) | 0.321219 / 0.419271 (-0.098052) | 0.058258 / 0.043533 (0.014725) | 0.312572 / 0.255139 (0.057433) | 0.330667 / 0.283200 (0.047467) | 0.091047 / 0.141683 (-0.050635) | 1.536541 / 1.452155 (0.084387) | 1.606566 / 1.492716 (0.113850) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213234 / 0.018006 (0.195228) | 0.494801 / 0.000490 (0.494311) | 0.003764 / 0.000200 (0.003564) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023653 / 0.037411 (-0.013758) | 0.097176 / 0.014526 (0.082650) | 0.102961 / 0.176557 (-0.073595) | 0.164285 / 0.737135 (-0.572851) | 0.107586 / 0.296338 (-0.188753) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421402 / 0.215209 (0.206193) | 4.195828 / 2.077655 (2.118174) | 1.884664 / 1.504120 (0.380544) | 1.679750 / 1.541195 (0.138556) | 1.719725 / 1.468490 
(0.251235) | 0.552290 / 4.584777 (-4.032486) | 3.386337 / 3.745712 (-0.359375) | 1.771527 / 5.269862 (-3.498334) | 1.133327 / 4.565676 (-3.432349) | 0.067911 / 0.424275 (-0.356364) | 0.012572 / 0.007607 (0.004965) | 0.518004 / 0.226044 (0.291960) | 5.192381 / 2.268929 (2.923453) | 2.316032 / 55.444624 (-53.128592) | 1.993264 / 6.876477 (-4.883212) | 2.071009 / 2.142072 (-0.071063) | 0.655062 / 4.805227 (-4.150165) | 0.135488 / 6.500664 (-6.365177) | 0.067273 / 0.075469 (-0.008196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.217731 / 1.841788 (-0.624056) | 13.812927 / 8.074308 (5.738619) | 13.137886 / 10.191392 (2.946494) | 0.143102 / 0.680424 (-0.537322) | 0.016884 / 0.534201 (-0.517317) | 0.370106 / 0.579283 (-0.209178) | 0.392349 / 0.434364 (-0.042015) | 0.424501 / 0.540337 (-0.115837) | 0.509830 / 1.386936 (-0.877106) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006210 / 0.011353 (-0.005142) | 0.004215 / 0.011008 (-0.006793) | 0.076129 / 0.038508 (0.037621) | 0.027825 / 0.023109 (0.004716) | 0.403973 / 0.275898 (0.128075) | 0.441089 / 0.323480 (0.117609) | 0.005420 / 0.007986 (-0.002566) | 0.004870 / 0.004328 (0.000542) | 0.075558 / 0.004250 (0.071308) | 0.039464 / 0.037052 (0.002411) | 0.404329 / 0.258489 (0.145840) | 0.447213 / 0.293841 (0.153372) | 0.025877 / 0.128546 (-0.102669) | 0.008660 / 0.075646 (-0.066987) | 0.081849 / 0.419271 (-0.337422) | 0.044551 / 0.043533 (0.001018) | 0.379102 / 0.255139 (0.123963) | 0.403104 / 0.283200 (0.119905) | 0.094754 / 0.141683 (-0.046929) | 1.460772 / 1.452155 (0.008617) | 1.569531 / 1.492716 (0.076815) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183923 / 0.018006 (0.165917) | 0.420708 / 0.000490 (0.420219) | 0.002091 / 0.000200 (0.001891) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026180 / 0.037411 (-0.011231) | 0.101529 / 0.014526 (0.087003) | 0.108739 / 0.176557 (-0.067818) | 0.160702 / 0.737135 (-0.576433) | 0.111739 / 0.296338 (-0.184600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448671 / 0.215209 (0.233462) | 4.469287 / 2.077655 (2.391632) | 2.244335 / 1.504120 (0.740215) | 2.107495 / 1.541195 (0.566301) | 2.224763 / 1.468490 (0.756272) | 0.554006 / 4.584777 (-4.030771) | 3.390109 / 3.745712 (-0.355603) | 1.744189 / 5.269862 (-3.525673) | 1.008515 / 4.565676 (-3.557161) | 0.067904 / 0.424275 (-0.356371) | 0.012243 / 0.007607 (0.004636) | 0.557635 / 0.226044 (0.331590) | 5.610383 / 2.268929 (3.341454) | 2.687326 / 55.444624 (-52.757298) | 2.405262 / 6.876477 (-4.471214) | 2.527300 / 2.142072 (0.385227) | 0.662282 / 4.805227 (-4.142945) | 0.136225 / 6.500664 (-6.364439) | 0.068136 / 0.075469 (-0.007334) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310791 / 1.841788 (-0.530997) | 14.370381 / 8.074308 (6.296072) | 14.122675 / 10.191392 (3.931283) | 0.152302 / 0.680424 (-0.528122) | 0.016624 / 0.534201 (-0.517577) | 0.359395 / 0.579283 (-0.219888) | 0.392131 / 0.434364 (-0.042233) | 0.423796 / 0.540337 (-0.116542) | 0.511387 / 1.386936 (-0.875549) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d6a61a1af1502677a6f2333896a6ffeede9ca21b \"CML watermark\")\n"
] | 2023-05-23T17:40:35 | 2023-06-01T11:58:24 | 2023-06-01T11:51:29 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5893",
"html_url": "https://github.com/huggingface/datasets/pull/5893",
"diff_url": "https://github.com/huggingface/datasets/pull/5893.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5893.patch",
"merged_at": "2023-06-01T11:51:29"
} | To be used to train models, it allows loading an IterableDataset from the cached Arrow file.
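For example, a minimal usage sketch — assuming the conversion is exposed as `Dataset.to_iterable_dataset` (the exact entry point of this PR may differ):

```python
from datasets import load_dataset

# the dataset is cached as an Arrow file on disk after this call
ds = load_dataset("imdb", split="train")

# stream examples from the cached Arrow file instead of using random access;
# `num_shards` is a hypothetical knob here to allow parallel/shuffled iteration
ids = ds.to_iterable_dataset(num_shards=64)
for example in ids.shuffle(buffer_size=10_000):
    ...  # feed `example` to the training loop
```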
See https://github.com/huggingface/datasets/issues/5481 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5893/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5892/comments | https://api.github.com/repos/huggingface/datasets/issues/5892/events | https://github.com/huggingface/datasets/issues/5892 | 1,722,503,824 | I_kwDODunzps5mq1KQ | 5,892 | User access requests with manual review do not notify the dataset owner | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"cc @SBrandeis",
"I think this has been addressed.\r\n\r\nPlease open a new issue if you are still not getting notified."
] | 2023-05-23T17:27:46 | 2023-07-21T13:55:37 | 2023-07-21T13:55:36 | CONTRIBUTOR | null | null | null | ### Describe the bug
When user access requests are enabled and new requests are set to Manual Review, the dataset owner should be notified of the pending requests. However, currently nothing happens, and so the dataset request can go unanswered for quite some time until the owner happens to check that particular dataset's Settings pane.
### Steps to reproduce the bug
1. Enable a dataset's user access requests
2. Set to Manual Review
3. Ask another HF user to request access to the dataset
4. Dataset owner is not notified
### Expected behavior
The dataset owner should receive some kind of notification, perhaps in their HF site inbox, or by email, when a dataset access request is made and manual review is enabled.
### Environment info
n/a | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5892/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5891/comments | https://api.github.com/repos/huggingface/datasets/issues/5891/events | https://github.com/huggingface/datasets/pull/5891 | 1,722,384,135 | PR_kwDODunzps5RKchn | 5,891 | Make split slicing consistent with list slicing | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5891). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006916 / 0.011353 (-0.004437) | 0.004749 / 0.011008 (-0.006259) | 0.096086 / 0.038508 (0.057578) | 0.035448 / 0.023109 (0.012338) | 0.299645 / 0.275898 (0.023747) | 0.331279 / 0.323480 (0.007799) | 0.006018 / 0.007986 (-0.001968) | 0.004210 / 0.004328 (-0.000118) | 0.072998 / 0.004250 (0.068747) | 0.050082 / 0.037052 (0.013030) | 0.297714 / 0.258489 (0.039225) | 0.365523 / 0.293841 (0.071682) | 0.028081 / 0.128546 (-0.100465) | 0.009072 / 0.075646 (-0.066574) | 0.327628 / 0.419271 (-0.091643) | 0.051165 / 0.043533 (0.007633) | 0.295091 / 0.255139 (0.039952) | 0.320052 / 0.283200 (0.036852) | 0.109841 / 0.141683 (-0.031842) | 1.467867 / 1.452155 (0.015712) | 1.572600 / 1.492716 (0.079884) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281490 / 0.018006 (0.263484) | 0.499259 / 0.000490 (0.498770) | 0.000691 / 0.000200 (0.000491) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027548 / 0.037411 (-0.009863) | 0.106592 / 0.014526 (0.092066) | 0.118654 / 0.176557 (-0.057902) | 0.174313 / 0.737135 (-0.562822) | 0.124491 / 0.296338 (-0.171848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399674 / 0.215209 (0.184465) | 3.984092 / 2.077655 (1.906437) | 1.790935 / 1.504120 (0.286815) | 1.593612 / 1.541195 (0.052417) | 1.694595 / 1.468490 
(0.226105) | 0.517588 / 4.584777 (-4.067189) | 3.724353 / 3.745712 (-0.021359) | 3.244807 / 5.269862 (-2.025054) | 1.602929 / 4.565676 (-2.962748) | 0.065334 / 0.424275 (-0.358941) | 0.012259 / 0.007607 (0.004652) | 0.501355 / 0.226044 (0.275311) | 4.996546 / 2.268929 (2.727618) | 2.279333 / 55.444624 (-53.165291) | 1.940126 / 6.876477 (-4.936351) | 2.122945 / 2.142072 (-0.019128) | 0.626104 / 4.805227 (-4.179123) | 0.141278 / 6.500664 (-6.359386) | 0.064522 / 0.075469 (-0.010947) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195351 / 1.841788 (-0.646436) | 15.258932 / 8.074308 (7.184624) | 14.627623 / 10.191392 (4.436231) | 0.266897 / 0.680424 (-0.413527) | 0.017557 / 0.534201 (-0.516644) | 0.392932 / 0.579283 (-0.186351) | 0.416409 / 0.434364 (-0.017955) | 0.469100 / 0.540337 (-0.071237) | 0.556247 / 1.386936 (-0.830689) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006880 / 0.011353 (-0.004473) | 0.004837 / 0.011008 (-0.006171) | 0.074518 / 0.038508 (0.036010) | 0.034204 / 0.023109 (0.011095) | 0.365100 / 0.275898 (0.089202) | 0.394976 / 0.323480 (0.071496) | 0.006364 / 0.007986 (-0.001621) | 0.004269 / 0.004328 (-0.000060) | 0.073531 / 0.004250 (0.069281) | 0.051334 / 0.037052 (0.014281) | 0.373904 / 0.258489 (0.115415) | 0.413662 / 0.293841 (0.119821) | 0.028779 / 0.128546 (-0.099767) | 0.009292 / 0.075646 (-0.066354) | 0.081574 / 0.419271 (-0.337698) | 0.046531 / 0.043533 (0.002998) | 0.368995 / 0.255139 (0.113856) | 0.376938 / 0.283200 (0.093739) | 0.112576 / 0.141683 (-0.029107) | 1.458880 / 1.452155 (0.006725) | 1.550918 / 1.492716 (0.058202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.319521 / 0.018006 (0.301515) | 0.510146 / 0.000490 (0.509656) | 0.000438 / 0.000200 (0.000238) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033082 / 0.037411 (-0.004329) | 0.118009 / 0.014526 (0.103483) | 0.127108 / 0.176557 (-0.049448) | 0.176600 / 0.737135 (-0.560535) | 0.133790 / 0.296338 (-0.162549) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437360 / 0.215209 (0.222151) | 4.367426 / 2.077655 (2.289771) | 2.193646 / 1.504120 (0.689526) | 2.025002 / 1.541195 (0.483808) | 2.142347 / 1.468490 (0.673856) | 0.525497 / 4.584777 (-4.059280) | 3.751275 / 3.745712 (0.005563) | 1.912271 / 5.269862 (-3.357590) | 1.087286 / 4.565676 (-3.478390) | 0.066328 / 0.424275 (-0.357947) | 0.011904 / 0.007607 (0.004297) | 0.545870 / 0.226044 (0.319825) | 5.434481 / 2.268929 (3.165552) | 2.719745 / 55.444624 (-52.724880) | 2.445001 / 6.876477 (-4.431476) | 2.500205 / 2.142072 (0.358133) | 0.645735 / 4.805227 (-4.159492) | 0.144210 / 6.500664 (-6.356455) | 0.065688 / 0.075469 (-0.009781) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273522 / 1.841788 (-0.568265) | 15.771778 / 8.074308 (7.697470) | 14.685261 / 10.191392 (4.493869) | 0.176523 / 0.680424 (-0.503900) | 0.017877 / 0.534201 (-0.516324) | 0.392687 / 0.579283 (-0.186596) | 0.449992 / 0.434364 (0.015628) | 0.462851 / 0.540337 (-0.077487) | 0.560178 / 1.386936 (-0.826758) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0fa3ef6eba906ee1214e0596d15a78fc358909f4 \"CML watermark\")\n"
] | 2023-05-23T16:04:33 | 2023-05-23T16:11:12 | null | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5891",
"html_url": "https://github.com/huggingface/datasets/pull/5891",
"diff_url": "https://github.com/huggingface/datasets/pull/5891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5891.patch",
"merged_at": null
} | Fix #1774, fix #5875
TODO: a test | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5891/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5889/comments | https://api.github.com/repos/huggingface/datasets/issues/5889/events | https://github.com/huggingface/datasets/issues/5889 | 1,722,373,618 | I_kwDODunzps5mqVXy | 5,889 | Token Alignment for input and output data over train and test batch/dataset. | {
"login": "akesh1235",
"id": 125154243,
"node_id": "U_kgDOB3Wzww",
"avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akesh1235",
"html_url": "https://github.com/akesh1235",
"followers_url": "https://api.github.com/users/akesh1235/followers",
"following_url": "https://api.github.com/users/akesh1235/following{/other_user}",
"gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions",
"organizations_url": "https://api.github.com/users/akesh1235/orgs",
"repos_url": "https://api.github.com/users/akesh1235/repos",
"events_url": "https://api.github.com/users/akesh1235/events{/privacy}",
"received_events_url": "https://api.github.com/users/akesh1235/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-05-23T15:58:55 | 2023-05-23T15:58:55 | null | NONE | null | null | null | `data`
```
DatasetDict({
    train: Dataset({
        features: ['input', 'output'],
        num_rows: 4500
    })
    test: Dataset({
        features: ['input', 'output'],
        num_rows: 500
    })
})
```
**# input (incorrect sentence)**
`data['train'][0]['input']`
**>>** 'We are meet sunday 10am12pmET in Crown Heights Brooklyn New York'
**# output (correct sentence)**
`data['train'][0]['output']`
**>>** 'We meet Sundays 10am-12pmET in Crown Heights, Brooklyn, New York.'
**I want to align the output tokens with the input**
```
# tokenize both inputs and targets
def tokenize_fn(batch):
    # tokenize the input sequence first
    # this populates input_ids, attention_mask, etc.
    tokenized_inputs = tokenizer(
        batch['input']
    )
    labels_batch = tokenizer.tokenize(batch['output'])  # original targets
    aligned_labels_batch = []
    for i, labels in enumerate(labels_batch):
        word_ids = tokenized_inputs[i].word_ids()
        aligned_labels_batch.append(align_targets(labels, word_ids))  # align_targets is another user-defined function, called here
    # recall: the 'target' must be stored in a key called 'labels'
    tokenized_inputs['labels'] = aligned_labels_batch
    return tokenized_inputs
```
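(`align_targets` is not defined in the snippet above; a purely hypothetical sketch of such a helper — assuming the standard `word_ids`-based alignment where each sub-token inherits its word's label and special tokens are masked with `-100` — could be:)

```python
# Hypothetical sketch of the user-defined `align_targets` helper.
# Assumes `labels` holds one label per input word and `word_ids` maps
# each sub-token to its word index (None for special tokens).
def align_targets(labels, word_ids):
    aligned = []
    for word_id in word_ids:
        if word_id is None:
            aligned.append(-100)  # special tokens ([CLS], [SEP], padding) are ignored by the loss
        else:
            aligned.append(labels[word_id])  # every sub-token inherits its word's label
    return aligned
```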
```
data.map(
    tokenize_fn,
    batched=True,
    remove_columns=data['train'].column_names,
)
```
When this user-defined function is mapped over every record of the train and test batches, I am getting the following errors:
**1.** **raise DatasetTransformationNotAllowedError(
3457 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."**
**2.** **TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]** | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5889/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5887/comments | https://api.github.com/repos/huggingface/datasets/issues/5887/events | https://github.com/huggingface/datasets/issues/5887 | 1,722,166,382 | I_kwDODunzps5mpixu | 5,887 | HuggingFace dataset example gives error | {
"login": "donhuvy",
"id": 1328316,
"node_id": "MDQ6VXNlcjEzMjgzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1328316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donhuvy",
"html_url": "https://github.com/donhuvy",
"followers_url": "https://api.github.com/users/donhuvy/followers",
"following_url": "https://api.github.com/users/donhuvy/following{/other_user}",
"gists_url": "https://api.github.com/users/donhuvy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donhuvy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donhuvy/subscriptions",
"organizations_url": "https://api.github.com/users/donhuvy/orgs",
"repos_url": "https://api.github.com/users/donhuvy/repos",
"events_url": "https://api.github.com/users/donhuvy/events{/privacy}",
"received_events_url": "https://api.github.com/users/donhuvy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Nice catch @donhuvy, that's because some models don't need the `token_type_ids`, as in this case, as the example is using `distilbert-base-cased`, and according to the DistilBert documentation at https://huggingface.co/transformers/v3.0.2/model_doc/distilbert.html, `DistilBert doesn’t have token_type_ids, you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP])`. `token_type_ids` are neither required in some other well known models such as RoBERTa. \r\n\r\nHere the issue comes due to a mismatch between the tokenizer and the model, as the Colab is using a BERT tokenizer (`bert-base-cased`), while the model is a DistilBERT (`distilbert-base-cased`), so aligning the tokenizer and the model solves it!",
"#self-assign",
"@donhuvy I've created https://github.com/huggingface/datasets/pull/5902 to solve it! 🤗",
"This has been addressed in #5902.\r\n\r\nThe Quicktour notebook is deprecated now - please use the notebook version of the [Quickstart doc page](https://huggingface.co/docs/datasets/main/en/quickstart) instead (\"Open in Colab\" button)."
] | 2023-05-23T14:09:05 | 2023-07-25T14:01:01 | 2023-07-25T14:01:00 | NONE | null | null | null | ### Describe the bug
![image](https://github.com/huggingface/datasets/assets/1328316/1f4f0086-3db9-4c79-906b-05a375357cce)
![image](https://github.com/huggingface/datasets/assets/1328316/733ebd3d-89b9-4ece-b80a-00ab5b0a4122)
### Steps to reproduce the bug
Use the reference document at https://colab.research.google.com/github/huggingface/datasets/blob/main/notebooks/Overview.ipynb#scrollTo=biqDH9vpvSVz
```python
# Now let's train our model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.train().to(device)
for i, batch in enumerate(dataloader):
    batch.to(device)
    outputs = model(**batch)
    loss = outputs.loss
    loss.backward()
    optimizer.step()
    model.zero_grad()
    print(f'Step {i} - loss: {loss:.3}')
    if i > 5:
        break
```
Error
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-44-7040b885f382>](https://localhost:8080/#) in <cell line: 5>()
5 for i, batch in enumerate(dataloader):
6 batch.to(device)
----> 7 outputs = model(**batch)
8 loss = outputs.loss
9 loss.backward()
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: DistilBertForQuestionAnswering.forward() got an unexpected keyword argument 'token_type_ids'
```
https://github.com/huggingface/datasets/assets/1328316/5d8b1d61-9337-4d59-8423-4f37f834c156
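As noted in the comments above, the root cause is a tokenizer/model mismatch: a `bert-base-cased` tokenizer produces `token_type_ids`, which `DistilBertForQuestionAnswering.forward()` does not accept. Two hedged workaround sketches (reusing `dataloader`, `model`, and `device` from the snippet above):

```python
# (a) Align the tokenizer with the model checkpoint so token_type_ids
#     are never produced in the first place
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")

# (b) Or drop the offending key before the forward pass
for i, batch in enumerate(dataloader):
    batch = {k: v.to(device) for k, v in batch.items()}
    batch.pop("token_type_ids", None)  # DistilBERT takes no token_type_ids
    outputs = model(**batch)
```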
### Expected behavior
The script runs successfully on Google Colab (free).
### Environment info
Windows 11 x64, Google Colab free (my Google Drive is almost empty, about 200 MB, but I don't think it causes the problem) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5887/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5886/comments | https://api.github.com/repos/huggingface/datasets/issues/5886/events | https://github.com/huggingface/datasets/issues/5886 | 1,721,070,225 | I_kwDODunzps5mlXKR | 5,886 | Use work-stealing algorithm for parallel computing | {
"login": "1014661165",
"id": 46060451,
"node_id": "MDQ6VXNlcjQ2MDYwNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/46060451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1014661165",
"html_url": "https://github.com/1014661165",
"followers_url": "https://api.github.com/users/1014661165/followers",
"following_url": "https://api.github.com/users/1014661165/following{/other_user}",
"gists_url": "https://api.github.com/users/1014661165/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1014661165/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1014661165/subscriptions",
"organizations_url": "https://api.github.com/users/1014661165/orgs",
"repos_url": "https://api.github.com/users/1014661165/repos",
"events_url": "https://api.github.com/users/1014661165/events{/privacy}",
"received_events_url": "https://api.github.com/users/1014661165/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Alternatively we could set the number of shards to be a factor than the number of processes (current they're equal) - this way it will be less likely to end up with a shard that is significantly slower than all the other ones."
] | 2023-05-23T03:08:44 | 2023-05-24T15:30:09 | null | NONE | null | null | null | ### Feature request
When I used the Dataset.map API to process data concurrently, I found that
it gets slower and slower as it gets closer to completion. Then I read the source code of arrow_dataset.py and found that it shards the dataset and uses a multiprocessing pool to execute each shard. This may cause the slowest task to drag out the entire program's execution time, especially when processing a huge dataset.
### Motivation
Use a work-stealing algorithm instead of static sharding for parallel computing, to optimize performance.
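A conceptual sketch of the difference (not the actual `Dataset.map` internals): instead of pre-assigning one static shard per worker, workers pull small batches from a shared pool as they become free, so one slow batch no longer stalls the whole job.

```python
# Conceptual sketch only: dynamic work distribution via imap_unordered,
# which approximates work stealing — idle workers immediately pick up
# the next pending batch instead of waiting on a fixed shard.
from multiprocessing import Pool

def process_batch(batch):
    return [x * 2 for x in batch]  # placeholder for the real map function

if __name__ == "__main__":
    data = list(range(10_000))
    batches = [data[i:i + 64] for i in range(0, len(data), 64)]
    with Pool(processes=4) as pool:
        results = []
        for processed in pool.imap_unordered(process_batch, batches):
            results.extend(processed)
    # note: results arrive out of order here; a real implementation would
    # tag batches with their index and reorder at the end
```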
### Your contribution
just an idea. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5886/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5885/comments | https://api.github.com/repos/huggingface/datasets/issues/5885/events | https://github.com/huggingface/datasets/pull/5885 | 1,720,954,440 | PR_kwDODunzps5RFjTL | 5,885 | Modify `is_remote_filesystem` to return True for FUSE-mounted paths | {
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5885). All of your documentation changes will be reflected on that endpoint.",
"@lhoestq would you or another maintainer be able to review please? :)",
"Why you do need to support FUSE mounted paths ?\r\n\r\n`datasets` uses data that live on disk for fast lookups - FUSE mounted disks would lead to poor performance and I wouldn't recomment using it.",
"Fuse is commonly used to mount remote file systems (e.g. S3, DBFS) as a local directory. Since it's slower than using an actual local device, it's better to treat it as remote to reduce latency.",
"I think people would be confused if they don't have the same dataset behavior depending on the disk type.\r\n\r\nIf they want to use a remote bucket they should use the remote URI instead, e.g. `s3://...`. Advancements on this are tracked at #5281 "
] | 2023-05-23T01:04:54 | 2023-05-25T08:50:48 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5885",
"html_url": "https://github.com/huggingface/datasets/pull/5885",
"diff_url": "https://github.com/huggingface/datasets/pull/5885.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5885.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5885/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5888/comments | https://api.github.com/repos/huggingface/datasets/issues/5888/events | https://github.com/huggingface/datasets/issues/5888 | 1,722,290,363 | I_kwDODunzps5mqBC7 | 5,888 | A way to upload and visualize .mp4 files (millions of them) as part of a dataset | {
"login": "AntreasAntoniou",
"id": 10792502,
"node_id": "MDQ6VXNlcjEwNzkyNTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AntreasAntoniou",
"html_url": "https://github.com/AntreasAntoniou",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}",
"gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions",
"organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs",
"repos_url": "https://api.github.com/users/AntreasAntoniou/repos",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi! \r\n\r\nYou want to use `push_to_hub` (creates Parquet files) instead of `save_to_disk` (creates Arrow files) when creating a Hub dataset. Parquet is designed for long-term storage and takes less space than the Arrow format, and, most importantly, `load_dataset` can parse it, which should fix the viewer. \r\n\r\nRegarding the dataset generation, `Dataset.from_generator` with the video data represented as `datasets.Value(\"binary\")` followed by `push_to_hub` should work (if the `push_to_hub` step times out, restart it to resume uploading)\r\n\r\nPS: Once the dataset is uploaded, to make working with the dataset easier, it's a good idea to add a [transform](https://huggingface.co/docs/datasets/main/en/process#format-transform) to the README that shows how to decode the binary video data into something a model can understand. Also, if you get an `ArrowInvalid` error (can happen when working with large binary data) in `Dataset.from_generator`, reduce the value of `writer_batch_size` (the default is 1000) to fix it.",
"One issue here is that Dataset.from_generator can work well for the non 'infinite sampling' version of the dataset. The training set for example is often sampled dynamically given the video files that I have uploaded. I worry that storing the video data as binary means that I'll end up duplicating a lot of the data. Furthermore, storing video data as anything but .mp4 would quickly make the dataset size from 1.9TB to 1PB. ",
"> storing video data as anything but .mp4\r\n\r\nWhat I mean by storing as `datasets.Value(\"binary\")` is embedding raw MP4 bytes in the Arrow table, but, indeed, this would waste a lot of space if there are duplicates.\r\n\r\nSo I see two options:\r\n* if one video is not mapped to too many samples, you can embed the video bytes and do \"group by\" on the rest of the columns (this would turn them into lists) to avoid duplicating them (then, it should be easy to define a `map` in the README that samples the video data to \"unpack\" the samples)\r\n* you can create a dataset script that downloads the video files and embeds their file paths into the Arrow file\r\n\r\nAlso, I misread MP4 as MP3. We need to add a `Video` feature to the `datasets` lib to support MP4 files in the viewer (a bit trickier to implement than the `Image` feature due to the Arrow limitations).",
"I'm transferring this issue to the `datasets` repo, as it's not related to `huggingface_hub`",
"@mariosasko Right. If I want my dataset to be streamable, what are the necessary requirements to achieve that within the context of .mp4 binaries like we have here? I guess your second point here would not support that right?",
"The streaming would work, but the video paths would require using `fsspec.open` to get the content.",
"Are there any plans to make video playable on the hub?",
"Not yet. The (open source) tooling for video is not great in terms of ease of use/performance, so we are discussing internally the best way to support it (one option is creating a new library for video IO, but this will require a lot of work)",
"True. I spend a good 4 months just mixing and matching existing solutions so I could get performance that would not IO bound my model training. \r\n\r\nThis is what I ended up with, in case it's useful\r\n\r\nhttps://github.com/AntreasAntoniou/TALI/blob/045cf9e5aa75b1bf2c6d5351fb910fa10e3ff32c/tali/data/data_plus.py#L85"
] | 2023-05-22T18:05:26 | 2023-06-23T03:37:16 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
I recently chose to use huggingface hub as the home for a large multi modal dataset I've been building. https://huggingface.co/datasets/Antreas/TALI
It combines images, text, audio and video. Now, I could very easily upload a dataset made via datasets.Dataset.from_generator, as long as it did not include video files. I found that including .mp4 files in the entries would not auto-upload those files.
Hence I tried to upload them myself. I quickly found out that uploading many small files is a very bad way to use git lfs, and that it would take ages, so, I resorted to using 7z to pack them all up. But then I had a new problem.
My dataset had a size of 1.9TB. Trying to upload such a large file with the default huggingface_hub API always resulted in time outs etc. So I decided to split the large files into chunks of 5GB each and reupload.
So, eventually it all worked out. But now the dataset can't be properly and natively used by the datasets API because of all the needed preprocessing -- and furthermore the hub is unable to visualize things.
**Describe the solution you'd like**
A native way to upload large datasets that include .mp4 or other video types.
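For illustration, a minimal sketch of the interim approach suggested in the comments above — embedding raw MP4 bytes as `datasets.Value("binary")` via `Dataset.from_generator` and then `push_to_hub` (the paths and repo id below are hypothetical):

```python
from datasets import Dataset, Features, Value

video_paths = ["clips/0001.mp4", "clips/0002.mp4"]  # hypothetical local files

def gen():
    for path in video_paths:
        with open(path, "rb") as f:
            yield {"video": f.read(), "path": path}

features = Features({"video": Value("binary"), "path": Value("string")})
# small writer_batch_size since each row carries a large binary blob
ds = Dataset.from_generator(gen, features=features, writer_batch_size=10)
ds.push_to_hub("user/my-video-dataset")  # restart to resume if the upload times out
```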
**Describe alternatives you've considered**
Already explained earlier
**Additional context**
https://huggingface.co/datasets/Antreas/TALI
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5888/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5884/comments | https://api.github.com/repos/huggingface/datasets/issues/5884/events | https://github.com/huggingface/datasets/issues/5884 | 1,719,548,172 | I_kwDODunzps5mfjkM | 5,884 | `Dataset.to_tf_dataset` fails when strings cannot be encoded as `np.bytes_` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"May eventually be solved in #5883 ",
"#self-assign"
] | 2023-05-22T12:03:06 | 2023-06-09T16:04:56 | 2023-06-09T16:04:55 | CONTRIBUTOR | null | null | null | ### Describe the bug
When loading any dataset that contains a column with strings that are not ASCII-compatible, looping over those records raises the following exception, e.g. for the `é` character: `UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)`.
### Steps to reproduce the bug
Running the following script will eventually fail when reaching a batch that contains non-ASCII-compatible strings.
```python
from datasets import load_dataset
ds = load_dataset("imdb", split="train")
tfds = ds.to_tf_dataset(batch_size=16)
for batch in tfds:
print(batch)
>>> UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)
```
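The failure can be reproduced at the `numpy` level, independently of TensorFlow, since casting non-ASCII text to `np.bytes_` attempts an ASCII encoding (the same check appears in the #5883 discussion):

```python
import numpy as np

np.array(["é"]).astype(np.bytes_)
# UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' ...

np.array(["é"]).astype(np.unicode_)  # works: dtype '<U1', no lossy encoding
```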
### Expected behavior
The following script should run properly, making sure that the strings are either `numpy.unicode_` or `numpy.string` instead of `numpy.bytes_`, since some characters are not ASCII-compatible and that would lead to an issue when applying the `map`.
```python
from datasets import load_dataset
ds = load_dataset("imdb", split="train")
tfds = ds.to_tf_dataset(batch_size=16)
for batch in tfds:
print(batch)
```
### Environment info
- `datasets` version: 2.12.1.dev0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5884/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5883/comments | https://api.github.com/repos/huggingface/datasets/issues/5883/events | https://github.com/huggingface/datasets/pull/5883 | 1,719,527,597 | PR_kwDODunzps5RAkYi | 5,883 | Fix string-encoding, make `batch_size` optional, and minor improvements in `Dataset.to_tf_dataset` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"To showcase the current issue, here's a Colab Gist, that shows that the `imdb` dataset cannot be read/iterated, since one or more samples contain a non-ascii character that is being converted to `numpy.bytes_`, and so on fails.\r\n\r\nColab Gist at https://gist.github.com/alvarobartt/1746959d1abb9a33e0c593f3bd82a2fb\r\n\r\nAlso, here's a quick sample of what's happening:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"imdb\", split=\"train\")\r\ntfds = ds.to_tf_dataset(batch_size=16)\r\nfor batch in tfds:\r\n print(batch)\r\n>>> UnicodeEncodeError: 'ascii' codec can't encode character '\\xe9' in position 0: ordinal not in range(128)\r\n```\r\n\r\nA more detailed version of it:\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"a\": [1],\r\n \"b\": [\"é\"],\r\n }\r\n)\r\ntfds = ds.to_tf_dataset(batch_size=1)\r\nfor batch in tfds:\r\n print(batch)\r\n>>> UnicodeEncodeError: 'ascii' codec can't encode character '\\xe9' in position 0: ordinal not in range(128)\r\n```\r\n\r\nThe original issue comes from https://github.com/tensorflow/tensorflow/blob/388d952114e59a1aeda440ed4737b29f8b7c6e8a/tensorflow/python/ops/script_ops.py#LL234C4-L234C4, which could easily be solved by replacing that line with `return result.astype(np.unicode_)` but they are mentioning that it may lead to issues.\r\n\r\nEven the following fails in `numpy`:\r\n\r\n```python\r\nimport numpy as np\r\n\r\nx = np.array([\"é\"]).astype(np.bytes_)\r\n```",
"cc. @lhoestq :hugs:",
"cc @Rocketknight1 ",
"> Nice ! Could you add some tests to make sure that batch_size=None works as expected ?\r\n\r\nSure, I'll add the tests for everything, including the string-encoding issue to make sure it's solved!",
"Thanks for the review @lhoestq and @Rocketknight1! I do understand that processing it in batches is always more efficient than processing it one-by-one, it was just to make `batch_size` optional. What we can do is default it to a certain batch size e.g. 16 as before, and that's it, but I think it can still remain optional.",
"@Rocketknight1 then I'll add the integration tests for the optional `batch_size` as well as for the encoding of non-ASCII compatible characters 😄 Do we set the default `batch_size` to 16 instead of `None`?",
"@alvarobartt I think 16 is a reasonable default, yep!",
"I think default should be None, not 16.\r\nUsers won't expect to have it batched by default.",
"Then I'll leave it as is, and add the unit/integration tests, thanks @Rocketknight1 and @lhoestq ",
"Hi @Rocketknight1 @lhoestq! So the string-encoding issue is already solved, but I've got one doubt about the `batch_size` being optional in the multiprocessing approach, since in that case I assume the `batch_size` should be mandatory, for the moment I'm assuming it is/should be mandatory, but let me know if you want me to add a check to disallow `batch_size=None` when `num_workers>1`. Thanks!",
"> To showcase the current issue, here's a Colab Gist, that shows that the `imdb` dataset cannot be read/iterated, since one or more samples contain a non-ascii character that is being converted to `numpy.bytes_`, and so on fails.\r\n> \r\n> Colab Gist at https://gist.github.com/alvarobartt/1746959d1abb9a33e0c593f3bd82a2fb\r\n\r\nI've used the Colab shared above for testing purposes, and it works fine, plus the unit/integration tests are passing. I've also trained a `KerasNLP` model with incoming data from 🤗`datasets` with no issue at all!",
"> in the multiprocessing approach, since in that case I assume the batch_size should be mandatory,\r\n\r\nNo I think they're quite orthogonal, no need to have it mandatory",
"> No I think they're quite orthogonal, no need to have it mandatory\r\n\r\nBut it will break if `batch_size=None` as the multiprocessing approach will aim to prepare batches and distribute those to every worker, and assuming `batch_size=1` when `batch_size=None` I guess is not a good assumption, right?",
"Ah I see. Multiprocessing should support batch_size=None indeed. If you have ideas you can do it in this PR, or raise a NotImplementedError and we can see later",
"Sure @lhoestq, I can add a `NotImplementedError` for the moment, and prepare the next PR straight-away to tackle the multiprocessing approach with `batch_size=None`, but not sure if that may eventually collide with @Rocketknight1 PR at https://github.com/huggingface/datasets/pull/5863",
"Yes, let me merge the PR at #5863 after this one, and then we can open another to improve the behaviour with multiprocessing and `batch_size=None`!",
"Sure @Rocketknight1 makes complete sense to me! Do you want me to add the `raise NotImplementedError` and then we merge this PR? Or you prefer to directly merge the current?",
"`raise NotImplementedError` for now with an error telling the user that multiprocessing needs them to specify a batch size, I think!",
"Since you recently approved @Rocketknight1, are we ready to merge? Thanks 🤗",
"Ah actually it looks like `minimal_tf_collate_fn` doesn't support batch_size=None",
"Hi @lhoestq so I didn't include the call to `collate_fn`, as we won't need to collate the incoming data e.g. \"str\" should remain a \"str\" not a [\"str\"], and the `minimal_collate_fn` was indeed putting everything into a list, so the output was not un-batched, but batched with size 1",
"What if the user passes a collate_fn ? The torch DataLoader still applies it if batch_size=None for example.\r\n\r\nDoes my last change look of to you ? If so I think we can merge",
"> What if the user passes a collate_fn ? The torch DataLoader still applies it if batch_size=None for example.\r\n> \r\n> Does my last change look of to you ? If so I think we can merge\r\n\r\nI think we're good, since it won't batch it under the scenario of `str` being provided instead of `List[str]`, and the unit/integration tests are passing, so I'm OK to merge. Maybe we can double check with Matt? cc @Rocketknight1 ",
"Yes, and sorry for the delay! I'm happy to merge.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006555 / 0.011353 (-0.004798) | 0.004521 / 0.011008 (-0.006487) | 0.096633 / 0.038508 (0.058125) | 0.032859 / 0.023109 (0.009750) | 0.294632 / 0.275898 (0.018734) | 0.325140 / 0.323480 (0.001660) | 0.005676 / 0.007986 (-0.002310) | 0.005252 / 0.004328 (0.000924) | 0.074349 / 0.004250 (0.070099) | 0.045836 / 0.037052 (0.008784) | 0.302919 / 0.258489 (0.044430) | 0.340686 / 0.293841 (0.046845) | 0.028398 / 0.128546 (-0.100148) | 0.008942 / 0.075646 (-0.066704) | 0.326994 / 0.419271 (-0.092278) | 0.049556 / 0.043533 (0.006023) | 0.293883 / 0.255139 (0.038744) | 0.316522 / 0.283200 (0.033322) | 0.097385 / 0.141683 (-0.044298) | 1.405334 / 1.452155 (-0.046821) | 1.521529 / 1.492716 (0.028812) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212269 / 0.018006 (0.194263) | 0.445692 / 0.000490 (0.445203) | 0.004930 / 0.000200 (0.004730) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026907 / 0.037411 (-0.010504) | 0.108607 / 0.014526 (0.094081) | 0.116806 / 0.176557 (-0.059751) | 0.178428 / 0.737135 (-0.558707) | 0.122326 / 0.296338 (-0.174012) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404211 / 0.215209 (0.189002) | 4.045374 / 2.077655 (1.967719) | 1.877237 / 1.504120 (0.373117) | 1.706276 / 1.541195 (0.165081) | 1.750610 / 1.468490 
(0.282120) | 0.522331 / 4.584777 (-4.062446) | 3.742286 / 3.745712 (-0.003426) | 1.791285 / 5.269862 (-3.478577) | 1.043872 / 4.565676 (-3.521805) | 0.065176 / 0.424275 (-0.359099) | 0.011821 / 0.007607 (0.004214) | 0.507374 / 0.226044 (0.281329) | 5.088803 / 2.268929 (2.819875) | 2.282742 / 55.444624 (-53.161882) | 1.950737 / 6.876477 (-4.925740) | 2.042262 / 2.142072 (-0.099810) | 0.636525 / 4.805227 (-4.168702) | 0.140837 / 6.500664 (-6.359827) | 0.063223 / 0.075469 (-0.012246) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188070 / 1.841788 (-0.653718) | 14.622681 / 8.074308 (6.548372) | 13.247988 / 10.191392 (3.056596) | 0.165858 / 0.680424 (-0.514566) | 0.017476 / 0.534201 (-0.516725) | 0.391973 / 0.579283 (-0.187310) | 0.433326 / 0.434364 (-0.001038) | 0.467163 / 0.540337 (-0.073175) | 0.568359 / 1.386936 (-0.818577) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006076 / 0.011353 (-0.005276) | 0.004439 / 0.011008 (-0.006570) | 0.074496 / 0.038508 (0.035988) | 0.031396 / 0.023109 (0.008287) | 0.372237 / 0.275898 (0.096339) | 0.403412 / 0.323480 (0.079932) | 0.005430 / 0.007986 (-0.002555) | 0.003846 / 0.004328 (-0.000483) | 0.074403 / 0.004250 (0.070153) | 0.045398 / 0.037052 (0.008346) | 0.394133 / 0.258489 (0.135644) | 0.421769 / 0.293841 (0.127928) | 0.027936 / 0.128546 (-0.100610) | 0.008962 / 0.075646 (-0.066685) | 0.083158 / 0.419271 (-0.336113) | 0.044863 / 0.043533 (0.001331) | 0.393834 / 0.255139 (0.138695) | 0.391537 / 0.283200 (0.108337) | 0.097971 / 0.141683 (-0.043712) | 1.496632 / 1.452155 (0.044477) | 1.585511 / 1.492716 (0.092795) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010094 / 0.018006 (-0.007913) | 0.437811 / 0.000490 (0.437321) | 0.000963 / 0.000200 (0.000763) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028864 / 0.037411 (-0.008547) | 0.112480 / 0.014526 (0.097954) | 0.120938 / 0.176557 (-0.055619) | 0.170888 / 0.737135 (-0.566247) | 0.125903 / 0.296338 (-0.170435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426716 / 0.215209 (0.211507) | 4.238380 / 2.077655 (2.160725) | 2.052889 / 1.504120 (0.548769) | 1.871043 / 1.541195 (0.329848) | 1.890405 / 1.468490 (0.421915) | 0.522059 / 4.584777 (-4.062718) | 3.813331 / 3.745712 (0.067619) | 2.891651 / 5.269862 (-2.378210) | 1.323836 / 4.565676 (-3.241841) | 0.065124 / 0.424275 (-0.359151) | 0.011498 / 0.007607 (0.003891) | 0.525102 / 0.226044 (0.299057) | 5.245190 / 2.268929 (2.976261) | 2.531149 / 55.444624 (-52.913476) | 2.197323 / 6.876477 (-4.679153) | 2.197314 / 2.142072 (0.055241) | 0.633423 / 4.805227 (-4.171804) | 0.140248 / 6.500664 (-6.360416) | 0.064432 / 0.075469 (-0.011037) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270639 / 1.841788 (-0.571149) | 14.856678 / 8.074308 (6.782369) | 14.337631 / 10.191392 (4.146239) | 0.195319 / 0.680424 (-0.485105) | 0.017628 / 0.534201 (-0.516573) | 0.393984 / 0.579283 (-0.185299) | 0.421987 / 0.434364 (-0.012376) | 0.459245 / 0.540337 (-0.081092) | 0.557786 / 1.386936 (-0.829150) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a129219a48c1b07c06d4bc1db32c317bf513089d \"CML watermark\")\n",
"Will you eventually need help with your PR @Rocketknight1? I'll be happy to help if needed 😄 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007577 / 0.011353 (-0.003776) | 0.004960 / 0.011008 (-0.006048) | 0.113622 / 0.038508 (0.075114) | 0.037981 / 0.023109 (0.014872) | 0.355312 / 0.275898 (0.079414) | 0.393384 / 0.323480 (0.069904) | 0.006575 / 0.007986 (-0.001411) | 0.005941 / 0.004328 (0.001612) | 0.085976 / 0.004250 (0.081726) | 0.053784 / 0.037052 (0.016732) | 0.369358 / 0.258489 (0.110869) | 0.399402 / 0.293841 (0.105561) | 0.032155 / 0.128546 (-0.096391) | 0.010448 / 0.075646 (-0.065199) | 0.389009 / 0.419271 (-0.030263) | 0.057377 / 0.043533 (0.013844) | 0.354968 / 0.255139 (0.099829) | 0.382404 / 0.283200 (0.099204) | 0.111056 / 0.141683 (-0.030627) | 1.807986 / 1.452155 (0.355832) | 1.866070 / 1.492716 (0.373354) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244439 / 0.018006 (0.226432) | 0.491942 / 0.000490 (0.491452) | 0.001910 / 0.000200 (0.001710) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031024 / 0.037411 (-0.006387) | 0.129674 / 0.014526 (0.115148) | 0.142974 / 0.176557 (-0.033583) | 0.213568 / 0.737135 (-0.523568) | 0.147794 / 0.296338 (-0.148545) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.480333 / 0.215209 (0.265124) | 4.792901 / 2.077655 (2.715246) | 2.233145 / 1.504120 (0.729025) | 2.036291 / 1.541195 (0.495096) | 2.109631 / 1.468490 
(0.641140) | 0.624546 / 4.584777 (-3.960231) | 4.543511 / 3.745712 (0.797799) | 3.961345 / 5.269862 (-1.308517) | 1.903634 / 4.565676 (-2.662042) | 0.076584 / 0.424275 (-0.347691) | 0.014590 / 0.007607 (0.006983) | 0.593195 / 0.226044 (0.367151) | 5.928740 / 2.268929 (3.659811) | 2.781164 / 55.444624 (-52.663460) | 2.364303 / 6.876477 (-4.512173) | 2.510139 / 2.142072 (0.368067) | 0.770886 / 4.805227 (-4.034341) | 0.167995 / 6.500664 (-6.332669) | 0.076622 / 0.075469 (0.001153) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.402398 / 1.841788 (-0.439390) | 17.921233 / 8.074308 (9.846925) | 17.036738 / 10.191392 (6.845346) | 0.168997 / 0.680424 (-0.511427) | 0.020259 / 0.534201 (-0.513941) | 0.465322 / 0.579283 (-0.113962) | 0.500435 / 0.434364 (0.066071) | 0.546846 / 0.540337 (0.006509) | 0.658130 / 1.386936 (-0.728806) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007624 / 0.011353 (-0.003729) | 0.005265 / 0.011008 (-0.005744) | 0.086886 / 0.038508 (0.048377) | 0.038235 / 0.023109 (0.015126) | 0.463969 / 0.275898 (0.188071) | 0.502451 / 0.323480 (0.178971) | 0.006285 / 0.007986 (-0.001701) | 0.004525 / 0.004328 (0.000197) | 0.086557 / 0.004250 (0.082307) | 0.052414 / 0.037052 (0.015362) | 0.482167 / 0.258489 (0.223678) | 0.513684 / 0.293841 (0.219843) | 0.032929 / 0.128546 (-0.095618) | 0.010249 / 0.075646 (-0.065397) | 0.093377 / 0.419271 (-0.325895) | 0.054114 / 0.043533 (0.010582) | 0.466116 / 0.255139 (0.210977) | 0.488977 / 0.283200 (0.205777) | 0.115446 / 0.141683 (-0.026237) | 1.762912 / 1.452155 (0.310757) | 1.874191 / 1.492716 (0.381475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012666 / 0.018006 (-0.005341) | 0.485962 / 0.000490 (0.485473) | 0.002621 / 0.000200 (0.002421) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033661 / 0.037411 (-0.003751) | 0.135395 / 0.014526 (0.120869) | 0.147230 / 0.176557 (-0.029326) | 0.205847 / 0.737135 (-0.531288) | 0.151496 / 0.296338 (-0.144842) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.514097 / 0.215209 (0.298887) | 5.134093 / 2.077655 (3.056438) | 2.496775 / 1.504120 (0.992655) | 2.268078 / 1.541195 (0.726883) | 2.342153 / 1.468490 (0.873663) | 0.623130 / 4.584777 (-3.961647) | 4.601787 / 3.745712 (0.856075) | 3.414249 / 5.269862 (-1.855613) | 1.849603 / 4.565676 (-2.716073) | 0.078350 / 0.424275 (-0.345925) | 0.013785 / 0.007607 (0.006178) | 0.638783 / 0.226044 (0.412739) | 6.378356 / 2.268929 (4.109427) | 3.072867 / 55.444624 (-52.371757) | 2.668123 / 6.876477 (-4.208354) | 2.693905 / 2.142072 (0.551833) | 0.764583 / 4.805227 (-4.040644) | 0.166854 / 6.500664 (-6.333810) | 0.076883 / 0.075469 (0.001414) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502003 / 1.841788 (-0.339784) | 18.674205 / 8.074308 (10.599897) | 16.837759 / 10.191392 (6.646367) | 0.176995 / 0.680424 (-0.503428) | 0.020126 / 0.534201 (-0.514075) | 0.464480 / 0.579283 (-0.114803) | 0.516477 / 0.434364 (0.082113) | 0.549818 / 0.540337 (0.009481) | 0.659927 / 1.386936 (-0.727009) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a129219a48c1b07c06d4bc1db32c317bf513089d \"CML watermark\")\n",
"@alvarobartt Yes, I'll ping you for a review once it's ready!"
] | 2023-05-22T11:51:07 | 2023-06-08T11:09:03 | 2023-06-06T16:49:15 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5883",
"html_url": "https://github.com/huggingface/datasets/pull/5883",
"diff_url": "https://github.com/huggingface/datasets/pull/5883.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5883.patch",
"merged_at": "2023-06-06T16:49:15"
} | ## What's in this PR?
This PR addresses some minor fixes and general improvements in the `to_tf_dataset` method of `datasets.Dataset`, which converts a 🤗 HuggingFace Dataset into a TensorFlow Dataset.
The main bug solved in this PR concerns string encoding: for safety purposes, the internal conversion of `numpy.array`s whose `dtype` is unicode/string is to cast them into `numpy.bytes_`; more information in the docstring of https://github.com/tensorflow/tensorflow/blob/388d952114e59a1aeda440ed4737b29f8b7c6e8a/tensorflow/python/ops/script_ops.py#L210. This is triggered when using `tensorflow.numpy_function`, as it applies another type cast besides the one that `datasets` does, so the cast is applied at least twice per entry/batch. This means that the `numpy.unicode_` dtype declared when the data in the batch is a string is ignored and replaced by `numpy.bytes_`.
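To illustrate the double cast (a minimal sketch, not part of the PR; the example string is arbitrary):
```python
import numpy as np
import tensorflow as tf

def fetch(i):
    # the array is declared as unicode/str here
    return np.array(["café"], dtype=np.str_)

# tf.numpy_function applies its own "safe" cast on top of the one above,
# so the unicode dtype is dropped and the value comes back as bytes
out = tf.numpy_function(fetch, [tf.constant(0)], Tout=tf.string)
print(out)  # tf.Tensor([b'caf\xc3\xa9'], shape=(1,), dtype=string)
```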
Besides that, some other minor things have been fixed:
* Made `batch_size` an optional parameter in `to_tf_dataset` (see the sketch after this list)
* Map the `tensorflow` output dtypes just once, and not in every `tf.function` call during `map`
* Keep `numpy` formatting in the `datasets.Dataset` if already formatted like it, no need to format it again as `numpy`
* Docstring indentation in `dataset_to_tf` and `multiprocess_dataset_to_tf`
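As a rough usage sketch of the optional `batch_size` (assuming `batch_size=None` ends up meaning "no batching"; the toy dataset and column names are illustrative):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "café"], "label": [0, 1]})
# batch_size=None should yield single unbatched examples rather than fixed-size batches
tf_ds = ds.to_tf_dataset(columns=["text"], label_cols=["label"], batch_size=None)
for text, label in tf_ds.take(2):
    print(text, label)
```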
## What's missing in this PR?
I can include some integration tests if needed, to validate that `batch_size` is optional, and that the tensors in the TF-Dataset can be looped over with no issues as before. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5883/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5881/comments | https://api.github.com/repos/huggingface/datasets/issues/5881/events | https://github.com/huggingface/datasets/issues/5881 | 1,719,402,643 | I_kwDODunzps5mfACT | 5,881 | Split dataset by node: index error when sharding iterable dataset | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"cc @lhoestq in case you have any ideas here! Might need a multi-host set-up to debug (can give you access to a JAX one if you need)"
] | 2023-05-22T10:36:13 | 2023-05-23T08:32:14 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
Context: we're splitting an iterable dataset by node and then passing it to a torch DataLoader with multiple workers.
When we iterate over it for 5 steps, we don't get an error.
When we instead iterate over it for 8 steps, we get an `IndexError` when fetching the data if we have too many workers.
### Steps to reproduce the bug
Here, we have 2 JAX processes (`jax.process_count() = 2`) which we split the dataset over. The dataset loading script can be found here: https://huggingface.co/datasets/distil-whisper/librispeech_asr/blob/c6a1e805cbfeed5057400ac5937327d7e30281b8/librispeech_asr.py#L310
<details>
<summary> Code to reproduce </summary>
```python
from datasets import load_dataset
import jax
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader
from tqdm import tqdm
# load an example dataset (https://huggingface.co/datasets/distil-whisper/librispeech_asr)
dataset = load_dataset("distil-whisper/librispeech_asr", "all", split="train.clean.100", streaming=True)
# just keep the text column -> no need to define a collator
dataset_text = dataset.remove_columns(set(dataset.features.keys()) - {"text"})
# define some constants
batch_size = 256
num_examples = 5 # works for 5 examples, doesn't for 8
num_workers = dataset_text.n_shards
# try with multiple workers
dataloader = DataLoader(dataset_text, batch_size=batch_size, num_workers=num_workers, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Multiple workers"):
if i == num_examples:
break
# try splitting by node (we can't do this with `dataset_text` since `split_dataset_by_node` expects the Audio column for an ASR dataset)
dataset = split_dataset_by_node(dataset, rank=jax.process_index(), world_size=jax.process_count())
# remove the text column again
dataset_text = dataset.remove_columns(set(dataset.features.keys()) - {"text"})
dataloader = DataLoader(dataset_text, batch_size=16, num_workers=num_workers // 2, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Split by node"):
if i == num_examples:
break
# too many workers
dataloader = DataLoader(dataset_text, batch_size=256, num_workers=num_workers, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Too many workers"):
if i == num_examples:
break
```
</details>
<details>
<summary> With 5 examples: </summary>
```
Multiple workers: 100%|███████████████████████████████████████████████████████████████████| 5/5 [00:16<00:00, 3.33s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Split by node: 100%|██████████████████████████████████████████████████████████████████████| 5/5 [00:13<00:00, 2.76s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Too many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more files than 7.
Too many workers: 100%|███████████████████████████████████████████████████████████████████| 5/5 [00:15<00:00, 3.03s/it]
```
</details>
<details>
<summary> With 8 examples: </summary>
```
Multiple workers: 100%|███████████████████████████████████████████████████████████████████| 8/8 [00:13<00:00, 1.71s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Split by node: 100%|██████████████████████████████████████████████████████████████████████| 8/8 [00:11<00:00, 1.38s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Too many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more files than 7.
Too many workers: 88%|██████████████████████████████████████████████████████████▋ | 7/8 [00:13<00:01, 1.89s/it]
Traceback (most recent call last):
File "distil-whisper/test_librispeech.py", line 36, in <module>
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Too many workers"):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/tqdm/std.py", line 1178, in __iter__
for obj in iterable:
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data
return self._process_data(data)
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/_utils.py", line 644, in reraise
raise exception
IndexError: Caught IndexError in DataLoader worker process 7.
Original Traceback (most recent call last):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 986, in __iter__
yield from self._iter_pytorch(ex_iterable)
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 920, in _iter_pytorch
for key, example in ex_iterable.shard_data_sources(worker_info.id, worker_info.num_workers):
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 540, in shard_data_sources
self.ex_iterable.shard_data_sources(worker_id, num_workers),
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 796, in shard_data_sources
self.ex_iterable.shard_data_sources(worker_id, num_workers),
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 126, in shard_data_sources
requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices])
File "/home/sanchitgandhi/datasets/src/datasets/utils/sharding.py", line 76, in _merge_gen_kwargs
for key in gen_kwargs_list[0]
IndexError: list index out of range
```
</details>
### Expected behavior
Should pass for both 5 and 8 examples
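In the meantime, a workaround sketch that avoids the crash for us (assuming the failure only occurs when the requested worker count exceeds the per-node shard count, as in the runs above):
```python
import jax
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader

dataset = load_dataset(
    "distil-whisper/librispeech_asr", "all", split="train.clean.100", streaming=True
)
dataset = split_dataset_by_node(dataset, rank=jax.process_index(), world_size=jax.process_count())

# after the split, this node only holds `dataset.n_shards` shards, so cap the
# worker count to keep every worker supplied with at least one shard
num_workers = min(dataset.n_shards, 8)
dataloader = DataLoader(dataset, batch_size=256, num_workers=num_workers, drop_last=True)
```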
### Environment info
- `datasets` version: 2.12.1.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5881/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5880/comments | https://api.github.com/repos/huggingface/datasets/issues/5880/events | https://github.com/huggingface/datasets/issues/5880 | 1,719,090,101 | I_kwDODunzps5mdzu1 | 5,880 | load_dataset from s3 file system through streaming can't not iterate data | {
"login": "janineguo",
"id": 59083384,
"node_id": "MDQ6VXNlcjU5MDgzMzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/59083384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/janineguo",
"html_url": "https://github.com/janineguo",
"followers_url": "https://api.github.com/users/janineguo/followers",
"following_url": "https://api.github.com/users/janineguo/following{/other_user}",
"gists_url": "https://api.github.com/users/janineguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/janineguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janineguo/subscriptions",
"organizations_url": "https://api.github.com/users/janineguo/orgs",
"repos_url": "https://api.github.com/users/janineguo/repos",
"events_url": "https://api.github.com/users/janineguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/janineguo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"This sounds related to #5281.\r\n\r\nCan you try passing `storage_options=s3_client.storage_options` instead of passing it to `use_auth_token=`?",
"I tried `storage_options` before, but it doesn't work. I checked our source code and found that we never pass this parameter on to the subsequent processing. If I use `storage_options` instead of `use_auth_token`, then I also need to change another place in the code: the last line of `streaming_download_manager.py`. Our code only passes `use_auth_token` to the following handler, but does nothing with the `storage_options`\r\n<img width=\"1050\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/59083384/5be90933-3331-4ecf-9e11-34f9852d8f92\">\r\n",
"Cloud storage support is still experimental indeed and you can expect some bugs.\r\n\r\nI think we need to pass the storage options anywhere use_auth_token is passed in indeed. Let me know if you'd be interested in contributing a fix!",
"Oh, that's great, I would really like to fix it, because `datasets` is really useful and most of our projects need to use it, but we can't store our data on the public internet due to security reasons. Fixing it will not only make our own work more efficient but also benefit others who use it."
] | 2023-05-22T07:40:27 | 2023-05-26T12:52:08 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
I have a JSON file in my S3 file system (MinIO). I can use `load_dataset` to get the file link, but I can't iterate over it.
<img width="816" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/cc0778d3-36f3-45b5-ac68-4e7c664c2ed0">
<img width="1144" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/76872af3-8b3c-42ff-9f55-528c920a7af1">
We can change 4 lines to fix this bug; please check whether this change is OK.
<img width="941" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/5a22155a-ece7-496c-8506-047e5c235cd3">
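For reference, the behavior we are trying to get working is roughly the following (bucket, endpoint, and credentials are placeholders; this assumes `load_dataset`'s `storage_options` argument is the intended way to pass the filesystem configuration):
```python
from datasets import load_dataset

# placeholder MinIO/S3 configuration
storage_options = {
    "key": "<access_key>",
    "secret": "<secret_key>",
    "client_kwargs": {"endpoint_url": "http://minio.example.com:9000"},
}
ds = load_dataset(
    "json",
    data_files="s3://my-bucket/data.json",
    streaming=True,
    storage_options=storage_options,
)
for example in ds["train"]:  # should yield dicts instead of failing
    print(example)
    break
```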
### Steps to reproduce the bug
1. Store a file in your S3 file system.
2. Use `load_dataset` to read it through streaming.
3. Iterate over it.
### Expected behavior
The dataset can be iterated over successfully.
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5880/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/5880/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5878/comments | https://api.github.com/repos/huggingface/datasets/issues/5878/events | https://github.com/huggingface/datasets/issues/5878 | 1,718,203,843 | I_kwDODunzps5mabXD | 5,878 | Prefetching for IterableDataset | {
"login": "vyeevani",
"id": 30946190,
"node_id": "MDQ6VXNlcjMwOTQ2MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/30946190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vyeevani",
"html_url": "https://github.com/vyeevani",
"followers_url": "https://api.github.com/users/vyeevani/followers",
"following_url": "https://api.github.com/users/vyeevani/following{/other_user}",
"gists_url": "https://api.github.com/users/vyeevani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vyeevani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyeevani/subscriptions",
"organizations_url": "https://api.github.com/users/vyeevani/orgs",
"repos_url": "https://api.github.com/users/vyeevani/repos",
"events_url": "https://api.github.com/users/vyeevani/events{/privacy}",
"received_events_url": "https://api.github.com/users/vyeevani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Very cool! Do you have a link to the code that you're using to eagerly fetch the data? Would also be interested in hacking around something here for pre-fetching iterable datasets",
"I ended up just switching back to the pytorch dataloader and using its multiprocessing functionality to handle this :(. I'm just not familiar enough with python multiprocessing to get something to work in jupyter (kept having weird behavior with zombie processes living on after the cell finished).",
"Ultimately settled on using webdataset to circumvent huggingface datasets entirely. Would definitely switch back if: https://github.com/huggingface/datasets/issues/5337 was resolved.",
"Hi! You can combine `datasets` with `torchdata` to prefetch `IterableDataset`'s samples:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torchdata.datapipes.iter import IterableWrapper, HuggingFaceHubReader\r\nfrom torch.utils.data import DataLoader\r\n\r\nds = load_dataset(\"sst\", split=\"train\", streaming=True)\r\n# processing...\r\ndp = IterableWrapper(ds)\r\ndp = dp.prefetch(100)\r\ndl = DataLoader(dp, batch_size=8)\r\n\r\ni = iter(dl)\r\nnext(i)\r\n```",
"Hey @mariosasko! Thanks for the tip here - introducing prefetch with `torchdata` didn't really give me any performance difference vs not prefetching, but the concept is definitely one that could be really beneficial. Are there any benchmarks that show the speed-up you can get with `torchdata`'s prefetch just for comparison?"
] | 2023-05-20T15:25:40 | 2023-06-01T17:40:00 | null | NONE | null | null | null | ### Feature request
Add support for prefetching the next n batches through `IterableDataset` to reduce the batch-loading bottleneck in the training loop.
### Motivation
The primary motivation behind this is to use hardware accelerators alongside a streaming dataset. This is required when you are in a low-RAM or low-disk-space setting, as well as for quick iteration where you're cycling through different accelerator environments (e.g. changing EC2 instances quickly to figure out batches/sec for a particular architecture).
Currently, using the IterableDataset results in accelerators becoming basically useless due to the massive bottleneck induced by the dataset lazy loading/transform/mapping.
I've considered two alternatives:
1. A PyTorch DataLoader that handles this. However, I'm using JAX, and I believe this is a piece of functionality that should live in the stream class.
2. Replicating the "num_workers" part of the PyTorch DataLoader to eagerly load batches and apply the transform, so Arrow caching will automatically cache results and make them accessible (see the sketch after this list).
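For concreteness, a minimal sketch of the kind of prefetching wrapper I have in mind (a background thread filling a bounded queue; this is illustrative, not a proposed implementation):
```python
import threading
from queue import Queue

def prefetch(iterable, buffer_size=8):
    """Eagerly pull items from `iterable` in a background thread."""
    queue = Queue(maxsize=buffer_size)
    sentinel = object()  # marks the end of the underlying iterable

    def producer():
        for item in iterable:
            queue.put(item)
        queue.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = queue.get()
        if item is sentinel:
            return
        yield item

# usage sketch: batches are fetched/transformed while the accelerator
# works on the current one, e.g. for batch in prefetch(iter(streaming_ds)): ...
```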
### Your contribution
I may or may not have time to do this. Currently, I've written a basic multiprocessing approach to handle the eager DataLoader for my own use case, with code that's not integrated into `datasets`. I'd definitely see this as being the default over the regular Dataset for most people, given that they wouldn't have to wait on the dataset while also not worrying about performance. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5878/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5878/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5877/comments | https://api.github.com/repos/huggingface/datasets/issues/5877/events | https://github.com/huggingface/datasets/issues/5877 | 1,717,983,961 | I_kwDODunzps5mZlrZ | 5,877 | Request for text deduplication feature | {
"login": "SupreethRao99",
"id": 55043035,
"node_id": "MDQ6VXNlcjU1MDQzMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/55043035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SupreethRao99",
"html_url": "https://github.com/SupreethRao99",
"followers_url": "https://api.github.com/users/SupreethRao99/followers",
"following_url": "https://api.github.com/users/SupreethRao99/following{/other_user}",
"gists_url": "https://api.github.com/users/SupreethRao99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SupreethRao99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SupreethRao99/subscriptions",
"organizations_url": "https://api.github.com/users/SupreethRao99/orgs",
"repos_url": "https://api.github.com/users/SupreethRao99/repos",
"events_url": "https://api.github.com/users/SupreethRao99/events{/privacy}",
"received_events_url": "https://api.github.com/users/SupreethRao99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"The \"exact match\" deduplication will be possible when we resolve https://github.com/huggingface/datasets/issues/2514 (first, https://github.com/apache/arrow/issues/30950 needs to be addressed on the Arrow side). In the meantime, you can use Polars or DuckDB (e.g., via [datasets-sql](https://github.com/mariosasko/datasets_sql)).\r\n\r\nFuzzy deduplication is out-of-scope for now ([splink](https://github.com/moj-analytical-services/splink) is probably the best tool for it).",
"This library can be an intermediate solution : https://github.com/ChenghaoMou/text-dedup/tree/main",
"I have been using polars to remove duplicates but it would be nice to do it directly in pyarrow.\r\n\r\nFor example,\r\n\r\n1. Read dataset with pyarrow\r\n2. Use scan_pyarrow_dataset() with Polars to create a LazyFrame\r\n3. Use sort and unique to remove duplicates based on a subset of columns\r\n4. Convert to table and save data with ds.write_dataset()\r\n\r\nThere are times where that workflow makes perfect sense because I do additional transformations with Polars. Most of the time I am simply just reading dataset A and writing dataset B without duplicates though, and I wish I could use a pyarrow scanner or table directly. "
] | 2023-05-20T01:56:00 | 2023-07-26T21:42:14 | null | NONE | null | null | null | ### Feature request
It would be great if there were support for high-performance, highly scalable text deduplication algorithms as part of the `datasets` library.
### Motivation
Motivated by this blog post https://huggingface.co/blog/dedup and this library https://github.com/google-research/deduplicate-text-datasets, but slightly frustrated by how it's not very easy to work with these tools, I am proposing this feature.
### Your contribution
I would be happy to contribute to the development of this feature, and would love to collaborate with others on the effort. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5877/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5876/comments | https://api.github.com/repos/huggingface/datasets/issues/5876/events | https://github.com/huggingface/datasets/issues/5876 | 1,717,978,985 | I_kwDODunzps5mZkdp | 5,876 | Incompatibility with DataLab | {
"login": "helpmefindaname",
"id": 26192135,
"node_id": "MDQ6VXNlcjI2MTkyMTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/26192135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helpmefindaname",
"html_url": "https://github.com/helpmefindaname",
"followers_url": "https://api.github.com/users/helpmefindaname/followers",
"following_url": "https://api.github.com/users/helpmefindaname/following{/other_user}",
"gists_url": "https://api.github.com/users/helpmefindaname/gists{/gist_id}",
"starred_url": "https://api.github.com/users/helpmefindaname/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helpmefindaname/subscriptions",
"organizations_url": "https://api.github.com/users/helpmefindaname/orgs",
"repos_url": "https://api.github.com/users/helpmefindaname/repos",
"events_url": "https://api.github.com/users/helpmefindaname/events{/privacy}",
"received_events_url": "https://api.github.com/users/helpmefindaname/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Indeed, `clobber=True` (with a warning if the existing protocol will be overwritten) should fix the issue, but maybe a better solution is to register our compression filesystems before the script is executed and unregister them afterward. WDYT @lhoestq @albertvillanova?",
"I think we should use clobber and show a warning if it overwrote a registered filesystem indeed ! This way the user can re-register the filesystems if needed. Though they should probably be compatible (and maybe do the exact same thing) so I wouldn't de-register the `datasets` filesystems"
] | 2023-05-20T01:39:11 | 2023-05-25T06:42:34 | 2023-05-25T06:42:34 | NONE | null | null | null | ### Describe the bug
Hello,
I am currently working on a project where both [DataLab](https://github.com/ExpressAI/DataLab) and [datasets](https://github.com/huggingface/datasets) are subdependencies.
I noticed that I cannot import both libraries, as they both register FileSystems in `fsspec`, each expecting that those FileSystems have not been registered before.
When running the code below, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\__init__.py", line 28, in <module>
from datalabs.arrow_dataset import concatenate_datasets, Dataset
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_dataset.py", line 60, in <module>
from datalabs.arrow_writer import ArrowWriter, OptimizedTypedSequence
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_writer.py", line 28, in <module>
from datalabs.features import (
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\__init__.py", line 2, in <module>
from datalabs.features.audio import Audio
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\audio.py", line 21, in <module>
from datalabs.utils.streaming_download_manager import xopen
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\utils\streaming_download_manager.py", line 16, in <module>
from datalabs.filesystems import COMPRESSION_FILESYSTEMS
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\filesystems\__init__.py", line 37, in <module>
fsspec.register_implementation(fs_class.protocol, fs_class)
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\fsspec\registry.py", line 51, in register_implementation
raise ValueError(
ValueError: Name (bz2) already in the registry and clobber is False
```
I think a simple solution would be to just set `clobber=True` in https://github.com/huggingface/datasets/blob/main/src/datasets/filesystems/__init__.py#L28. This allows the registry to discard previous registrations. This should work, as the datalabs FileSystems are copies of the datasets FileSystems. However, I don't know if it is guaranteed to be compatible with other libraries that might use the same protocols.
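Concretely, the proposed change is roughly the following (a sketch of the registration loop in `datasets/filesystems/__init__.py`, mirroring the datalabs loop shown in the traceback; `COMPRESSION_FILESYSTEMS` is defined in that module):
```python
import fsspec

# in src/datasets/filesystems/__init__.py
for fs_class in COMPRESSION_FILESYSTEMS:
    # clobber=True overwrites an already-registered protocol (e.g. "bz2")
    # instead of raising ValueError when both libraries register it
    fsspec.register_implementation(fs_class.protocol, fs_class, clobber=True)
```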
I am linking the symmetric issue on [DataLab](https://github.com/ExpressAI/DataLab/issues/425) as ideally the issue is solved in both libraries the same way. Otherwise, it could lead to different behaviors depending on which library gets imported first.
### Steps to reproduce the bug
1. Run `pip install datalabs==0.4.15 datasets==2.12.0`
2. Run the following python code:
```
import datalabs
import datasets
```
### Expected behavior
It should be possible to import both libraries without getting a `ValueError`.
### Environment info
datalabs==0.4.15
datasets==2.12.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5876/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5875/comments | https://api.github.com/repos/huggingface/datasets/issues/5875/events | https://github.com/huggingface/datasets/issues/5875 | 1,716,770,394 | I_kwDODunzps5mU9Za | 5,875 | Why split slicing doesn't behave like list slicing ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | open | false | null | [] | null | [
"A duplicate of https://github.com/huggingface/datasets/issues/1774"
] | 2023-05-19T07:21:10 | 2023-05-23T16:02:14 | null | NONE | null | null | null | ### Describe the bug
If I want to get the first 10 samples of my dataset, I can do:
```
ds = datasets.load_dataset('mnist', split='train[:10]')
```
But if I exceed the number of samples in the dataset, an exception is raised:
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
> ValueError: Requested slice [:999999999] incompatible with 60000 examples.
### Steps to reproduce the bug
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
### Expected behavior
I would expect it to behave like Python lists (no exception raised, the whole list is kept):
```
d = list(range(1000))[:999999]
print(len(d)) # > 1000
```
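In the meantime, a workaround sketch that clamps the slice to the actual split size (assuming the builder metadata exposes the split sizes):
```python
import datasets

builder = datasets.load_dataset_builder("mnist")
n = builder.info.splits["train"].num_examples  # 60000 for mnist
ds = datasets.load_dataset("mnist", split=f"train[:{min(999999999, n)}]")
```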
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5875/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5874/comments | https://api.github.com/repos/huggingface/datasets/issues/5874/events | https://github.com/huggingface/datasets/issues/5874 | 1,715,708,930 | I_kwDODunzps5mQ6QC | 5,874 | Using as_dataset on a "parquet" builder | {
"login": "rems75",
"id": 9039058,
"node_id": "MDQ6VXNlcjkwMzkwNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9039058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rems75",
"html_url": "https://github.com/rems75",
"followers_url": "https://api.github.com/users/rems75/followers",
"following_url": "https://api.github.com/users/rems75/following{/other_user}",
"gists_url": "https://api.github.com/users/rems75/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rems75/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rems75/subscriptions",
"organizations_url": "https://api.github.com/users/rems75/orgs",
"repos_url": "https://api.github.com/users/rems75/repos",
"events_url": "https://api.github.com/users/rems75/events{/privacy}",
"received_events_url": "https://api.github.com/users/rems75/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! You can refer to [this doc](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) to see the intended usage (basically, it skips the Arrow -> Parquet conversion step in `ds = load_dataset(...); ds.to_parquet(\"path/to/parquet\")`) and allows writing Parquet to remote storage unlike `to_parquet`).\r\n\r\n> I guess I'd expect as_dataset to generate the dataset in arrow format if it has to, or to suggest an alternative way to load the dataset (I've also tried other methods with load_dataset to no avail, probably due to misunderstandings on my part).\r\n\r\n`as_dataset` does not work with `file_format=\"parquet\"` files as Parquet files cannot be memory-mapped, so I think we should just raise an error in that case.\r\n"
] | 2023-05-18T14:09:03 | 2023-05-31T13:23:55 | 2023-05-31T13:23:55 | NONE | null | null | null | ### Describe the bug
I used a custom builder to `download_and_prepare` a dataset. The first (very minor) issue is that the doc seems to suggest `download_and_prepare` will return the dataset, while it does not ([builder.py](https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L718-L738)).
```
>>> from datasets import load_dataset_builder
>>> builder = load_dataset_builder("rotten_tomatoes")
>>> ds = builder.download_and_prepare("./output_dir", file_format="parquet")
```
The main issue I am facing is loading the dataset from those parquet files. I used the `as_dataset` method suggested by the doc, however it raises:
```
FileNotFoundError: [Errno 2] Failed to open local file 'output_dir/__main__-train-00000-of-00245.arrow'. Detail:
[errno 2] No such file or directory.
```
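As a point of comparison, loading the produced files directly through the `parquet` packaged builder should presumably work (a sketch; the glob pattern is hypothetical and depends on the actual file names in `output_dir`):
```python
from datasets import load_dataset

ds = load_dataset("parquet", data_files={"train": "output_dir/*-train-*.parquet"})
```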
### Steps to reproduce the bug
1. Create a custom builder of some sort: `builder = CustomBuilder()`.
2. Run `download_and_prepare` with the parquet format: `builder.download_and_prepare("./output_dir", file_format="parquet")`.
3. Run `dataset = builder.as_dataset()`.
### Expected behavior
I guess I'd expect `as_dataset` to generate the dataset in arrow format if it has to, or to suggest an alternative way to load the dataset (I've also tried other methods with `load_dataset` to no avail, probably due to misunderstandings on my part).
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-1027-gcp-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.14.1
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5874/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5873/comments | https://api.github.com/repos/huggingface/datasets/issues/5873/events | https://github.com/huggingface/datasets/issues/5873 | 1,713,269,724 | I_kwDODunzps5mHmvc | 5,873 | Allow setting the environment variable for the lock file path | {
"login": "xin3he",
"id": 83260933,
"node_id": "MDQ6VXNlcjgzMjYwOTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/83260933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xin3he",
"html_url": "https://github.com/xin3he",
"followers_url": "https://api.github.com/users/xin3he/followers",
"following_url": "https://api.github.com/users/xin3he/following{/other_user}",
"gists_url": "https://api.github.com/users/xin3he/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xin3he/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xin3he/subscriptions",
"organizations_url": "https://api.github.com/users/xin3he/orgs",
"repos_url": "https://api.github.com/users/xin3he/repos",
"events_url": "https://api.github.com/users/xin3he/events{/privacy}",
"received_events_url": "https://api.github.com/users/xin3he/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2023-05-17T07:10:02 | 2023-05-17T07:11:05 | null | NONE | null | null | null | ### Feature request
Add an environment variable to replace the default lock file path.
### Motivation
Usually, the dataset path is read-only, while the lock file needs to be modified each time. It would be convenient if the lock file path could be overridden separately.
### Your contribution
```python
# src/datasets/utils/filelock.py
import os

class UnixFileLock(BaseFileLock):
    def __init__(self, lock_file, timeout=-1, max_filename_length=None):
        # ------------------- proposed addition -------------------
        if os.getenv('DS_TMP_PATH'):
            # redirect the lock file into a writable directory
            file_name = str(lock_file).split('/')[-1]
            dataset_tmp_path = os.getenv('DS_TMP_PATH')
            lock_file = os.path.join(dataset_tmp_path, file_name)
        # ----------------------------------------------------------
        max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax
        super().__init__(lock_file, timeout=timeout, max_filename_length=max_filename_length)
```
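Hypothetical usage of the proposed variable (the paths below are made up; it would need to be set before `datasets` creates any locks):
```python
import os

os.environ["DS_TMP_PATH"] = "/tmp/ds_locks"  # writable scratch directory for the lock files

from datasets import load_dataset

# The dataset itself can live on a read-only mount; its lock files now land in /tmp/ds_locks
ds = load_dataset("json", data_files="/readonly/data/train.json")
```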
The patch above is a simple demo of the idea. Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5873/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5872/comments | https://api.github.com/repos/huggingface/datasets/issues/5872/events | https://github.com/huggingface/datasets/pull/5872 | 1,713,174,662 | PR_kwDODunzps5QrQ5o | 5,872 | Fix infer module for uppercase extensions | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007049 / 0.011353 (-0.004304) | 0.005034 / 0.011008 (-0.005974) | 0.097737 / 0.038508 (0.059229) | 0.033280 / 0.023109 (0.010170) | 0.301017 / 0.275898 (0.025119) | 0.336593 / 0.323480 (0.013113) | 0.005567 / 0.007986 (-0.002419) | 0.005384 / 0.004328 (0.001056) | 0.072980 / 0.004250 (0.068730) | 0.045030 / 0.037052 (0.007978) | 0.303280 / 0.258489 (0.044791) | 0.367528 / 0.293841 (0.073687) | 0.034131 / 0.128546 (-0.094415) | 0.012118 / 0.075646 (-0.063528) | 0.331677 / 0.419271 (-0.087594) | 0.049211 / 0.043533 (0.005678) | 0.297535 / 0.255139 (0.042396) | 0.318136 / 0.283200 (0.034936) | 0.101574 / 0.141683 (-0.040109) | 1.472769 / 1.452155 (0.020615) | 1.541724 / 1.492716 (0.049007) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014646 / 0.018006 (-0.003360) | 0.439050 / 0.000490 (0.438560) | 0.008575 / 0.000200 (0.008375) | 0.000297 / 0.000054 (0.000242) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027591 / 0.037411 (-0.009820) | 0.111639 / 0.014526 (0.097113) | 0.117098 / 0.176557 (-0.059458) | 0.173281 / 0.737135 (-0.563855) | 0.123197 / 0.296338 (-0.173141) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397507 / 0.215209 (0.182298) | 3.971457 / 2.077655 (1.893803) | 1.781158 / 1.504120 (0.277038) | 1.590419 / 1.541195 (0.049224) | 1.716374 / 1.468490 
(0.247884) | 0.687150 / 4.584777 (-3.897627) | 3.691009 / 3.745712 (-0.054703) | 2.050900 / 5.269862 (-3.218961) | 1.304893 / 4.565676 (-3.260784) | 0.084507 / 0.424275 (-0.339768) | 0.012231 / 0.007607 (0.004624) | 0.493033 / 0.226044 (0.266988) | 4.929957 / 2.268929 (2.661028) | 2.209069 / 55.444624 (-53.235555) | 1.885992 / 6.876477 (-4.990485) | 2.007004 / 2.142072 (-0.135069) | 0.827265 / 4.805227 (-3.977963) | 0.168225 / 6.500664 (-6.332439) | 0.064988 / 0.075469 (-0.010481) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182341 / 1.841788 (-0.659447) | 14.691983 / 8.074308 (6.617674) | 14.350720 / 10.191392 (4.159328) | 0.164307 / 0.680424 (-0.516117) | 0.017480 / 0.534201 (-0.516720) | 0.421843 / 0.579283 (-0.157441) | 0.417481 / 0.434364 (-0.016883) | 0.496587 / 0.540337 (-0.043751) | 0.581208 / 1.386936 (-0.805728) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007070 / 0.011353 (-0.004283) | 0.005083 / 0.011008 (-0.005926) | 0.075009 / 0.038508 (0.036500) | 0.032343 / 0.023109 (0.009234) | 0.366788 / 0.275898 (0.090890) | 0.392273 / 0.323480 (0.068794) | 0.005512 / 0.007986 (-0.002474) | 0.003999 / 0.004328 (-0.000329) | 0.073743 / 0.004250 (0.069492) | 0.046203 / 0.037052 (0.009151) | 0.367874 / 0.258489 (0.109385) | 0.409154 / 0.293841 (0.115313) | 0.035227 / 0.128546 (-0.093319) | 0.012223 / 0.075646 (-0.063424) | 0.087149 / 0.419271 (-0.332122) | 0.045648 / 0.043533 (0.002115) | 0.362414 / 0.255139 (0.107275) | 0.379970 / 0.283200 (0.096770) | 0.100631 / 0.141683 (-0.041052) | 1.439733 / 1.452155 (-0.012422) | 1.506266 / 1.492716 (0.013550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227071 / 0.018006 (0.209065) | 0.451243 / 0.000490 (0.450753) | 0.000406 / 0.000200 (0.000206) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028952 / 0.037411 (-0.008459) | 0.111934 / 0.014526 (0.097408) | 0.124080 / 0.176557 (-0.052477) | 0.174022 / 0.737135 (-0.563113) | 0.126811 / 0.296338 (-0.169527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436423 / 0.215209 (0.221214) | 4.331959 / 2.077655 (2.254304) | 2.111914 / 1.504120 (0.607794) | 1.921338 / 1.541195 (0.380143) | 1.994425 / 1.468490 (0.525935) | 0.699164 / 4.584777 (-3.885613) | 3.722143 / 3.745712 (-0.023569) | 3.516538 / 5.269862 (-1.753323) | 1.867245 / 4.565676 (-2.698431) | 0.085923 / 0.424275 (-0.338352) | 0.012059 / 0.007607 (0.004452) | 0.586147 / 0.226044 (0.360102) | 5.395823 / 2.268929 (3.126894) | 2.594430 / 55.444624 (-52.850194) | 2.275021 / 6.876477 (-4.601456) | 2.347810 / 2.142072 (0.205737) | 0.835118 / 4.805227 (-3.970109) | 0.167089 / 6.500664 (-6.333575) | 0.064893 / 0.075469 (-0.010576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291423 / 1.841788 (-0.550365) | 14.992696 / 8.074308 (6.918388) | 13.307842 / 10.191392 (3.116450) | 0.163799 / 0.680424 (-0.516625) | 0.017315 / 0.534201 (-0.516886) | 0.461319 / 0.579283 (-0.117965) | 0.430474 / 0.434364 (-0.003889) | 0.568115 / 0.540337 (0.027777) | 0.647909 / 1.386936 (-0.739027) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a5161c9ecdcdde9cc99c7f212da13523d5ba6bdb \"CML watermark\")\n"
] | 2023-05-17T05:56:45 | 2023-05-17T14:26:59 | 2023-05-17T14:19:18 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5872",
"html_url": "https://github.com/huggingface/datasets/pull/5872",
"diff_url": "https://github.com/huggingface/datasets/pull/5872.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5872.patch",
"merged_at": "2023-05-17T14:19:18"
} | Fix the `infer_module_for_data_files` and `infer_module_for_data_files_in_archives` functions when passed a data file name with an uppercase extension, e.g. `filename.TXT`. Previously, a `None` module was returned for such files.
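A minimal sketch of the general idea (hypothetical, not the exact patch; the lookup table below is made up):
```python
from pathlib import Path

# Hypothetical extension -> module table; the real mapping in `datasets` is larger
_EXTENSION_TO_MODULE = {"txt": "text", "csv": "csv", "parquet": "parquet"}

def infer_module(filename: str):
    suffix = Path(filename).suffix[1:].lower()  # "filename.TXT" -> "txt"
    return _EXTENSION_TO_MODULE.get(suffix)

assert infer_module("filename.TXT") == "text"
```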
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5872/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5871/comments | https://api.github.com/repos/huggingface/datasets/issues/5871/events | https://github.com/huggingface/datasets/issues/5871 | 1,712,573,073 | I_kwDODunzps5mE8qR | 5,871 | data configuration hash suffix depends on uncanonicalized data_dir | {
"login": "kylrth",
"id": 5044802,
"node_id": "MDQ6VXNlcjUwNDQ4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylrth",
"html_url": "https://github.com/kylrth",
"followers_url": "https://api.github.com/users/kylrth/followers",
"following_url": "https://api.github.com/users/kylrth/following{/other_user}",
"gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylrth/subscriptions",
"organizations_url": "https://api.github.com/users/kylrth/orgs",
"repos_url": "https://api.github.com/users/kylrth/repos",
"events_url": "https://api.github.com/users/kylrth/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylrth/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "kylrth",
"id": 5044802,
"node_id": "MDQ6VXNlcjUwNDQ4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylrth",
"html_url": "https://github.com/kylrth",
"followers_url": "https://api.github.com/users/kylrth/followers",
"following_url": "https://api.github.com/users/kylrth/following{/other_user}",
"gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylrth/subscriptions",
"organizations_url": "https://api.github.com/users/kylrth/orgs",
"repos_url": "https://api.github.com/users/kylrth/repos",
"events_url": "https://api.github.com/users/kylrth/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylrth/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "kylrth",
"id": 5044802,
"node_id": "MDQ6VXNlcjUwNDQ4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylrth",
"html_url": "https://github.com/kylrth",
"followers_url": "https://api.github.com/users/kylrth/followers",
"following_url": "https://api.github.com/users/kylrth/following{/other_user}",
"gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylrth/subscriptions",
"organizations_url": "https://api.github.com/users/kylrth/orgs",
"repos_url": "https://api.github.com/users/kylrth/repos",
"events_url": "https://api.github.com/users/kylrth/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylrth/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It could even use `os.path.realpath` to resolve symlinks.",
"Indeed, it makes sense to normalize `data_dir`. Feel free to submit a PR (this can be \"fixed\" [here](https://github.com/huggingface/datasets/blob/89f775226321ba94e5bf4670a323c0fb44f5f65c/src/datasets/builder.py#L173))",
"#self-assign"
] | 2023-05-16T18:56:04 | 2023-06-02T15:52:05 | 2023-06-02T15:52:05 | CONTRIBUTOR | null | null | null | ### Describe the bug
I am working with the `recipe_nlg` dataset, which requires manual download. Once it's downloaded, I've noticed that the hash in the custom data configuration is different if I add a trailing `/` to my `data_dir`. It took me a while to notice that the hashes were different, and to understand that that was the cause of my dataset being processed anew instead of the cached version being used.
### Steps to reproduce the bug
1. Follow the steps to manually download the `recipe_nlg` dataset to `/data/recipenlg`.
2. Load it using `load_dataset`, once without a trailing slash and once with one:
```python
>>> ds = load_dataset("recipe_nlg", data_dir="/data/recipenlg")
Using custom data configuration default-082278caeea85765
Downloading and preparing dataset recipe_nlg/default to /home/kyle/.cache/huggingface/datasets/recipe_nlg/default-082278caeea85765/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74...
Dataset recipe_nlg downloaded and prepared to /home/kyle/.cache/huggingface/datasets/recipe_nlg/default-082278caeea85765/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74. Subsequent calls will reuse this data.
100%|███████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.10s/it]
DatasetDict({
train: Dataset({
features: ['id', 'title', 'ingredients', 'directions', 'link', 'source', 'ner'],
num_rows: 2231142
})
})
>>> ds = load_dataset("recipe_nlg", data_dir="/data/recipenlg/")
Using custom data configuration default-83e87680785d0493
Downloading and preparing dataset recipe_nlg/default to /home/user/.cache/huggingface/datasets/recipe_nlg/default-83e87680785d0493/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74...
Generating train split: 1%| | 12701/2231142 [00:04<13:15, 2790.25 examples/s
^C
```
3. Observe that the hash suffix in the custom data configuration changes due to the altered string.
### Expected behavior
I would expect the hash to remain constant as long as `data_dir` points to the same location on disk, e.g. by using `os.path.normpath` to canonicalize the paths.
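A sketch of the normalization I have in mind (a hypothetical helper; in practice it would go wherever the config hash is derived from `data_dir`):
```python
import os

def normalize_data_dir(data_dir: str) -> str:
    # "/data/recipenlg" and "/data/recipenlg/" should hash identically;
    # realpath additionally resolves symlinks, as suggested in the comments.
    return os.path.realpath(os.path.normpath(data_dir))

assert normalize_data_dir("/data/recipenlg") == normalize_data_dir("/data/recipenlg/")
```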
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5871/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5870/comments | https://api.github.com/repos/huggingface/datasets/issues/5870/events | https://github.com/huggingface/datasets/issues/5870 | 1,712,156,282 | I_kwDODunzps5mDW56 | 5,870 | Behaviour difference between datasets.map and IterableDatasets.map | {
"login": "llStringll",
"id": 30209072,
"node_id": "MDQ6VXNlcjMwMjA5MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/30209072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/llStringll",
"html_url": "https://github.com/llStringll",
"followers_url": "https://api.github.com/users/llStringll/followers",
"following_url": "https://api.github.com/users/llStringll/following{/other_user}",
"gists_url": "https://api.github.com/users/llStringll/gists{/gist_id}",
"starred_url": "https://api.github.com/users/llStringll/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/llStringll/subscriptions",
"organizations_url": "https://api.github.com/users/llStringll/orgs",
"repos_url": "https://api.github.com/users/llStringll/repos",
"events_url": "https://api.github.com/users/llStringll/events{/privacy}",
"received_events_url": "https://api.github.com/users/llStringll/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"PS - some work is definitely needed for 'special cases' docs, not explanations, just usages of 'functions' under mixture of special cases, like a combination of custom databuilder + iterable dataset for large size + dynamic .map() application."
] | 2023-05-16T14:32:57 | 2023-05-16T14:36:05 | null | NONE | null | null | null | ### Describe the bug
All the examples throughout the Hugging Face `datasets` docs correspond to the `Dataset` object, not to the `IterableDataset` object. At one point in time they might have been in sync, but the code for `datasets` version >=2.9.0 is very different from the docs.
I basically need to `.map()` a transform over the images in an iterable dataset, which was made using a custom databuilder config.
This works very well on map-style datasets, but `.map()` fails on `IterableDataset`s with the following behaviour:
a `KeyError` because the "pixel_values" key is not found in the examples object/dict passed into the transform function for `map`, even though the same transform works fine with the map style, even as a batch.
In the iterable style, the object/dict passed into the `.map()` callable is completely different from what is described in all the examples.
Please look into this. Thank you
My databuilder class is defined as follows:
```python
def _info(self):
    print("Config: ", self.config.__dict__.keys())
    return datasets.DatasetInfo(
        description=_DESCRIPTION,
        features=datasets.Features(
            {
                "labels": datasets.Sequence(datasets.Value("uint16")),
                # "labels_name": datasets.Value("string"),
                # "pixel_values": datasets.Array3D(shape=(3, 1280, 960), dtype="float32"),
                "pixel_values": datasets.Array3D(shape=(1280, 960, 3), dtype="uint8"),
                "image_s3_path": datasets.Value("string"),
            }
        ),
        supervised_keys=None,
        homepage="none",
        citation="",
    )

def _split_generators(self, dl_manager):
    records_train = list(db.mini_set.find({'split': 'train'}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:10000]
    records_val = list(db.mini_set.find({'split': 'val'}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:1000]
    # print(len(records), self.config.num_shards)
    # shard_size_train = len(records_train)//self.config.num_shards
    # sharded_records_train = [records_train[i:i+shard_size_train] for i in range(0, len(records_train), shard_size_train)]
    # shard_size_val = len(records_val)//self.config.num_shards
    # sharded_records_val = [records_val[i:i+shard_size_val] for i in range(0, len(records_val), shard_size_val)]
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN, gen_kwargs={"records": records_train}  # passing list of records, for sharding to take over
        ),
        datasets.SplitGenerator(
            name=datasets.Split.VALIDATION, gen_kwargs={"records": records_val}  # passing list of records, for sharding to take over
        ),
    ]

def _generate_examples(self, records):
    # print("Generating examples for [{}] shards".format(len(shards)))
    # initiate_db_connection()
    # records = list(db.mini_set.find({'split': split}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:10]
    id_ = 0
    # for records in shards:
    for i, rec in enumerate(records):
        img_local_path = fetch_file(rec['image_s3_path'], self.config.buffer_dir)
        # t = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.squeeze()
        # print(t.shape, type(t), type(t[0][0][0]))
        # sys.exit()
        pvs = np.array(Image.open(img_local_path).resize((1280, 960)))  # image object is wxh, so resize as per that, numpy array of it is hxwxc, transposing to cxwxh
        # pvs = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.astype(np.float16).squeeze()
        # print(type(pvs[0][0][0]))
        lblids = self.config.processor.tokenizer('<s_class>' + rec['ocwen_template_name'] + '</s_class>' + '</s>', add_special_tokens=False, padding=False, truncation=False, return_tensors="np")["input_ids"].squeeze(0)  # take padding later, as per batch collating
        # print(len(lblids), type(lblids[0]))
        # print(type(pvs), pvs.shape, type(pvs[0][0][0]), type(lblids))
        yield id_, {"labels": lblids, "pixel_values": pvs, "image_s3_path": rec['image_s3_path']}
        id_ += 1
        os.remove(img_local_path)
```
and I load it inside my trainer script as such
`ds = load_dataset("/tmp/DonutDS/dataset/", split="train", streaming=True) # iterable dataset, where .map() falls`
or also as
`ds = load_from_disk('/tmp/DonutDS/dataset/') #map style dataset`
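A minimal way to see the difference (a sketch reusing the paths above; `probe` is a made-up helper) is to print the keys the `.map()` callable actually receives in each style:
```python
from datasets import load_dataset, load_from_disk

def probe(example):
    print(sorted(example.keys()))  # show what the callable is actually given
    return example

load_from_disk('/tmp/DonutDS/dataset/').map(probe)  # map-style: applied eagerly

streamed = load_dataset('/tmp/DonutDS/dataset/', split="train", streaming=True).map(probe)
next(iter(streamed))  # iterable-style: lazy, so pull one example to trigger the callable
```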
Thank you to the team for having such a great library, and for this bug fix in advance!
### Steps to reproduce the bug
The above config allows one to reproduce the said bug.
### Expected behavior
`.map()` should show some consistency between map-style and iterable-style datasets, or at least the docs should address the behaviour of iterable-style datasets with examples. As it stands, I honestly do not see the use of such docs.
### Environment info
datasets==2.9.0
transformers==4.26.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5870/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5869/comments | https://api.github.com/repos/huggingface/datasets/issues/5869/events | https://github.com/huggingface/datasets/issues/5869 | 1,711,990,003 | I_kwDODunzps5mCuTz | 5,869 | Image Encoding Issue when submitting a Parquet Dataset | {
"login": "PhilippeMoussalli",
"id": 47530815,
"node_id": "MDQ6VXNlcjQ3NTMwODE1",
"avatar_url": "https://avatars.githubusercontent.com/u/47530815?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilippeMoussalli",
"html_url": "https://github.com/PhilippeMoussalli",
"followers_url": "https://api.github.com/users/PhilippeMoussalli/followers",
"following_url": "https://api.github.com/users/PhilippeMoussalli/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilippeMoussalli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilippeMoussalli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilippeMoussalli/subscriptions",
"organizations_url": "https://api.github.com/users/PhilippeMoussalli/orgs",
"repos_url": "https://api.github.com/users/PhilippeMoussalli/repos",
"events_url": "https://api.github.com/users/PhilippeMoussalli/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilippeMoussalli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @PhilippeMoussalli thanks for opening a detailed issue. It seems the issue is more related to the `datasets` library so I'll ping @lhoestq @mariosasko on this one :) \n\n(edit: also can one of you move the issue to the datasets repo? Thanks in advance 🙏)",
"Hi ! The `Image()` info is stored in the **schema metadata**. More precisely there should be a \"huggingface\" field in the schema metadata that contains the `datasets` feature type of each column.\r\n\r\nTo fix your issue, you can use the same schema as the original Parquet files to write the new ones. You can also get the schema with metadata from a `Features` object, e.g.\r\n\r\n```python\r\nfrom datasets import Features, Image, Value\r\n\r\nfeatures = Features({\"image\": Image(), \"text\": Value(\"string\")})\r\nschema = features.arrow_schema\r\nprint(schema.metadata)\r\n# {b'huggingface': b'{\"info\": {\"features\": {\"image\": {\"_type\": \"Image\"}, \"text\": {\"dtype\": \"string\", \"_type\": \"Value\"}}}}'}\r\n```",
"It appears that the parquet files at `hf://datasets/lambdalabs/pokemon-blip-captions` don't have this metadata, and it is defined in the dataset_infos.json instead (legacy).\r\n\r\nYou can get the right schema with the HF metadata this way:\r\n\r\n```python\r\nfrom datasets import load_dataset_builder\r\n\r\nfeatures = load_dataset_builder(\"lambdalabs/pokemon-blip-captions\").info.features\r\nschema = features.arrow_schema\r\n```",
"Btw in the future we might add support for an dedicated Image extension type in Arrow so that you won't need to add the schema metadata anymore ;)",
"Thanks @Wauplin @lhoestq for the quick reply :)! \r\n\r\nI tried your approach by passing the huggingface schema to the dask writer \r\n\r\n```\r\nfrom datasets import Features, Image, Value\r\ndf = dd.read_parquet(f\"hf://datasets/lambdalabs/pokemon-blip-captions\",index=False)\r\nfeatures = Features({\"image\": Image(), \"text\": Value(\"string\")})\r\nschema = features.arrow_schema\r\ndd.to_parquet(df, path = \"hf://datasets/philippemo/dummy_dataset/data\", schema=schema)\r\n```\r\nAt first it didn't work as I was not able to visualize the images, so then I manually added the `dataset_infos.json` from the example dataset and it worked :)\r\n\r\nHowever, It's not very ideal since there are some metadata in that file that need to be computed in order to load the data properly such as `num_of_bytes` and `num_examples` which might be unknown in my use case. \r\n\r\n![Screenshot from 2023-05-16 16-54-55](https://github.com/huggingface/datasets/assets/47530815/b2b448d2-d3d8-43a7-9682-9c0187a5192b)\r\n\r\nDo you have any pointers there? you mentioned that `datasets_info.json` will be deprecated/legacy. Could you point me to some example image datasets on the hub that are stored as parquet and don't have the `datasets_info.json`?\r\n\r\n",
"You don't need the dataset_infos.json file as long as you have the schema with HF metadata ;)\r\nI could also check that it works fine myself on the git revision without the dataset_infos.json file.\r\n\r\nWhat made you think it didn't work ?",
"> You don't need the dataset_infos.json file as long as you have the schema with HF metadata ;) I could also check that it works fine myself on the git revision without the dataset_infos.json file.\r\n> \r\n> What made you think it didn't work ?\r\n\r\nThose are two identical dataset repos where both were pushed with dask with the specified schema you mentioned above. I then uploaded the `dataset_infos.json` manually taken from the original example dataset into one of them. \r\n\r\n* **With schema**: https://huggingface.co/datasets/philippemo/dummy_dataset_with_schema\r\n* **Without schema**: https://huggingface.co/datasets/philippemo/dummy_dataset_without_schema\r\n\r\nYou can see that in the examples without schema the images fail to render properly. When loaded with `datasets` they return an dict and not a Pillow Image ",
"I see ! I think it's a bug on our side - it should work without the metadata - let me investigate",
"Alright, it's fixed: https://huggingface.co/datasets/philippemo/dummy_dataset_without_schema\r\n\r\nIt shows the image correctly now - even without the extra metadata :)",
"Thanks @lhoestq! \r\nI tested pushing a dataset again without the metadata and it works perfectly! \r\nI appreciate the help",
"Hi @lhoestq, \r\n\r\nI'v tried pushing another dataset again and I think the issue reappeared again: \r\n\r\n```\r\ndf = dd.read_parquet(f\"hf://datasets/lambdalabs/pokemon-blip-captions\")\r\nfeatures = datasets.Features({\"image\": datasets.Image(), \"text\": datasets.Value(\"string\")})\r\nschema = features.arrow_schema\r\ndd.to_parquet(df, path = \"hf://datasets/philippemo/dummy_dataset_without_schema_12_06/data\", schema=schema)\r\n```\r\n\r\nHere is the dataset: \r\n https://huggingface.co/datasets/philippemo/dummy_dataset_without_schema_12_06\r\nThe one that was working 2 weeks ago still seems to be intact though, it might be that It rendered properly when it was initially submitted and after this something was reverted from your side:\r\nhttps://huggingface.co/datasets/philippemo/dummy_dataset_without_schema\r\n\r\nIt's weird because nothing really changed from the implementation, might be another issue in the hub backend. Do you have any pointers on how to resolve this? ",
"We're doing some changes in the way we're handling image parquet datasets right now. We'll include the fix from https://github.com/huggingface/datasets/pull/5921 in the new datasets-server version in the coming days",
"alright thanks for the update :), would that be part of the new release of datasets or is it something separate? if so, where can I track it? ",
"Once the new version of `datasets` is released (tomorrow probably) we'll open an issue on https://github.com/huggingface/datasets-server to update to this version :)",
"Alright we did the update :) This is fixed for good now",
"Yes thanks 🎉🎉🎉"
] | 2023-05-16T09:42:58 | 2023-06-16T12:48:38 | 2023-06-16T09:30:48 | NONE | null | null | null | ### Describe the bug
Hello,
I'd like to report an issue related to pushing a dataset represented as a Parquet file to a dataset repository using Dask. Here are the details:
We attempted to load an example dataset in Parquet format from the Hugging Face (HF) filesystem using Dask with the following code snippet:
```
import dask.dataframe as dd
df = dd.read_parquet("hf://datasets/lambdalabs/pokemon-blip-captions",index=False)
```
In this dataset, the "image" column is represented as a dictionary/struct with the format:
```
df = df.compute()
df["image"].iloc[0].keys()
-> dict_keys(['bytes', 'path'])
```
I think this is the format encoded by the [`Image`](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.Image) feature from `datasets` into a format suitable for Arrow.
The next step was to push the dataset to a repository that I created:
```
dd.to_parquet(dask_df, path = "hf://datasets/philippemo/dummy_dataset/data")
```
However, after pushing the dataset using Dask, the "image" column is now represented as the encoded dictionary `(['bytes', 'path'])`, and the images are not properly visualized. You can find the dataset here: [Link to the problematic dataset](https://huggingface.co/datasets/philippemo/dummy_dataset).
It's worth noting that both the original dataset and the one submitted with Dask have the same schema with minor alterations related to metadata:
**[ Schema of original dummy example.](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions/blob/main/data/train-00000-of-00001-566cc9b19d7203f8.parquet)**
```
image: struct<bytes: binary, path: null>
child 0, bytes: binary
child 1, path: null
text: string
```
**[ Schema of pushed dataset with dask](https://huggingface.co/datasets/philippemo/dummy_dataset/blob/main/data/part.0.parquet)**
```
image: struct<bytes: binary, path: null>
child 0, bytes: binary
child 1, path: null
text: string
```
This issue seems to be related to an encoding step that occurs when pushing a dataset to the hub. Normally, data should be represented as an HF dataset before pushing, but we are working with an example where we need to push large datasets using Dask.
Could you please provide clarification on how to resolve this issue?
Thank you!
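Following the suggestion in the comments above, a sketch of the workaround (reusing the same example repos) is to pass the Hugging Face Arrow schema when writing:
```python
import dask.dataframe as dd
from datasets import Features, Image, Value

df = dd.read_parquet("hf://datasets/lambdalabs/pokemon-blip-captions", index=False)
features = Features({"image": Image(), "text": Value("string")})
# features.arrow_schema carries the HF feature types in its metadata,
# so readers know the "image" struct should be decoded as an Image
dd.to_parquet(df, path="hf://datasets/philippemo/dummy_dataset/data", schema=features.arrow_schema)
```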
### Reproduction
To get the schema I downloaded the parquet files and used pyarrow.parquet to read the schema
```
import pyarrow.parquet
pyarrow.parquet.read_schema(<path_to_parquet>, memory_map=True)
```
### Logs
_No response_
### System info
```shell
- huggingface_hub version: 0.14.1
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/philippe/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: philippemo
- Configured git credential helpers: cache
- FastAI: N/A
- Tensorflow: N/A
- Torch: N/A
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.4.0
- hf_transfer: N/A
- gradio: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/philippe/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/philippe/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/philippe/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5869/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5868/comments | https://api.github.com/repos/huggingface/datasets/issues/5868/events | https://github.com/huggingface/datasets/issues/5868 | 1,711,173,098 | I_kwDODunzps5l_m3q | 5,868 | Is it possible to change a cached file and 're-cache' it instead of re-generating? | {
"login": "zyh3826",
"id": 31238754,
"node_id": "MDQ6VXNlcjMxMjM4NzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/31238754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyh3826",
"html_url": "https://github.com/zyh3826",
"followers_url": "https://api.github.com/users/zyh3826/followers",
"following_url": "https://api.github.com/users/zyh3826/following{/other_user}",
"gists_url": "https://api.github.com/users/zyh3826/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyh3826/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyh3826/subscriptions",
"organizations_url": "https://api.github.com/users/zyh3826/orgs",
"repos_url": "https://api.github.com/users/zyh3826/repos",
"events_url": "https://api.github.com/users/zyh3826/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyh3826/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Arrow files/primitives (tables and arrays) are immutable, so re-generating them is the only option, I'm afraid.",
"> \r\n\r\nGot it, thanks for your reply"
] | 2023-05-16T03:45:42 | 2023-05-17T11:21:36 | 2023-05-17T11:21:36 | NONE | null | null | null | ### Feature request
Hi,
I have a huge file cached with `map` (over 500 GB), and I want to change an attribute of each element. Is it possible to do this with some method instead of re-generating, given that `map` takes over 24 hours?
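For context, a partial workaround I could imagine (a sketch; the column name and `fix_value` are made up, and it still materializes the new column in memory) would be to rebuild only the changed column instead of re-mapping every element:
```python
from datasets import load_from_disk

def fix_value(value):
    return value  # hypothetical per-element change; replace with the real edit

ds = load_from_disk("/path/to/cached_dataset")  # hypothetical path
new_column = [fix_value(v) for v in ds["attribute"]]  # "attribute" is a made-up column name
ds = ds.remove_columns("attribute").add_column("attribute", new_column)
```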
### Motivation
For large datasets, I think this is very important because we often face the problem of changing something in the original cache without re-generating it.
### Your contribution
For now, I can't help, sorry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5868/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5867/comments | https://api.github.com/repos/huggingface/datasets/issues/5867/events | https://github.com/huggingface/datasets/pull/5867 | 1,710,656,067 | PR_kwDODunzps5QizOn | 5,867 | Add logic for hashing modules/functions optimized with `torch.compile` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006598 / 0.011353 (-0.004755) | 0.004565 / 0.011008 (-0.006443) | 0.099063 / 0.038508 (0.060555) | 0.028334 / 0.023109 (0.005225) | 0.323539 / 0.275898 (0.047641) | 0.372462 / 0.323480 (0.048982) | 0.005120 / 0.007986 (-0.002865) | 0.004797 / 0.004328 (0.000468) | 0.076862 / 0.004250 (0.072611) | 0.038021 / 0.037052 (0.000968) | 0.337801 / 0.258489 (0.079312) | 0.374601 / 0.293841 (0.080760) | 0.031158 / 0.128546 (-0.097389) | 0.011672 / 0.075646 (-0.063974) | 0.324913 / 0.419271 (-0.094359) | 0.051702 / 0.043533 (0.008169) | 0.339440 / 0.255139 (0.084301) | 0.372502 / 0.283200 (0.089303) | 0.097590 / 0.141683 (-0.044093) | 1.534238 / 1.452155 (0.082083) | 1.599701 / 1.492716 (0.106985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204101 / 0.018006 (0.186095) | 0.416981 / 0.000490 (0.416491) | 0.003436 / 0.000200 (0.003236) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023527 / 0.037411 (-0.013885) | 0.095748 / 0.014526 (0.081222) | 0.104498 / 0.176557 (-0.072059) | 0.164000 / 0.737135 (-0.573135) | 0.109170 / 0.296338 (-0.187168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418239 / 0.215209 (0.203030) | 4.153959 / 2.077655 (2.076305) | 1.856687 / 1.504120 (0.352567) | 1.657818 / 1.541195 (0.116623) | 1.715146 / 1.468490 
(0.246656) | 0.700673 / 4.584777 (-3.884103) | 3.401060 / 3.745712 (-0.344652) | 2.891045 / 5.269862 (-2.378816) | 1.519433 / 4.565676 (-3.046243) | 0.083151 / 0.424275 (-0.341124) | 0.012352 / 0.007607 (0.004745) | 0.523901 / 0.226044 (0.297856) | 5.288871 / 2.268929 (3.019943) | 2.322806 / 55.444624 (-53.121818) | 1.982223 / 6.876477 (-4.894253) | 2.074883 / 2.142072 (-0.067189) | 0.812400 / 4.805227 (-3.992827) | 0.152183 / 6.500664 (-6.348481) | 0.066538 / 0.075469 (-0.008931) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223220 / 1.841788 (-0.618567) | 14.024391 / 8.074308 (5.950083) | 14.166657 / 10.191392 (3.975265) | 0.146017 / 0.680424 (-0.534407) | 0.016698 / 0.534201 (-0.517503) | 0.380779 / 0.579283 (-0.198504) | 0.387113 / 0.434364 (-0.047251) | 0.446329 / 0.540337 (-0.094009) | 0.523819 / 1.386936 (-0.863118) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006803 / 0.011353 (-0.004549) | 0.004554 / 0.011008 (-0.006454) | 0.077406 / 0.038508 (0.038897) | 0.028495 / 0.023109 (0.005386) | 0.358847 / 0.275898 (0.082949) | 0.393256 / 0.323480 (0.069776) | 0.005317 / 0.007986 (-0.002669) | 0.004690 / 0.004328 (0.000362) | 0.075842 / 0.004250 (0.071592) | 0.041985 / 0.037052 (0.004933) | 0.367546 / 0.258489 (0.109057) | 0.408019 / 0.293841 (0.114178) | 0.030712 / 0.128546 (-0.097834) | 0.011756 / 0.075646 (-0.063891) | 0.086002 / 0.419271 (-0.333269) | 0.038949 / 0.043533 (-0.004583) | 0.361045 / 0.255139 (0.105906) | 0.381728 / 0.283200 (0.098528) | 0.090692 / 0.141683 (-0.050991) | 1.493251 / 1.452155 (0.041097) | 1.584566 / 1.492716 (0.091850) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217470 / 0.018006 (0.199463) | 0.429955 / 0.000490 (0.429465) | 0.000394 / 0.000200 (0.000194) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026223 / 0.037411 (-0.011189) | 0.102570 / 0.014526 (0.088045) | 0.110848 / 0.176557 (-0.065709) | 0.162413 / 0.737135 (-0.574722) | 0.114579 / 0.296338 (-0.181760) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.464957 / 0.215209 (0.249748) | 4.656597 / 2.077655 (2.578942) | 2.279755 / 1.504120 (0.775636) | 2.230263 / 1.541195 (0.689068) | 2.341540 / 1.468490 (0.873050) | 0.699505 / 4.584777 (-3.885272) | 3.389003 / 3.745712 (-0.356709) | 1.867526 / 5.269862 (-3.402336) | 1.167171 / 4.565676 (-3.398506) | 0.083451 / 0.424275 (-0.340824) | 0.012348 / 0.007607 (0.004741) | 0.584205 / 0.226044 (0.358161) | 5.853623 / 2.268929 (3.584694) | 2.646650 / 55.444624 (-52.797974) | 2.286504 / 6.876477 (-4.589973) | 2.327536 / 2.142072 (0.185464) | 0.811209 / 4.805227 (-3.994018) | 0.151842 / 6.500664 (-6.348822) | 0.067783 / 0.075469 (-0.007686) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330427 / 1.841788 (-0.511360) | 14.668981 / 8.074308 (6.594673) | 13.321154 / 10.191392 (3.129762) | 0.164383 / 0.680424 (-0.516040) | 0.016667 / 0.534201 (-0.517534) | 0.383439 / 0.579283 (-0.195844) | 0.392988 / 0.434364 (-0.041376) | 0.443318 / 0.540337 (-0.097020) | 0.537849 / 1.386936 (-0.849087) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e99bd4583bd636074b1826e2d0581161807480f1 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006379 / 0.011353 (-0.004974) | 0.004691 / 0.011008 (-0.006317) | 0.098047 / 0.038508 (0.059539) | 0.028126 / 0.023109 (0.005017) | 0.327143 / 0.275898 (0.051245) | 0.362482 / 0.323480 (0.039002) | 0.004953 / 0.007986 (-0.003033) | 0.003386 / 0.004328 (-0.000943) | 0.076222 / 0.004250 (0.071971) | 0.037583 / 0.037052 (0.000531) | 0.329661 / 0.258489 (0.071172) | 0.365945 / 0.293841 (0.072104) | 0.030455 / 0.128546 (-0.098091) | 0.011397 / 0.075646 (-0.064249) | 0.323889 / 0.419271 (-0.095383) | 0.043719 / 0.043533 (0.000186) | 0.331499 / 0.255139 (0.076360) | 0.359357 / 0.283200 (0.076158) | 0.088904 / 0.141683 (-0.052779) | 1.458584 / 1.452155 (0.006429) | 1.549375 / 1.492716 (0.056658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195808 / 0.018006 (0.177802) | 0.411148 / 0.000490 (0.410659) | 0.003602 / 0.000200 (0.003402) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023278 / 0.037411 (-0.014133) | 0.097317 / 0.014526 (0.082791) | 0.102669 / 0.176557 (-0.073888) | 0.168203 / 0.737135 (-0.568933) | 0.105205 / 0.296338 (-0.191133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424800 / 0.215209 (0.209591) | 4.228444 / 2.077655 (2.150790) | 1.895544 / 1.504120 (0.391424) | 1.698793 / 1.541195 (0.157598) | 1.717931 / 1.468490 
(0.249441) | 0.702251 / 4.584777 (-3.882526) | 3.407013 / 3.745712 (-0.338699) | 2.784634 / 5.269862 (-2.485228) | 1.491317 / 4.565676 (-3.074359) | 0.082926 / 0.424275 (-0.341350) | 0.012320 / 0.007607 (0.004713) | 0.524188 / 0.226044 (0.298143) | 5.249798 / 2.268929 (2.980870) | 2.358953 / 55.444624 (-53.085672) | 1.985922 / 6.876477 (-4.890555) | 2.034293 / 2.142072 (-0.107779) | 0.815671 / 4.805227 (-3.989556) | 0.152583 / 6.500664 (-6.348081) | 0.066687 / 0.075469 (-0.008782) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210901 / 1.841788 (-0.630886) | 13.621765 / 8.074308 (5.547457) | 14.213215 / 10.191392 (4.021823) | 0.143346 / 0.680424 (-0.537078) | 0.016904 / 0.534201 (-0.517297) | 0.379795 / 0.579283 (-0.199489) | 0.381287 / 0.434364 (-0.053077) | 0.449086 / 0.540337 (-0.091251) | 0.538792 / 1.386936 (-0.848144) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006207 / 0.011353 (-0.005146) | 0.004404 / 0.011008 (-0.006604) | 0.076363 / 0.038508 (0.037854) | 0.027335 / 0.023109 (0.004226) | 0.370967 / 0.275898 (0.095069) | 0.401936 / 0.323480 (0.078456) | 0.004835 / 0.007986 (-0.003151) | 0.004559 / 0.004328 (0.000231) | 0.074964 / 0.004250 (0.070713) | 0.038254 / 0.037052 (0.001202) | 0.374799 / 0.258489 (0.116310) | 0.425191 / 0.293841 (0.131350) | 0.035290 / 0.128546 (-0.093256) | 0.011379 / 0.075646 (-0.064267) | 0.085911 / 0.419271 (-0.333360) | 0.043073 / 0.043533 (-0.000460) | 0.373557 / 0.255139 (0.118418) | 0.395179 / 0.283200 (0.111979) | 0.098602 / 0.141683 (-0.043081) | 1.467234 / 1.452155 (0.015079) | 1.571868 / 1.492716 (0.079152) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221848 / 0.018006 (0.203842) | 0.394943 / 0.000490 (0.394454) | 0.002983 / 0.000200 (0.002783) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024385 / 0.037411 (-0.013027) | 0.100087 / 0.014526 (0.085561) | 0.104897 / 0.176557 (-0.071660) | 0.156150 / 0.737135 (-0.580985) | 0.109113 / 0.296338 (-0.187226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441995 / 0.215209 (0.226786) | 4.415423 / 2.077655 (2.337769) | 2.148791 / 1.504120 (0.644671) | 1.947061 / 1.541195 (0.405866) | 1.954807 / 1.468490 (0.486317) | 0.690245 / 4.584777 (-3.894532) | 3.372766 / 3.745712 (-0.372946) | 1.851073 / 5.269862 (-3.418789) | 1.155558 / 4.565676 (-3.410118) | 0.082796 / 0.424275 (-0.341479) | 0.012845 / 0.007607 (0.005238) | 0.548173 / 0.226044 (0.322129) | 5.530984 / 2.268929 (3.262056) | 2.665360 / 55.444624 (-52.779264) | 2.324266 / 6.876477 (-4.552211) | 2.329397 / 2.142072 (0.187324) | 0.801481 / 4.805227 (-4.003746) | 0.152145 / 6.500664 (-6.348519) | 0.067915 / 0.075469 (-0.007554) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291488 / 1.841788 (-0.550299) | 13.912143 / 8.074308 (5.837835) | 12.975493 / 10.191392 (2.784101) | 0.129915 / 0.680424 (-0.550509) | 0.016516 / 0.534201 (-0.517685) | 0.386979 / 0.579283 (-0.192304) | 0.389163 / 0.434364 (-0.045201) | 0.443324 / 0.540337 (-0.097014) | 0.533744 / 1.386936 (-0.853192) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#eb48834fc2aa45cad73fe70a7ecaa0dd6015b8d0 \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5867). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008635 / 0.011353 (-0.002717) | 0.006014 / 0.011008 (-0.004995) | 0.116314 / 0.038508 (0.077806) | 0.041113 / 0.023109 (0.018004) | 0.358564 / 0.275898 (0.082666) | 0.397547 / 0.323480 (0.074067) | 0.007012 / 0.007986 (-0.000974) | 0.004638 / 0.004328 (0.000310) | 0.086509 / 0.004250 (0.082259) | 0.056731 / 0.037052 (0.019678) | 0.358859 / 0.258489 (0.100370) | 0.425339 / 0.293841 (0.131498) | 0.041780 / 0.128546 (-0.086767) | 0.014203 / 0.075646 (-0.061443) | 0.398240 / 0.419271 (-0.021031) | 0.060180 / 0.043533 (0.016647) | 0.352887 / 0.255139 (0.097748) | 0.381793 / 0.283200 (0.098594) | 0.148578 / 0.141683 (0.006895) | 1.749483 / 1.452155 (0.297328) | 1.869765 / 1.492716 (0.377049) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244435 / 0.018006 (0.226428) | 0.499545 / 0.000490 (0.499055) | 0.004576 / 0.000200 (0.004376) | 0.000147 / 0.000054 (0.000093) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031163 / 0.037411 (-0.006249) | 0.131082 / 0.014526 (0.116556) | 0.137442 / 0.176557 (-0.039114) | 0.203783 / 0.737135 (-0.533352) | 0.144068 / 0.296338 (-0.152270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.503587 / 0.215209 (0.288378) | 5.011953 / 2.077655 (2.934299) | 2.366968 / 1.504120 (0.862848) | 2.130914 / 1.541195 (0.589719) | 2.243560 / 1.468490 
(0.775070) | 0.856719 / 4.584777 (-3.728058) | 4.707445 / 3.745712 (0.961733) | 2.506166 / 5.269862 (-2.763696) | 1.590400 / 4.565676 (-2.975277) | 0.102075 / 0.424275 (-0.322200) | 0.014499 / 0.007607 (0.006892) | 0.624966 / 0.226044 (0.398922) | 6.197671 / 2.268929 (3.928742) | 2.898481 / 55.444624 (-52.546143) | 2.499590 / 6.876477 (-4.376886) | 2.649690 / 2.142072 (0.507617) | 1.012542 / 4.805227 (-3.792685) | 0.202833 / 6.500664 (-6.297831) | 0.078033 / 0.075469 (0.002564) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.448321 / 1.841788 (-0.393467) | 18.084909 / 8.074308 (10.010601) | 17.383027 / 10.191392 (7.191635) | 0.212167 / 0.680424 (-0.468256) | 0.020754 / 0.534201 (-0.513447) | 0.514653 / 0.579283 (-0.064630) | 0.543307 / 0.434364 (0.108944) | 0.653066 / 0.540337 (0.112728) | 0.745773 / 1.386936 (-0.641164) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008576 / 0.011353 (-0.002777) | 0.005834 / 0.011008 (-0.005174) | 0.089842 / 0.038508 (0.051334) | 0.040035 / 0.023109 (0.016926) | 0.449329 / 0.275898 (0.173431) | 0.471572 / 0.323480 (0.148092) | 0.006771 / 0.007986 (-0.001215) | 0.006129 / 0.004328 (0.001800) | 0.090370 / 0.004250 (0.086119) | 0.056924 / 0.037052 (0.019872) | 0.455134 / 0.258489 (0.196645) | 0.502670 / 0.293841 (0.208829) | 0.041689 / 0.128546 (-0.086857) | 0.014447 / 0.075646 (-0.061200) | 0.104528 / 0.419271 (-0.314744) | 0.055535 / 0.043533 (0.012003) | 0.450667 / 0.255139 (0.195528) | 0.453108 / 0.283200 (0.169908) | 0.119296 / 0.141683 (-0.022387) | 1.747359 / 1.452155 (0.295204) | 1.839421 / 1.492716 (0.346705) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.314910 / 0.018006 (0.296904) | 0.495575 / 0.000490 (0.495085) | 0.054702 / 0.000200 (0.054503) | 0.000505 / 0.000054 (0.000450) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033991 / 0.037411 (-0.003420) | 0.133268 / 0.014526 (0.118742) | 0.142286 / 0.176557 (-0.034271) | 0.200562 / 0.737135 (-0.536573) | 0.147161 / 0.296338 (-0.149178) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.520288 / 0.215209 (0.305079) | 5.227684 / 2.077655 (3.150029) | 2.553330 / 1.504120 (1.049210) | 2.324338 / 1.541195 (0.783143) | 2.406790 / 1.468490 (0.938300) | 0.850404 / 4.584777 (-3.734373) | 4.612156 / 3.745712 (0.866444) | 2.592546 / 5.269862 (-2.677316) | 1.708984 / 4.565676 (-2.856692) | 0.103751 / 0.424275 (-0.320524) | 0.014379 / 0.007607 (0.006772) | 0.634661 / 0.226044 (0.408616) | 6.344939 / 2.268929 (4.076010) | 3.179807 / 55.444624 (-52.264817) | 2.831856 / 6.876477 (-4.044621) | 2.866729 / 2.142072 (0.724656) | 0.994519 / 4.805227 (-3.810708) | 0.201566 / 6.500664 (-6.299098) | 0.078902 / 0.075469 (0.003433) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.538738 / 1.841788 (-0.303049) | 18.746367 / 8.074308 (10.672059) | 16.504763 / 10.191392 (6.313371) | 0.197898 / 0.680424 (-0.482526) | 0.020469 / 0.534201 (-0.513732) | 0.529106 / 0.579283 (-0.050177) | 0.536891 / 0.434364 (0.102527) | 0.600947 / 0.540337 (0.060610) | 0.701713 / 1.386936 (-0.685223) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3054f66b4765a520e6fe165c44a4307d40775229 \"CML watermark\")\n"
] | 2023-05-15T19:03:35 | 2023-05-17T13:41:48 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5867",
"html_url": "https://github.com/huggingface/datasets/pull/5867",
"diff_url": "https://github.com/huggingface/datasets/pull/5867.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5867.patch",
"merged_at": null
} | Fix https://github.com/huggingface/datasets/issues/5839
PS: The `Pickler.save` method is becoming a bit messy, so I plan to refactor the pickler a bit at some point. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5867/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5866/comments | https://api.github.com/repos/huggingface/datasets/issues/5866/events | https://github.com/huggingface/datasets/issues/5866 | 1,710,496,993 | I_kwDODunzps5l9Bzh | 5,866 | Issue with Sequence features | {
"login": "alialamiidrissi",
"id": 14365168,
"node_id": "MDQ6VXNlcjE0MzY1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alialamiidrissi",
"html_url": "https://github.com/alialamiidrissi",
"followers_url": "https://api.github.com/users/alialamiidrissi/followers",
"following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}",
"gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions",
"organizations_url": "https://api.github.com/users/alialamiidrissi/orgs",
"repos_url": "https://api.github.com/users/alialamiidrissi/repos",
"events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}",
"received_events_url": "https://api.github.com/users/alialamiidrissi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting! I've opened a PR with a fix."
] | 2023-05-15T17:13:29 | 2023-05-26T11:57:17 | 2023-05-26T11:57:17 | NONE | null | null | null | ### Describe the bug
Sequence features sometimes cause errors when the specified length is not -1.
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Features, ClassLabel, Sequence, Value, Dataset
feats = Features(**{'target': ClassLabel(names=[0, 1]), 'x': Sequence(feature=Value(dtype='float64', id=None), length=2, id=None)})
# flatten_indices() raises the TypeError below when the sequence length is fixed (here 2)
Dataset.from_dict({"target": np.ones(2000).astype(int), "x": np.random.rand(2000, 2)}, features=feats).flatten_indices()
```
Throws:
```
TypeError: Couldn't cast array of type
fixed_size_list<item: double>[2]
to
Sequence(feature=Value(dtype='float64', id=None), length=2, id=None)
```
The same code works without any issues when `length = -1`
EDIT: The error seems to happen only when the length of the dataset is bigger than 1000 for some reason
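For reference, a minimal sketch of the working variant mentioned above (the same snippet, with only `length` changed to `-1`):
```python
import numpy as np
from datasets import Features, ClassLabel, Sequence, Value, Dataset

feats = Features(**{
    'target': ClassLabel(names=[0, 1]),
    # length=-1 (variable-length sequence) does not trigger the cast error
    'x': Sequence(feature=Value(dtype='float64'), length=-1),
})
Dataset.from_dict(
    {"target": np.ones(2000).astype(int), "x": np.random.rand(2000, 2)},
    features=feats,
).flatten_indices()  # completes without raising
```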
### Expected behavior
No exception
### Environment info
- `datasets` version: 2.10.1
- Python version: 3.9.5
- PyArrow version: 11.0.0
- Pandas version: 1.4.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5866/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5865/comments | https://api.github.com/repos/huggingface/datasets/issues/5865/events | https://github.com/huggingface/datasets/pull/5865 | 1,710,455,738 | PR_kwDODunzps5QiHnw | 5,865 | Deprecate task api | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"If it's easy to keep supporting it we can keep it no ? There are many datasets on the hub that implement the tasks templates in dataset scripts and it's maybe easier to keep task templates than opening PRs to those datasets.",
"do we know if people use the tasks api?\r\n\r\nedit: i mean, i'm fine with removing it if it's not used much, especially considering that it's not documented well.",
"@lhoestq \r\n\r\nLess than 80 public datasets (all canonical) implement `task_templates`, so updating them should be easy.\r\n\r\nPS: I skipped gated datasets when checking for the presence of `task_templates`, but it's safe to assume their contribution to the total count is insignificant.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006480 / 0.011353 (-0.004872) | 0.003904 / 0.011008 (-0.007104) | 0.084287 / 0.038508 (0.045779) | 0.071438 / 0.023109 (0.048329) | 0.309823 / 0.275898 (0.033925) | 0.341038 / 0.323480 (0.017558) | 0.005163 / 0.007986 (-0.002822) | 0.003291 / 0.004328 (-0.001037) | 0.064473 / 0.004250 (0.060222) | 0.053385 / 0.037052 (0.016332) | 0.323561 / 0.258489 (0.065072) | 0.346332 / 0.293841 (0.052491) | 0.030588 / 0.128546 (-0.097958) | 0.008342 / 0.075646 (-0.067305) | 0.287205 / 0.419271 (-0.132067) | 0.051953 / 0.043533 (0.008420) | 0.310925 / 0.255139 (0.055786) | 0.344443 / 0.283200 (0.061244) | 0.022754 / 0.141683 (-0.118928) | 1.459648 / 1.452155 (0.007494) | 1.528413 / 1.492716 (0.035697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206404 / 0.018006 (0.188398) | 0.461864 / 0.000490 (0.461374) | 0.004501 / 0.000200 (0.004302) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026891 / 0.037411 (-0.010520) | 0.081206 / 0.014526 (0.066680) | 0.093648 / 0.176557 (-0.082908) | 0.148491 / 0.737135 (-0.588645) | 0.093874 / 0.296338 (-0.202464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401715 / 0.215209 (0.186506) | 4.018597 / 2.077655 (1.940943) | 2.029735 / 1.504120 (0.525615) | 1.860069 / 1.541195 (0.318875) | 1.935712 / 1.468490 
(0.467222) | 0.485896 / 4.584777 (-4.098881) | 3.638177 / 3.745712 (-0.107535) | 5.124058 / 5.269862 (-0.145804) | 3.099666 / 4.565676 (-1.466011) | 0.057173 / 0.424275 (-0.367102) | 0.007240 / 0.007607 (-0.000367) | 0.478758 / 0.226044 (0.252713) | 4.798471 / 2.268929 (2.529543) | 2.502980 / 55.444624 (-52.941645) | 2.170650 / 6.876477 (-4.705827) | 2.381394 / 2.142072 (0.239321) | 0.578766 / 4.805227 (-4.226462) | 0.132342 / 6.500664 (-6.368322) | 0.059759 / 0.075469 (-0.015710) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249238 / 1.841788 (-0.592549) | 19.224673 / 8.074308 (11.150365) | 13.786894 / 10.191392 (3.595502) | 0.164633 / 0.680424 (-0.515791) | 0.018065 / 0.534201 (-0.516136) | 0.390589 / 0.579283 (-0.188694) | 0.408993 / 0.434364 (-0.025370) | 0.457001 / 0.540337 (-0.083336) | 0.625327 / 1.386936 (-0.761609) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006827 / 0.011353 (-0.004526) | 0.004007 / 0.011008 (-0.007001) | 0.065239 / 0.038508 (0.026731) | 0.079829 / 0.023109 (0.056719) | 0.400323 / 0.275898 (0.124425) | 0.434158 / 0.323480 (0.110678) | 0.005314 / 0.007986 (-0.002671) | 0.003354 / 0.004328 (-0.000974) | 0.065044 / 0.004250 (0.060794) | 0.060315 / 0.037052 (0.023262) | 0.401513 / 0.258489 (0.143024) | 0.441119 / 0.293841 (0.147278) | 0.031783 / 0.128546 (-0.096763) | 0.008608 / 0.075646 (-0.067038) | 0.071755 / 0.419271 (-0.347517) | 0.048816 / 0.043533 (0.005283) | 0.393896 / 0.255139 (0.138757) | 0.412156 / 0.283200 (0.128956) | 0.024410 / 0.141683 (-0.117272) | 1.515159 / 1.452155 (0.063005) | 1.562217 / 1.492716 (0.069501) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229993 / 0.018006 (0.211987) | 0.449898 / 0.000490 (0.449409) | 0.000376 / 0.000200 (0.000176) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030297 / 0.037411 (-0.007115) | 0.086737 / 0.014526 (0.072212) | 0.098312 / 0.176557 (-0.078244) | 0.152890 / 0.737135 (-0.584246) | 0.099335 / 0.296338 (-0.197003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415786 / 0.215209 (0.200577) | 4.137606 / 2.077655 (2.059952) | 2.120082 / 1.504120 (0.615963) | 1.943984 / 1.541195 (0.402789) | 2.040821 / 1.468490 (0.572331) | 0.479273 / 4.584777 (-4.105504) | 3.563854 / 3.745712 (-0.181858) | 3.396071 / 5.269862 (-1.873790) | 2.011302 / 4.565676 (-2.554374) | 0.057202 / 0.424275 (-0.367073) | 0.007338 / 0.007607 (-0.000269) | 0.488378 / 0.226044 (0.262333) | 4.881615 / 2.268929 (2.612686) | 2.669685 / 55.444624 (-52.774939) | 2.258236 / 6.876477 (-4.618241) | 2.343303 / 2.142072 (0.201230) | 0.606762 / 4.805227 (-4.198466) | 0.133190 / 6.500664 (-6.367475) | 0.062971 / 0.075469 (-0.012498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345215 / 1.841788 (-0.496573) | 20.023713 / 8.074308 (11.949405) | 14.555777 / 10.191392 (4.364385) | 0.162388 / 0.680424 (-0.518036) | 0.018528 / 0.534201 (-0.515673) | 0.393055 / 0.579283 (-0.186229) | 0.411820 / 0.434364 (-0.022544) | 0.461705 / 0.540337 (-0.078633) | 0.629395 / 1.386936 (-0.757541) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4f54f2ff4c68a00242789e9890e3b46cab320448 \"CML watermark\")\n",
"Ok ! I also know https://huggingface.co/datasets/hf-internal-testing/cats_vs_dogs_sample/blob/main/cats_vs_dogs_sample.py that needs to be updated as well",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009100 / 0.011353 (-0.002253) | 0.005158 / 0.011008 (-0.005850) | 0.109291 / 0.038508 (0.070782) | 0.086053 / 0.023109 (0.062943) | 0.469859 / 0.275898 (0.193961) | 0.476142 / 0.323480 (0.152662) | 0.006739 / 0.007986 (-0.001247) | 0.005077 / 0.004328 (0.000748) | 0.078193 / 0.004250 (0.073943) | 0.065956 / 0.037052 (0.028904) | 0.490323 / 0.258489 (0.231834) | 0.497418 / 0.293841 (0.203577) | 0.060562 / 0.128546 (-0.067984) | 0.016321 / 0.075646 (-0.059325) | 0.379703 / 0.419271 (-0.039568) | 0.087335 / 0.043533 (0.043802) | 0.488240 / 0.255139 (0.233101) | 0.497391 / 0.283200 (0.214191) | 0.040699 / 0.141683 (-0.100984) | 1.778925 / 1.452155 (0.326770) | 1.856436 / 1.492716 (0.363720) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236428 / 0.018006 (0.218422) | 0.551950 / 0.000490 (0.551460) | 0.007400 / 0.000200 (0.007201) | 0.000120 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028461 / 0.037411 (-0.008950) | 0.093441 / 0.014526 (0.078915) | 0.103868 / 0.176557 (-0.072688) | 0.176269 / 0.737135 (-0.560867) | 0.107760 / 0.296338 (-0.188578) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.593382 / 0.215209 (0.378173) | 5.863711 / 2.077655 (3.786057) | 2.493777 / 1.504120 (0.989657) | 2.088547 / 1.541195 (0.547352) | 2.173147 / 1.468490 
(0.704656) | 0.875661 / 4.584777 (-3.709116) | 5.209023 / 3.745712 (1.463310) | 4.483261 / 5.269862 (-0.786600) | 2.843288 / 4.565676 (-1.722388) | 0.098488 / 0.424275 (-0.325787) | 0.008371 / 0.007607 (0.000764) | 0.668413 / 0.226044 (0.442368) | 6.709802 / 2.268929 (4.440873) | 3.132453 / 55.444624 (-52.312172) | 2.428736 / 6.876477 (-4.447741) | 2.560867 / 2.142072 (0.418794) | 0.983550 / 4.805227 (-3.821677) | 0.207072 / 6.500664 (-6.293592) | 0.073786 / 0.075469 (-0.001683) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.625871 / 1.841788 (-0.215917) | 23.481015 / 8.074308 (15.406707) | 20.556677 / 10.191392 (10.365285) | 0.238147 / 0.680424 (-0.442277) | 0.029453 / 0.534201 (-0.504748) | 0.464589 / 0.579283 (-0.114695) | 0.599129 / 0.434364 (0.164765) | 0.550146 / 0.540337 (0.009808) | 0.794646 / 1.386936 (-0.592290) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008613 / 0.011353 (-0.002739) | 0.004979 / 0.011008 (-0.006030) | 0.078095 / 0.038508 (0.039587) | 0.080285 / 0.023109 (0.057176) | 0.482881 / 0.275898 (0.206983) | 0.520442 / 0.323480 (0.196962) | 0.006241 / 0.007986 (-0.001744) | 0.003964 / 0.004328 (-0.000364) | 0.080027 / 0.004250 (0.075777) | 0.065209 / 0.037052 (0.028157) | 0.476113 / 0.258489 (0.217623) | 0.535383 / 0.293841 (0.241542) | 0.053084 / 0.128546 (-0.075462) | 0.014284 / 0.075646 (-0.061362) | 0.083859 / 0.419271 (-0.335413) | 0.061024 / 0.043533 (0.017492) | 0.477810 / 0.255139 (0.222671) | 0.508718 / 0.283200 (0.225518) | 0.036602 / 0.141683 (-0.105081) | 1.810422 / 1.452155 (0.358267) | 1.832833 / 1.492716 (0.340117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281443 / 0.018006 (0.263437) | 0.568249 / 0.000490 (0.567760) | 0.000493 / 0.000200 (0.000293) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033302 / 0.037411 (-0.004110) | 0.100433 / 0.014526 (0.085907) | 0.105465 / 0.176557 (-0.071091) | 0.161986 / 0.737135 (-0.575149) | 0.115736 / 0.296338 (-0.180603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.622892 / 0.215209 (0.407683) | 6.144361 / 2.077655 (4.066706) | 2.849443 / 1.504120 (1.345323) | 2.544097 / 1.541195 (1.002902) | 2.579859 / 1.468490 (1.111369) | 0.826078 / 4.584777 (-3.758699) | 5.021808 / 3.745712 (1.276096) | 4.694784 / 5.269862 (-0.575077) | 2.796263 / 4.565676 (-1.769413) | 0.090983 / 0.424275 (-0.333292) | 0.008445 / 0.007607 (0.000838) | 0.744675 / 0.226044 (0.518631) | 7.662989 / 2.268929 (5.394060) | 3.665611 / 55.444624 (-51.779013) | 2.942836 / 6.876477 (-3.933641) | 2.874402 / 2.142072 (0.732329) | 1.010097 / 4.805227 (-3.795130) | 0.218008 / 6.500664 (-6.282656) | 0.087359 / 0.075469 (0.011890) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.655631 / 1.841788 (-0.186157) | 23.539596 / 8.074308 (15.465288) | 20.909512 / 10.191392 (10.718120) | 0.202092 / 0.680424 (-0.478332) | 0.029807 / 0.534201 (-0.504394) | 0.487591 / 0.579283 (-0.091692) | 0.573719 / 0.434364 (0.139355) | 0.531168 / 0.540337 (-0.009170) | 0.742375 / 1.386936 (-0.644561) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aa231a7be55c6bca2bede8af4ac6da63c3162116 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006247 / 0.011353 (-0.005106) | 0.003650 / 0.011008 (-0.007358) | 0.079655 / 0.038508 (0.041147) | 0.060279 / 0.023109 (0.037170) | 0.309033 / 0.275898 (0.033135) | 0.338479 / 0.323480 (0.014999) | 0.004651 / 0.007986 (-0.003335) | 0.002849 / 0.004328 (-0.001480) | 0.062852 / 0.004250 (0.058602) | 0.049230 / 0.037052 (0.012178) | 0.312502 / 0.258489 (0.054012) | 0.354558 / 0.293841 (0.060717) | 0.027497 / 0.128546 (-0.101049) | 0.007885 / 0.075646 (-0.067762) | 0.260232 / 0.419271 (-0.159040) | 0.045459 / 0.043533 (0.001926) | 0.311629 / 0.255139 (0.056490) | 0.367806 / 0.283200 (0.084606) | 0.020875 / 0.141683 (-0.120808) | 1.423802 / 1.452155 (-0.028352) | 1.497729 / 1.492716 (0.005013) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185629 / 0.018006 (0.167623) | 0.441421 / 0.000490 (0.440931) | 0.004847 / 0.000200 (0.004647) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022428 / 0.037411 (-0.014984) | 0.073375 / 0.014526 (0.058849) | 0.083194 / 0.176557 (-0.093363) | 0.143984 / 0.737135 (-0.593151) | 0.084128 / 0.296338 (-0.212211) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397220 / 0.215209 (0.182010) | 3.954394 / 2.077655 (1.876740) | 1.920638 / 1.504120 (0.416518) | 1.744284 / 1.541195 (0.203089) | 1.802623 / 1.468490 
(0.334133) | 0.501988 / 4.584777 (-4.082789) | 3.096071 / 3.745712 (-0.649642) | 4.648267 / 5.269862 (-0.621595) | 2.770995 / 4.565676 (-1.794682) | 0.057513 / 0.424275 (-0.366762) | 0.006315 / 0.007607 (-0.001292) | 0.467683 / 0.226044 (0.241639) | 4.683959 / 2.268929 (2.415031) | 2.384980 / 55.444624 (-53.059645) | 2.030894 / 6.876477 (-4.845583) | 2.148374 / 2.142072 (0.006302) | 0.585142 / 4.805227 (-4.220085) | 0.123173 / 6.500664 (-6.377491) | 0.059140 / 0.075469 (-0.016329) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244707 / 1.841788 (-0.597080) | 18.176043 / 8.074308 (10.101735) | 13.742770 / 10.191392 (3.551378) | 0.149692 / 0.680424 (-0.530732) | 0.016591 / 0.534201 (-0.517610) | 0.342138 / 0.579283 (-0.237145) | 0.353931 / 0.434364 (-0.080433) | 0.392317 / 0.540337 (-0.148020) | 0.524011 / 1.386936 (-0.862925) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005937 / 0.011353 (-0.005416) | 0.003609 / 0.011008 (-0.007399) | 0.061729 / 0.038508 (0.023221) | 0.057844 / 0.023109 (0.034735) | 0.418051 / 0.275898 (0.142153) | 0.453014 / 0.323480 (0.129534) | 0.004530 / 0.007986 (-0.003456) | 0.002861 / 0.004328 (-0.001468) | 0.062236 / 0.004250 (0.057986) | 0.048612 / 0.037052 (0.011560) | 0.418487 / 0.258489 (0.159998) | 0.455114 / 0.293841 (0.161273) | 0.027419 / 0.128546 (-0.101127) | 0.007919 / 0.075646 (-0.067728) | 0.066940 / 0.419271 (-0.352331) | 0.041816 / 0.043533 (-0.001717) | 0.419788 / 0.255139 (0.164649) | 0.439682 / 0.283200 (0.156483) | 0.020902 / 0.141683 (-0.120781) | 1.473993 / 1.452155 (0.021838) | 1.532438 / 1.492716 (0.039722) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228766 / 0.018006 (0.210760) | 0.412189 / 0.000490 (0.411699) | 0.000371 / 0.000200 (0.000171) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026139 / 0.037411 (-0.011272) | 0.076626 / 0.014526 (0.062100) | 0.088262 / 0.176557 (-0.088295) | 0.143096 / 0.737135 (-0.594039) | 0.089642 / 0.296338 (-0.206696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423030 / 0.215209 (0.207821) | 4.218333 / 2.077655 (2.140679) | 2.280943 / 1.504120 (0.776823) | 2.051746 / 1.541195 (0.510551) | 2.101085 / 1.468490 (0.632595) | 0.495860 / 4.584777 (-4.088917) | 3.108065 / 3.745712 (-0.637647) | 2.944188 / 5.269862 (-2.325673) | 1.833693 / 4.565676 (-2.731984) | 0.057509 / 0.424275 (-0.366766) | 0.006406 / 0.007607 (-0.001201) | 0.497208 / 0.226044 (0.271164) | 4.974972 / 2.268929 (2.706044) | 2.786639 / 55.444624 (-52.657985) | 2.423815 / 6.876477 (-4.452662) | 2.446377 / 2.142072 (0.304305) | 0.584521 / 4.805227 (-4.220706) | 0.124129 / 6.500664 (-6.376535) | 0.061373 / 0.075469 (-0.014096) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307076 / 1.841788 (-0.534711) | 18.443873 / 8.074308 (10.369565) | 13.835730 / 10.191392 (3.644338) | 0.159795 / 0.680424 (-0.520629) | 0.016643 / 0.534201 (-0.517558) | 0.334300 / 0.579283 (-0.244983) | 0.347136 / 0.434364 (-0.087228) | 0.394633 / 0.540337 (-0.145704) | 0.552445 / 1.386936 (-0.834491) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8cfc0262363ea8cbd8c78537a09f851ec6ec30f5 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007273 / 0.011353 (-0.004080) | 0.004704 / 0.011008 (-0.006304) | 0.105857 / 0.038508 (0.067349) | 0.062493 / 0.023109 (0.039384) | 0.325704 / 0.275898 (0.049806) | 0.355795 / 0.323480 (0.032315) | 0.005552 / 0.007986 (-0.002433) | 0.003543 / 0.004328 (-0.000785) | 0.068098 / 0.004250 (0.063848) | 0.049563 / 0.037052 (0.012511) | 0.362956 / 0.258489 (0.104467) | 0.376047 / 0.293841 (0.082206) | 0.039272 / 0.128546 (-0.089275) | 0.011521 / 0.075646 (-0.064125) | 0.291899 / 0.419271 (-0.127373) | 0.056916 / 0.043533 (0.013383) | 0.365352 / 0.255139 (0.110213) | 0.357251 / 0.283200 (0.074051) | 0.031670 / 0.141683 (-0.110013) | 1.533294 / 1.452155 (0.081140) | 1.566580 / 1.492716 (0.073864) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219812 / 0.018006 (0.201805) | 0.499808 / 0.000490 (0.499318) | 0.000343 / 0.000200 (0.000143) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024011 / 0.037411 (-0.013400) | 0.079686 / 0.014526 (0.065161) | 0.087925 / 0.176557 (-0.088631) | 0.149065 / 0.737135 (-0.588071) | 0.088514 / 0.296338 (-0.207824) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.495003 / 0.215209 (0.279794) | 5.106371 / 2.077655 (3.028717) | 2.285497 / 1.504120 (0.781377) | 2.056052 / 1.541195 (0.514858) | 2.024913 / 1.468490 
(0.556423) | 0.726048 / 4.584777 (-3.858729) | 4.873945 / 3.745712 (1.128233) | 7.488671 / 5.269862 (2.218809) | 4.361208 / 4.565676 (-0.204469) | 0.089014 / 0.424275 (-0.335261) | 0.007178 / 0.007607 (-0.000429) | 0.633669 / 0.226044 (0.407625) | 6.328154 / 2.268929 (4.059226) | 3.071598 / 55.444624 (-52.373026) | 2.416077 / 6.876477 (-4.460399) | 2.431033 / 2.142072 (0.288961) | 0.918167 / 4.805227 (-3.887060) | 0.193829 / 6.500664 (-6.306836) | 0.073446 / 0.075469 (-0.002023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.344994 / 1.841788 (-0.496793) | 19.911699 / 8.074308 (11.837391) | 17.182697 / 10.191392 (6.991305) | 0.216932 / 0.680424 (-0.463492) | 0.025415 / 0.534201 (-0.508786) | 0.416806 / 0.579283 (-0.162477) | 0.524934 / 0.434364 (0.090570) | 0.510783 / 0.540337 (-0.029554) | 0.687856 / 1.386936 (-0.699081) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008469 / 0.011353 (-0.002884) | 0.003797 / 0.011008 (-0.007211) | 0.067276 / 0.038508 (0.028768) | 0.066825 / 0.023109 (0.043716) | 0.394976 / 0.275898 (0.119078) | 0.432563 / 0.323480 (0.109083) | 0.006003 / 0.007986 (-0.001982) | 0.003399 / 0.004328 (-0.000930) | 0.070899 / 0.004250 (0.066649) | 0.050940 / 0.037052 (0.013887) | 0.378291 / 0.258489 (0.119802) | 0.429889 / 0.293841 (0.136048) | 0.043245 / 0.128546 (-0.085302) | 0.012182 / 0.075646 (-0.063465) | 0.074560 / 0.419271 (-0.344711) | 0.065290 / 0.043533 (0.021757) | 0.371209 / 0.255139 (0.116070) | 0.389731 / 0.283200 (0.106532) | 0.045729 / 0.141683 (-0.095954) | 1.451785 / 1.452155 (-0.000370) | 1.598539 / 1.492716 (0.105822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261357 / 0.018006 (0.243351) | 0.520142 / 0.000490 (0.519653) | 0.008305 / 0.000200 (0.008105) | 0.000089 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026492 / 0.037411 (-0.010919) | 0.082430 / 0.014526 (0.067904) | 0.095979 / 0.176557 (-0.080578) | 0.151752 / 0.737135 (-0.585383) | 0.090086 / 0.296338 (-0.206252) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.535967 / 0.215209 (0.320758) | 5.228605 / 2.077655 (3.150950) | 2.395078 / 1.504120 (0.890959) | 2.185500 / 1.541195 (0.644306) | 2.219456 / 1.468490 (0.750966) | 0.764794 / 4.584777 (-3.819983) | 4.796617 / 3.745712 (1.050905) | 4.143450 / 5.269862 (-1.126411) | 2.527391 / 4.565676 (-2.038286) | 0.081418 / 0.424275 (-0.342857) | 0.007170 / 0.007607 (-0.000437) | 0.706071 / 0.226044 (0.480026) | 6.501060 / 2.268929 (4.232131) | 3.176315 / 55.444624 (-52.268309) | 2.443245 / 6.876477 (-4.433232) | 2.517832 / 2.142072 (0.375759) | 0.916254 / 4.805227 (-3.888973) | 0.184282 / 6.500664 (-6.316382) | 0.062613 / 0.075469 (-0.012857) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.444283 / 1.841788 (-0.397504) | 20.227311 / 8.074308 (12.153003) | 17.512856 / 10.191392 (7.321464) | 0.219556 / 0.680424 (-0.460868) | 0.024705 / 0.534201 (-0.509496) | 0.423215 / 0.579283 (-0.156068) | 0.513103 / 0.434364 (0.078739) | 0.473853 / 0.540337 (-0.066485) | 0.738165 / 1.386936 (-0.648771) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b65660b7c6e853391991734210e38f805459b0ed \"CML watermark\")\n"
] | 2023-05-15T16:48:24 | 2023-07-10T12:33:59 | 2023-07-10T12:24:01 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5865",
"html_url": "https://github.com/huggingface/datasets/pull/5865",
"diff_url": "https://github.com/huggingface/datasets/pull/5865.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5865.patch",
"merged_at": "2023-07-10T12:24:01"
} | The task API is not well adopted in the ecosystem, so this PR deprecates it. The `train_eval_index` is a newer, more flexible solution that should be used instead (I think?).
These are the projects that still use the task API:
* the image classification example in Transformers: [here](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/examples/pytorch/image-classification/run_image_classification_no_trainer.py#L262) and [here](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/examples/tensorflow/image-classification/run_image_classification.py#L277)
* autotrain: [here](https://github.com/huggingface/autotrain-backend/blob/455e274004b56f9377d64db4ab03671508fcc4cd/zeus/zeus/run/utils.py#L666)
* api-inference-community: [here](https://github.com/huggingface/api-inference-community/blob/fb8fb29d577a5bf01c82944db745489a6d6ed3d4/manage.py#L64) (but the rest of the code does not call the `resolve_dataset` function)
So we need to update these files after the merge.
cc @lewtun | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5865/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5864/comments | https://api.github.com/repos/huggingface/datasets/issues/5864/events | https://github.com/huggingface/datasets/issues/5864 | 1,710,450,047 | I_kwDODunzps5l82V_ | 5,864 | Slow iteration over Torch tensors | {
"login": "crisostomi",
"id": 51738205,
"node_id": "MDQ6VXNlcjUxNzM4MjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/51738205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/crisostomi",
"html_url": "https://github.com/crisostomi",
"followers_url": "https://api.github.com/users/crisostomi/followers",
"following_url": "https://api.github.com/users/crisostomi/following{/other_user}",
"gists_url": "https://api.github.com/users/crisostomi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/crisostomi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/crisostomi/subscriptions",
"organizations_url": "https://api.github.com/users/crisostomi/orgs",
"repos_url": "https://api.github.com/users/crisostomi/repos",
"events_url": "https://api.github.com/users/crisostomi/events{/privacy}",
"received_events_url": "https://api.github.com/users/crisostomi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I am highly interested performance of dataset so I ran your example as a curious user.\r\n```python\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\n```\r\nhave return values and \"x\" is a new column, it shoulde be\r\n```python\r\nds=train_dataset.cast_column(\"img\", Array3D(shape=(3,32,32), dtype=\"float32\"))\r\n```\r\nI rewrite your example as\r\n```python\r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(), \r\n Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),] \r\n)\r\n \r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\nds=train_dataset.cast_column(\"img\", Array3D(shape=(3,32,32), dtype=\"float32\"))\r\nfor i in tqdm(ds):\r\n pass\r\n```\r\nthat require ~11s in my environment. While\r\n```python\r\nds = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\nfor i in tqdm(ds):\r\n pass\r\n```\r\nonly need ~6s. (So I guess it's still undesirable)"
] | 2023-05-15T16:43:58 | 2023-05-16T03:27:38 | null | NONE | null | null | null | ### Describe the bug
I have a problem related to this [issue](https://github.com/huggingface/datasets/issues/5841): iteration with a Torch DataLoader is much slower after applying a ToTensor transform to the input than over the vanilla NumPy tensors. In particular, it takes 5 seconds to iterate over the vanilla input and ~30s after the transformation.
### Steps to reproduce the bug
Here is the minimal code to reproduce the problem:
```python
import numpy as np
from datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features
from torch.utils.data import DataLoader
from tqdm import tqdm
import torchvision
from torchvision.transforms import ToTensor, Normalize
#################################
# Without transform
#################################
train_dataset = load_dataset(
'cifar100',
split='train',
use_auth_token=True,
)
train_dataset.set_format(type="numpy", columns=["img", "fine_label"])
train_loader= DataLoader(
train_dataset,
batch_size=100,
pin_memory=False,
shuffle=True,
num_workers=8,
)
for batch in tqdm(train_loader, desc="Loading data, no transform"):
pass
#################################
# With transform
#################################
transform_func = torchvision.transforms.Compose([
    ToTensor(),
    Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_dataset = train_dataset.map(
desc=f"Preprocessing samples",
function=lambda x: {"img": transform_func(x["img"])},
)
train_dataset.set_format(type="numpy", columns=["img", "fine_label"])
train_loader= DataLoader(
train_dataset,
batch_size=100,
pin_memory=False,
shuffle=True,
num_workers=8,
)
for batch in tqdm(train_loader, desc="Loading data after transform"):
pass
```
I have also tried converting the Image column to an Array3D
```python
img_shape = train_dataset[0]["img"].shape
features = train_dataset.features.copy()
features["x"] = Array3D(shape=img_shape, dtype="float32")
train_dataset = train_dataset.map(
desc=f"Preprocessing samples",
function=lambda x: {"x": np.array(x["img"], dtype=np.uint8)},
features=features,
)
train_dataset.cast_column("x", Array3D(shape=img_shape, dtype="float32"))
train_dataset.set_format(type="numpy", columns=["x", "fine_label"])
```
but to no avail. Any clue?
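Note that `Dataset.cast_column` returns a new dataset rather than modifying the caller in place, so the call above discards its result. A minimal sketch of the presumably intended assignment:
```python
# cast_column is not in-place; keep the returned dataset
train_dataset = train_dataset.cast_column("x", Array3D(shape=img_shape, dtype="float32"))
train_dataset.set_format(type="numpy", columns=["x", "fine_label"])
```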
### Expected behavior
The iteration should take approximately the same time with or without the transformation, as it doesn't change the shape of the input. What may be the issue here?
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5864/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5863/comments | https://api.github.com/repos/huggingface/datasets/issues/5863/events | https://github.com/huggingface/datasets/pull/5863 | 1,710,335,905 | PR_kwDODunzps5QhtlM | 5,863 | Use a new low-memory approach for tf dataset index shuffling | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5863). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007764 / 0.011353 (-0.003588) | 0.005397 / 0.011008 (-0.005611) | 0.097995 / 0.038508 (0.059487) | 0.036360 / 0.023109 (0.013251) | 0.312148 / 0.275898 (0.036250) | 0.349427 / 0.323480 (0.025947) | 0.006635 / 0.007986 (-0.001350) | 0.004373 / 0.004328 (0.000044) | 0.074350 / 0.004250 (0.070099) | 0.054667 / 0.037052 (0.017614) | 0.301621 / 0.258489 (0.043132) | 0.364233 / 0.293841 (0.070392) | 0.035356 / 0.128546 (-0.093191) | 0.012512 / 0.075646 (-0.063134) | 0.333399 / 0.419271 (-0.085873) | 0.051363 / 0.043533 (0.007830) | 0.302372 / 0.255139 (0.047233) | 0.326542 / 0.283200 (0.043343) | 0.118610 / 0.141683 (-0.023073) | 1.438485 / 1.452155 (-0.013669) | 1.539131 / 1.492716 (0.046415) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010920 / 0.018006 (-0.007086) | 0.561263 / 0.000490 (0.560773) | 0.003972 / 0.000200 (0.003772) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030333 / 0.037411 (-0.007078) | 0.113608 / 0.014526 (0.099083) | 0.125802 / 0.176557 (-0.050755) | 0.183885 / 0.737135 (-0.553250) | 0.130242 / 0.296338 (-0.166097) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404147 / 0.215209 (0.188938) | 4.021990 / 2.077655 (1.944335) | 1.821450 / 1.504120 (0.317330) | 1.619032 / 1.541195 (0.077837) | 1.791267 / 1.468490 
(0.322777) | 0.706683 / 4.584777 (-3.878094) | 3.819056 / 3.745712 (0.073344) | 3.485714 / 5.269862 (-1.784147) | 1.938968 / 4.565676 (-2.626709) | 0.086501 / 0.424275 (-0.337774) | 0.012300 / 0.007607 (0.004693) | 0.503600 / 0.226044 (0.277555) | 5.042123 / 2.268929 (2.773195) | 2.269712 / 55.444624 (-53.174912) | 1.944912 / 6.876477 (-4.931565) | 2.155196 / 2.142072 (0.013123) | 0.853434 / 4.805227 (-3.951793) | 0.175554 / 6.500664 (-6.325110) | 0.072005 / 0.075469 (-0.003464) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203765 / 1.841788 (-0.638022) | 15.836634 / 8.074308 (7.762326) | 15.707348 / 10.191392 (5.515956) | 0.164828 / 0.680424 (-0.515596) | 0.018115 / 0.534201 (-0.516086) | 0.434591 / 0.579283 (-0.144692) | 0.437858 / 0.434364 (0.003495) | 0.524672 / 0.540337 (-0.015665) | 0.610535 / 1.386936 (-0.776401) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007558 / 0.011353 (-0.003795) | 0.005258 / 0.011008 (-0.005750) | 0.075263 / 0.038508 (0.036755) | 0.033915 / 0.023109 (0.010805) | 0.371368 / 0.275898 (0.095470) | 0.399239 / 0.323480 (0.075760) | 0.006547 / 0.007986 (-0.001439) | 0.004675 / 0.004328 (0.000347) | 0.074230 / 0.004250 (0.069980) | 0.054653 / 0.037052 (0.017601) | 0.376655 / 0.258489 (0.118166) | 0.438437 / 0.293841 (0.144596) | 0.035838 / 0.128546 (-0.092709) | 0.012641 / 0.075646 (-0.063005) | 0.087279 / 0.419271 (-0.331993) | 0.046311 / 0.043533 (0.002778) | 0.356649 / 0.255139 (0.101510) | 0.377876 / 0.283200 (0.094677) | 0.108097 / 0.141683 (-0.033586) | 1.478461 / 1.452155 (0.026306) | 1.560375 / 1.492716 (0.067658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316384 / 0.018006 (0.298378) | 0.539382 / 0.000490 (0.538892) | 0.002029 / 0.000200 (0.001829) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029950 / 0.037411 (-0.007462) | 0.111371 / 0.014526 (0.096846) | 0.125254 / 0.176557 (-0.051303) | 0.173064 / 0.737135 (-0.564071) | 0.130446 / 0.296338 (-0.165893) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424882 / 0.215209 (0.209673) | 4.241575 / 2.077655 (2.163920) | 2.096216 / 1.504120 (0.592096) | 1.916017 / 1.541195 (0.374823) | 2.016318 / 1.468490 (0.547828) | 0.701197 / 4.584777 (-3.883580) | 3.762365 / 3.745712 (0.016652) | 3.307805 / 5.269862 (-1.962057) | 1.841752 / 4.565676 (-2.723925) | 0.086003 / 0.424275 (-0.338272) | 0.012247 / 0.007607 (0.004640) | 0.532926 / 0.226044 (0.306882) | 5.370509 / 2.268929 (3.101580) | 2.587853 / 55.444624 (-52.856772) | 2.264541 / 6.876477 (-4.611936) | 2.374833 / 2.142072 (0.232760) | 0.827751 / 4.805227 (-3.977476) | 0.169454 / 6.500664 (-6.331210) | 0.066340 / 0.075469 (-0.009129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.319128 / 1.841788 (-0.522660) | 16.702085 / 8.074308 (8.627777) | 13.559957 / 10.191392 (3.368565) | 0.146659 / 0.680424 (-0.533765) | 0.017384 / 0.534201 (-0.516817) | 0.421126 / 0.579283 (-0.158157) | 0.422067 / 0.434364 (-0.012297) | 0.490615 / 0.540337 (-0.049723) | 0.587151 / 1.386936 (-0.799785) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#79f4b6de25128999f5fc0a7bde9aa71c461f518f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006604 / 0.011353 (-0.004749) | 0.004508 / 0.011008 (-0.006500) | 0.098652 / 0.038508 (0.060144) | 0.028172 / 0.023109 (0.005063) | 0.366997 / 0.275898 (0.091099) | 0.403691 / 0.323480 (0.080211) | 0.005127 / 0.007986 (-0.002859) | 0.003340 / 0.004328 (-0.000989) | 0.075408 / 0.004250 (0.071157) | 0.038049 / 0.037052 (0.000996) | 0.367914 / 0.258489 (0.109425) | 0.410958 / 0.293841 (0.117118) | 0.030454 / 0.128546 (-0.098093) | 0.011422 / 0.075646 (-0.064224) | 0.325048 / 0.419271 (-0.094223) | 0.042959 / 0.043533 (-0.000574) | 0.374536 / 0.255139 (0.119397) | 0.394738 / 0.283200 (0.111538) | 0.090481 / 0.141683 (-0.051201) | 1.504858 / 1.452155 (0.052703) | 1.569072 / 1.492716 (0.076356) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010062 / 0.018006 (-0.007945) | 0.408619 / 0.000490 (0.408130) | 0.002307 / 0.000200 (0.002107) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022898 / 0.037411 (-0.014514) | 0.096975 / 0.014526 (0.082449) | 0.103032 / 0.176557 (-0.073524) | 0.164877 / 0.737135 (-0.572259) | 0.107324 / 0.296338 (-0.189014) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446652 / 0.215209 (0.231442) | 4.466939 / 2.077655 (2.389285) | 2.204590 / 1.504120 (0.700471) | 2.004048 / 1.541195 (0.462853) | 2.053035 / 1.468490 
(0.584545) | 0.696617 / 4.584777 (-3.888160) | 3.391173 / 3.745712 (-0.354539) | 1.863306 / 5.269862 (-3.406556) | 1.160637 / 4.565676 (-3.405039) | 0.083115 / 0.424275 (-0.341160) | 0.012470 / 0.007607 (0.004862) | 0.547207 / 0.226044 (0.321163) | 5.500667 / 2.268929 (3.231739) | 2.656615 / 55.444624 (-52.788009) | 2.313281 / 6.876477 (-4.563195) | 2.395632 / 2.142072 (0.253559) | 0.815361 / 4.805227 (-3.989867) | 0.152112 / 6.500664 (-6.348552) | 0.067485 / 0.075469 (-0.007984) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206975 / 1.841788 (-0.634813) | 13.684136 / 8.074308 (5.609828) | 13.919129 / 10.191392 (3.727737) | 0.140767 / 0.680424 (-0.539657) | 0.016445 / 0.534201 (-0.517756) | 0.379136 / 0.579283 (-0.200147) | 0.385395 / 0.434364 (-0.048969) | 0.445781 / 0.540337 (-0.094556) | 0.522056 / 1.386936 (-0.864880) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006370 / 0.011353 (-0.004983) | 0.004514 / 0.011008 (-0.006495) | 0.075671 / 0.038508 (0.037163) | 0.026723 / 0.023109 (0.003614) | 0.359819 / 0.275898 (0.083921) | 0.387935 / 0.323480 (0.064456) | 0.004888 / 0.007986 (-0.003098) | 0.004619 / 0.004328 (0.000290) | 0.075546 / 0.004250 (0.071295) | 0.039024 / 0.037052 (0.001971) | 0.361173 / 0.258489 (0.102684) | 0.411425 / 0.293841 (0.117584) | 0.030842 / 0.128546 (-0.097705) | 0.011555 / 0.075646 (-0.064091) | 0.084697 / 0.419271 (-0.334574) | 0.039281 / 0.043533 (-0.004252) | 0.370082 / 0.255139 (0.114943) | 0.382113 / 0.283200 (0.098913) | 0.091237 / 0.141683 (-0.050445) | 1.534185 / 1.452155 (0.082030) | 1.576488 / 1.492716 (0.083772) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226568 / 0.018006 (0.208562) | 0.401566 / 0.000490 (0.401076) | 0.002915 / 0.000200 (0.002715) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025357 / 0.037411 (-0.012054) | 0.099747 / 0.014526 (0.085221) | 0.106443 / 0.176557 (-0.070113) | 0.157147 / 0.737135 (-0.579989) | 0.110759 / 0.296338 (-0.185580) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444648 / 0.215209 (0.229439) | 4.437930 / 2.077655 (2.360275) | 2.154033 / 1.504120 (0.649913) | 1.958351 / 1.541195 (0.417157) | 1.991031 / 1.468490 (0.522541) | 0.691440 / 4.584777 (-3.893337) | 3.369087 / 3.745712 (-0.376625) | 1.847103 / 5.269862 (-3.422758) | 1.152509 / 4.565676 (-3.413168) | 0.082519 / 0.424275 (-0.341756) | 0.012609 / 0.007607 (0.005001) | 0.547267 / 0.226044 (0.321222) | 5.501335 / 2.268929 (3.232407) | 2.621079 / 55.444624 (-52.823545) | 2.281332 / 6.876477 (-4.595145) | 2.300427 / 2.142072 (0.158354) | 0.803611 / 4.805227 (-4.001616) | 0.151784 / 6.500664 (-6.348880) | 0.067801 / 0.075469 (-0.007669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.343201 / 1.841788 (-0.498587) | 13.901033 / 8.074308 (5.826725) | 13.114738 / 10.191392 (2.923346) | 0.149358 / 0.680424 (-0.531066) | 0.016596 / 0.534201 (-0.517605) | 0.377310 / 0.579283 (-0.201973) | 0.387045 / 0.434364 (-0.047319) | 0.441272 / 0.540337 (-0.099065) | 0.525783 / 1.386936 (-0.861153) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c127e5575ab4e22648976ad268d76264ef5d04f8 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008147 / 0.011353 (-0.003205) | 0.005531 / 0.011008 (-0.005477) | 0.099796 / 0.038508 (0.061288) | 0.041574 / 0.023109 (0.018465) | 0.315752 / 0.275898 (0.039854) | 0.369846 / 0.323480 (0.046366) | 0.006489 / 0.007986 (-0.001497) | 0.004339 / 0.004328 (0.000010) | 0.074769 / 0.004250 (0.070519) | 0.051313 / 0.037052 (0.014261) | 0.313463 / 0.258489 (0.054974) | 0.369918 / 0.293841 (0.076077) | 0.035893 / 0.128546 (-0.092653) | 0.012487 / 0.075646 (-0.063159) | 0.336464 / 0.419271 (-0.082807) | 0.052870 / 0.043533 (0.009337) | 0.310795 / 0.255139 (0.055656) | 0.333146 / 0.283200 (0.049946) | 0.112813 / 0.141683 (-0.028870) | 1.488192 / 1.452155 (0.036038) | 1.563438 / 1.492716 (0.070721) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.015015 / 0.018006 (-0.002991) | 0.531783 / 0.000490 (0.531294) | 0.005039 / 0.000200 (0.004839) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030205 / 0.037411 (-0.007207) | 0.115997 / 0.014526 (0.101471) | 0.122958 / 0.176557 (-0.053599) | 0.186956 / 0.737135 (-0.550180) | 0.130268 / 0.296338 (-0.166071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402648 / 0.215209 (0.187439) | 3.996121 / 2.077655 (1.918466) | 1.811715 / 1.504120 (0.307595) | 1.640805 / 1.541195 (0.099610) | 1.810478 / 1.468490 
(0.341988) | 0.699996 / 4.584777 (-3.884781) | 3.834020 / 3.745712 (0.088308) | 3.688364 / 5.269862 (-1.581498) | 1.973828 / 4.565676 (-2.591849) | 0.087085 / 0.424275 (-0.337190) | 0.012501 / 0.007607 (0.004894) | 0.498934 / 0.226044 (0.272889) | 4.977608 / 2.268929 (2.708680) | 2.258678 / 55.444624 (-53.185947) | 1.934251 / 6.876477 (-4.942226) | 2.177409 / 2.142072 (0.035337) | 0.873470 / 4.805227 (-3.931757) | 0.173132 / 6.500664 (-6.327532) | 0.069144 / 0.075469 (-0.006325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.181554 / 1.841788 (-0.660234) | 15.694468 / 8.074308 (7.620160) | 15.026954 / 10.191392 (4.835562) | 0.167092 / 0.680424 (-0.513332) | 0.017921 / 0.534201 (-0.516280) | 0.425649 / 0.579283 (-0.153634) | 0.423225 / 0.434364 (-0.011139) | 0.522132 / 0.540337 (-0.018205) | 0.612806 / 1.386936 (-0.774130) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007896 / 0.011353 (-0.003457) | 0.005581 / 0.011008 (-0.005427) | 0.076338 / 0.038508 (0.037830) | 0.037064 / 0.023109 (0.013954) | 0.399706 / 0.275898 (0.123808) | 0.431698 / 0.323480 (0.108218) | 0.006846 / 0.007986 (-0.001140) | 0.006010 / 0.004328 (0.001682) | 0.075771 / 0.004250 (0.071520) | 0.058214 / 0.037052 (0.021161) | 0.395753 / 0.258489 (0.137264) | 0.459925 / 0.293841 (0.166084) | 0.036349 / 0.128546 (-0.092197) | 0.012720 / 0.075646 (-0.062926) | 0.087248 / 0.419271 (-0.332024) | 0.049405 / 0.043533 (0.005872) | 0.387576 / 0.255139 (0.132437) | 0.409861 / 0.283200 (0.126661) | 0.111639 / 0.141683 (-0.030043) | 1.482840 / 1.452155 (0.030685) | 1.574465 / 1.492716 (0.081749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.320628 / 0.018006 (0.302622) | 0.556338 / 0.000490 (0.555848) | 0.000445 / 0.000200 (0.000245) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032905 / 0.037411 (-0.004507) | 0.121253 / 0.014526 (0.106727) | 0.127241 / 0.176557 (-0.049316) | 0.178090 / 0.737135 (-0.559045) | 0.143285 / 0.296338 (-0.153054) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437852 / 0.215209 (0.222643) | 4.369770 / 2.077655 (2.292115) | 2.219932 / 1.504120 (0.715812) | 2.032520 / 1.541195 (0.491325) | 2.154300 / 1.468490 (0.685810) | 0.678942 / 4.584777 (-3.905835) | 3.768148 / 3.745712 (0.022436) | 2.152738 / 5.269862 (-3.117124) | 1.341480 / 4.565676 (-3.224197) | 0.084326 / 0.424275 (-0.339949) | 0.012288 / 0.007607 (0.004681) | 0.547677 / 0.226044 (0.321633) | 5.496777 / 2.268929 (3.227848) | 2.702267 / 55.444624 (-52.742357) | 2.388580 / 6.876477 (-4.487897) | 2.471673 / 2.142072 (0.329601) | 0.833645 / 4.805227 (-3.971582) | 0.167113 / 6.500664 (-6.333551) | 0.067658 / 0.075469 (-0.007811) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282050 / 1.841788 (-0.559737) | 16.413677 / 8.074308 (8.339369) | 14.080910 / 10.191392 (3.889518) | 0.171782 / 0.680424 (-0.508642) | 0.018186 / 0.534201 (-0.516015) | 0.425244 / 0.579283 (-0.154039) | 0.430260 / 0.434364 (-0.004104) | 0.500838 / 0.540337 (-0.039499) | 0.591900 / 1.386936 (-0.795036) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5fc5c538de84da400118e3712077acc580ce85c4 \"CML watermark\")\n",
"The approach we take here is to no longer materialize the entire index array or shuffle buffer. Instead, we do the following:\r\n\r\n1) Generate a dataset with `tf.data.Dataset.range`. This dataset is not materialized - it's basically a range iterator.\r\n2) When we begin iterating over a dataset, generate a random seed. This value is constant for each pass over the dataset, and is regenerated if we start a new iteration or epoch over the dataset.\r\n3) Map the range dataset and the random seed with `tf.random.index_shuffle`. This converts indices into the equivalent values in a permuted array. In other words `tf.random.index_shuffle(indices, maxval=50_000_000)` is equivalent to `np.random.permutation(50_000_000)[indices]`, but without ever materializing the `np.random.permutation(50_000_000)` array.\r\n\r\nUsing this approach gives us a complete iteration over the dataset that does not skip any samples, compiles in TF and also never materializes the complete index array, which should avoid the memory usage issues. I'm testing that now!",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008395 / 0.011353 (-0.002958) | 0.005893 / 0.011008 (-0.005115) | 0.117081 / 0.038508 (0.078573) | 0.040987 / 0.023109 (0.017878) | 0.394234 / 0.275898 (0.118336) | 0.447036 / 0.323480 (0.123556) | 0.006703 / 0.007986 (-0.001283) | 0.006085 / 0.004328 (0.001757) | 0.086479 / 0.004250 (0.082228) | 0.050192 / 0.037052 (0.013140) | 0.400958 / 0.258489 (0.142469) | 0.455551 / 0.293841 (0.161710) | 0.041481 / 0.128546 (-0.087065) | 0.014135 / 0.075646 (-0.061511) | 0.399929 / 0.419271 (-0.019343) | 0.060824 / 0.043533 (0.017291) | 0.395946 / 0.255139 (0.140807) | 0.428811 / 0.283200 (0.145611) | 0.120057 / 0.141683 (-0.021626) | 1.703244 / 1.452155 (0.251090) | 1.841153 / 1.492716 (0.348436) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.021826 / 0.018006 (0.003820) | 0.494279 / 0.000490 (0.493789) | 0.011258 / 0.000200 (0.011058) | 0.000382 / 0.000054 (0.000328) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031651 / 0.037411 (-0.005760) | 0.132871 / 0.014526 (0.118345) | 0.137388 / 0.176557 (-0.039169) | 0.205808 / 0.737135 (-0.531327) | 0.147585 / 0.296338 (-0.148753) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474483 / 0.215209 (0.259274) | 4.726568 / 2.077655 (2.648914) | 2.136172 / 1.504120 (0.632052) | 1.918364 / 1.541195 (0.377169) | 2.068794 / 1.468490 
(0.600304) | 0.836481 / 4.584777 (-3.748296) | 4.550583 / 3.745712 (0.804871) | 2.456287 / 5.269862 (-2.813574) | 1.563127 / 4.565676 (-3.002550) | 0.102541 / 0.424275 (-0.321734) | 0.014492 / 0.007607 (0.006885) | 0.598572 / 0.226044 (0.372528) | 5.953321 / 2.268929 (3.684392) | 2.695210 / 55.444624 (-52.749414) | 2.294317 / 6.876477 (-4.582160) | 2.456585 / 2.142072 (0.314513) | 1.019907 / 4.805227 (-3.785320) | 0.201225 / 6.500664 (-6.299439) | 0.077113 / 0.075469 (0.001644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.497662 / 1.841788 (-0.344126) | 18.216941 / 8.074308 (10.142633) | 17.016638 / 10.191392 (6.825246) | 0.193271 / 0.680424 (-0.487153) | 0.020440 / 0.534201 (-0.513761) | 0.509361 / 0.579283 (-0.069922) | 0.513389 / 0.434364 (0.079025) | 0.622266 / 0.540337 (0.081928) | 0.741733 / 1.386936 (-0.645203) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008641 / 0.011353 (-0.002712) | 0.005792 / 0.011008 (-0.005216) | 0.086020 / 0.038508 (0.047512) | 0.040005 / 0.023109 (0.016896) | 0.435120 / 0.275898 (0.159222) | 0.480269 / 0.323480 (0.156789) | 0.006669 / 0.007986 (-0.001317) | 0.006039 / 0.004328 (0.001711) | 0.083468 / 0.004250 (0.079218) | 0.057700 / 0.037052 (0.020648) | 0.416418 / 0.258489 (0.157929) | 0.508286 / 0.293841 (0.214445) | 0.041198 / 0.128546 (-0.087349) | 0.014346 / 0.075646 (-0.061301) | 0.100553 / 0.419271 (-0.318718) | 0.054201 / 0.043533 (0.010668) | 0.438232 / 0.255139 (0.183093) | 0.454707 / 0.283200 (0.171508) | 0.118332 / 0.141683 (-0.023351) | 1.657607 / 1.452155 (0.205452) | 1.825510 / 1.492716 (0.332794) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236156 / 0.018006 (0.218150) | 0.487612 / 0.000490 (0.487123) | 0.005747 / 0.000200 (0.005547) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035127 / 0.037411 (-0.002284) | 0.132013 / 0.014526 (0.117487) | 0.142316 / 0.176557 (-0.034241) | 0.198627 / 0.737135 (-0.538508) | 0.145454 / 0.296338 (-0.150885) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513041 / 0.215209 (0.297832) | 5.066197 / 2.077655 (2.988542) | 2.508779 / 1.504120 (1.004659) | 2.273901 / 1.541195 (0.732706) | 2.364958 / 1.468490 (0.896468) | 0.811367 / 4.584777 (-3.773410) | 4.504744 / 3.745712 (0.759032) | 2.499811 / 5.269862 (-2.770050) | 1.583349 / 4.565676 (-2.982328) | 0.101701 / 0.424275 (-0.322574) | 0.014379 / 0.007607 (0.006772) | 0.669506 / 0.226044 (0.443462) | 6.556702 / 2.268929 (4.287774) | 3.123457 / 55.444624 (-52.321167) | 2.731997 / 6.876477 (-4.144480) | 2.862866 / 2.142072 (0.720794) | 0.992956 / 4.805227 (-3.812271) | 0.200473 / 6.500664 (-6.300191) | 0.078780 / 0.075469 (0.003311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.540718 / 1.841788 (-0.301070) | 18.749344 / 8.074308 (10.675036) | 15.648983 / 10.191392 (5.457591) | 0.174089 / 0.680424 (-0.506335) | 0.020441 / 0.534201 (-0.513760) | 0.503742 / 0.579283 (-0.075541) | 0.500648 / 0.434364 (0.066284) | 0.598558 / 0.540337 (0.058221) | 0.712093 / 1.386936 (-0.674843) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#621554280f964b5fe87ece1a46b794406d943b1e \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009940 / 0.011353 (-0.001412) | 0.006193 / 0.011008 (-0.004815) | 0.125874 / 0.038508 (0.087366) | 0.038664 / 0.023109 (0.015555) | 0.380013 / 0.275898 (0.104115) | 0.430152 / 0.323480 (0.106672) | 0.006961 / 0.007986 (-0.001025) | 0.004749 / 0.004328 (0.000420) | 0.099743 / 0.004250 (0.095492) | 0.052349 / 0.037052 (0.015297) | 0.433354 / 0.258489 (0.174865) | 0.436273 / 0.293841 (0.142433) | 0.053929 / 0.128546 (-0.074617) | 0.019369 / 0.075646 (-0.056278) | 0.421783 / 0.419271 (0.002511) | 0.062746 / 0.043533 (0.019213) | 0.377225 / 0.255139 (0.122086) | 0.413708 / 0.283200 (0.130508) | 0.111371 / 0.141683 (-0.030312) | 1.819166 / 1.452155 (0.367011) | 1.974527 / 1.492716 (0.481810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090664 / 0.018006 (0.072658) | 0.566166 / 0.000490 (0.565676) | 0.079305 / 0.000200 (0.079105) | 0.000755 / 0.000054 (0.000700) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029720 / 0.037411 (-0.007691) | 0.126030 / 0.014526 (0.111504) | 0.146020 / 0.176557 (-0.030537) | 0.210354 / 0.737135 (-0.526781) | 0.149428 / 0.296338 (-0.146910) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.624371 / 0.215209 (0.409162) | 6.332839 / 2.077655 (4.255184) | 2.547784 / 1.504120 (1.043664) | 2.150508 / 1.541195 (0.609313) | 2.240816 / 1.468490 
(0.772326) | 1.271131 / 4.584777 (-3.313646) | 5.642726 / 3.745712 (1.897014) | 3.212988 / 5.269862 (-2.056874) | 2.258123 / 4.565676 (-2.307553) | 0.149477 / 0.424275 (-0.274798) | 0.014603 / 0.007607 (0.006996) | 0.782155 / 0.226044 (0.556111) | 7.855191 / 2.268929 (5.586262) | 3.308638 / 55.444624 (-52.135986) | 2.548142 / 6.876477 (-4.328335) | 2.627374 / 2.142072 (0.485301) | 1.515170 / 4.805227 (-3.290058) | 0.262479 / 6.500664 (-6.238185) | 0.082181 / 0.075469 (0.006712) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.573169 / 1.841788 (-0.268618) | 18.105719 / 8.074308 (10.031411) | 22.015179 / 10.191392 (11.823787) | 0.254678 / 0.680424 (-0.425746) | 0.027098 / 0.534201 (-0.507103) | 0.578045 / 0.579283 (-0.001238) | 0.647130 / 0.434364 (0.212766) | 0.650522 / 0.540337 (0.110185) | 0.797713 / 1.386936 (-0.589223) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010376 / 0.011353 (-0.000977) | 0.005990 / 0.011008 (-0.005018) | 0.097144 / 0.038508 (0.058635) | 0.038205 / 0.023109 (0.015096) | 0.468347 / 0.275898 (0.192449) | 0.497646 / 0.323480 (0.174166) | 0.006916 / 0.007986 (-0.001069) | 0.004760 / 0.004328 (0.000431) | 0.109838 / 0.004250 (0.105587) | 0.048321 / 0.037052 (0.011269) | 0.437458 / 0.258489 (0.178969) | 0.534864 / 0.293841 (0.241023) | 0.053655 / 0.128546 (-0.074892) | 0.021915 / 0.075646 (-0.053732) | 0.121047 / 0.419271 (-0.298224) | 0.059694 / 0.043533 (0.016162) | 0.466937 / 0.255139 (0.211798) | 0.482030 / 0.283200 (0.198831) | 0.117458 / 0.141683 (-0.024225) | 1.835551 / 1.452155 (0.383396) | 1.965748 / 1.492716 (0.473031) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234885 / 0.018006 (0.216879) | 0.529925 / 0.000490 (0.529436) | 0.000484 / 0.000200 (0.000284) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030959 / 0.037411 (-0.006453) | 0.128905 / 0.014526 (0.114379) | 0.136913 / 0.176557 (-0.039643) | 0.195133 / 0.737135 (-0.542002) | 0.147929 / 0.296338 (-0.148410) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.715661 / 0.215209 (0.500451) | 6.994125 / 2.077655 (4.916470) | 3.033178 / 1.504120 (1.529058) | 2.663709 / 1.541195 (1.122515) | 2.707558 / 1.468490 (1.239068) | 1.316195 / 4.584777 (-3.268582) | 5.688264 / 3.745712 (1.942552) | 3.260897 / 5.269862 (-2.008964) | 2.134985 / 4.565676 (-2.430691) | 0.153945 / 0.424275 (-0.270330) | 0.014727 / 0.007607 (0.007119) | 0.911339 / 0.226044 (0.685294) | 8.902640 / 2.268929 (6.633711) | 3.806606 / 55.444624 (-51.638018) | 3.052238 / 6.876477 (-3.824238) | 3.046945 / 2.142072 (0.904873) | 1.559837 / 4.805227 (-3.245390) | 0.272276 / 6.500664 (-6.228388) | 0.087728 / 0.075469 (0.012259) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.712691 / 1.841788 (-0.129097) | 18.127575 / 8.074308 (10.053267) | 19.734063 / 10.191392 (9.542671) | 0.235006 / 0.680424 (-0.445418) | 0.027581 / 0.534201 (-0.506620) | 0.551080 / 0.579283 (-0.028203) | 0.608564 / 0.434364 (0.174200) | 0.636578 / 0.540337 (0.096241) | 0.732374 / 1.386936 (-0.654562) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#36911ca06d9c4e37ce36da6228cb3af8b40c2add \"CML watermark\")\n",
"Looks good in testing - this should be ready for review! cc @lhoestq @massquantity",
"Looks good to me, though i doubt that very few people will upgrade to TF >= 2.9 unless their memory is full:)",
"Is it more efficient than using numpy to shuffle as in multiprocessing ? Why not use the same strategy ?",
"Good question, honestly! The NumPy strategy works fine, but requires us to handle multiple processes instead of doing everything in `tf.data`. We could just scrap this entire code path and always use the multiprocessing NumPy approach, but I think single-threaded throughput would be lower if we did that. If you prefer it for code simplicity, though, I can do that.\r\n\r\nIn the longer term, I'm hoping that `tf.data` gets native support for our data structures and we can transition the whole pipeline to pure `tf.data`, but that still hasn't happened 🫠",
"And @massquantity TF 2.13 is going to release in a couple of days, so I hope most users are at least on TF 2.9 by now!",
"Unless there is a big gap in performance I think code simplicity would be appreciated ^^",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008638 / 0.011353 (-0.002715) | 0.006013 / 0.011008 (-0.004995) | 0.116456 / 0.038508 (0.077948) | 0.040419 / 0.023109 (0.017310) | 0.418374 / 0.275898 (0.142476) | 0.447693 / 0.323480 (0.124213) | 0.007002 / 0.007986 (-0.000984) | 0.006175 / 0.004328 (0.001847) | 0.087801 / 0.004250 (0.083550) | 0.051980 / 0.037052 (0.014928) | 0.393275 / 0.258489 (0.134786) | 0.449601 / 0.293841 (0.155760) | 0.041670 / 0.128546 (-0.086876) | 0.014396 / 0.075646 (-0.061251) | 0.399175 / 0.419271 (-0.020096) | 0.060635 / 0.043533 (0.017102) | 0.391449 / 0.255139 (0.136310) | 0.420713 / 0.283200 (0.137513) | 0.121369 / 0.141683 (-0.020314) | 1.692630 / 1.452155 (0.240475) | 1.815526 / 1.492716 (0.322810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244321 / 0.018006 (0.226315) | 0.487947 / 0.000490 (0.487458) | 0.004563 / 0.000200 (0.004363) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033425 / 0.037411 (-0.003987) | 0.134458 / 0.014526 (0.119932) | 0.138810 / 0.176557 (-0.037746) | 0.208871 / 0.737135 (-0.528264) | 0.147964 / 0.296338 (-0.148374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483347 / 0.215209 (0.268138) | 4.799550 / 2.077655 (2.721895) | 2.174149 / 1.504120 (0.670029) | 1.943276 / 1.541195 (0.402081) | 2.010884 / 1.468490 
(0.542394) | 0.832030 / 4.584777 (-3.752747) | 4.716713 / 3.745712 (0.971001) | 4.615810 / 5.269862 (-0.654052) | 2.379600 / 4.565676 (-2.186077) | 0.103560 / 0.424275 (-0.320715) | 0.014683 / 0.007607 (0.007076) | 0.598558 / 0.226044 (0.372514) | 5.999126 / 2.268929 (3.730197) | 2.677819 / 55.444624 (-52.766805) | 2.320838 / 6.876477 (-4.555639) | 2.503684 / 2.142072 (0.361611) | 1.016459 / 4.805227 (-3.788769) | 0.201672 / 6.500664 (-6.298992) | 0.079310 / 0.075469 (0.003841) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.446374 / 1.841788 (-0.395413) | 19.219310 / 8.074308 (11.145002) | 17.294665 / 10.191392 (7.103273) | 0.246115 / 0.680424 (-0.434309) | 0.021406 / 0.534201 (-0.512795) | 0.524084 / 0.579283 (-0.055200) | 0.511254 / 0.434364 (0.076890) | 0.621304 / 0.540337 (0.080966) | 0.727088 / 1.386936 (-0.659848) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008907 / 0.011353 (-0.002446) | 0.006165 / 0.011008 (-0.004843) | 0.090786 / 0.038508 (0.052278) | 0.040893 / 0.023109 (0.017784) | 0.451252 / 0.275898 (0.175354) | 0.477811 / 0.323480 (0.154331) | 0.007418 / 0.007986 (-0.000568) | 0.005789 / 0.004328 (0.001461) | 0.087422 / 0.004250 (0.083171) | 0.061800 / 0.037052 (0.024748) | 0.459085 / 0.258489 (0.200596) | 0.488897 / 0.293841 (0.195056) | 0.048157 / 0.128546 (-0.080389) | 0.014676 / 0.075646 (-0.060970) | 0.104372 / 0.419271 (-0.314900) | 0.058066 / 0.043533 (0.014534) | 0.446131 / 0.255139 (0.190992) | 0.460428 / 0.283200 (0.177228) | 0.128492 / 0.141683 (-0.013191) | 1.811419 / 1.452155 (0.359265) | 1.894781 / 1.492716 (0.402064) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220527 / 0.018006 (0.202520) | 0.487663 / 0.000490 (0.487173) | 0.003864 / 0.000200 (0.003664) | 0.000162 / 0.000054 (0.000107) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036354 / 0.037411 (-0.001057) | 0.140469 / 0.014526 (0.125944) | 0.149990 / 0.176557 (-0.026566) | 0.212369 / 0.737135 (-0.524766) | 0.154000 / 0.296338 (-0.142338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.514172 / 0.215209 (0.298963) | 5.129247 / 2.077655 (3.051593) | 2.536773 / 1.504120 (1.032653) | 2.317253 / 1.541195 (0.776058) | 2.424066 / 1.468490 (0.955576) | 0.836160 / 4.584777 (-3.748617) | 4.906235 / 3.745712 (1.160523) | 4.431395 / 5.269862 (-0.838467) | 2.332845 / 4.565676 (-2.232831) | 0.102867 / 0.424275 (-0.321409) | 0.014851 / 0.007607 (0.007244) | 0.644104 / 0.226044 (0.418060) | 6.415847 / 2.268929 (4.146918) | 3.186984 / 55.444624 (-52.257641) | 2.774125 / 6.876477 (-4.102352) | 2.848045 / 2.142072 (0.705972) | 1.018757 / 4.805227 (-3.786470) | 0.212333 / 6.500664 (-6.288331) | 0.079405 / 0.075469 (0.003936) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.748375 / 1.841788 (-0.093412) | 19.733829 / 8.074308 (11.659521) | 15.766665 / 10.191392 (5.575273) | 0.192087 / 0.680424 (-0.488337) | 0.027641 / 0.534201 (-0.506560) | 0.504101 / 0.579283 (-0.075182) | 0.493815 / 0.434364 (0.059451) | 0.583247 / 0.540337 (0.042910) | 0.697432 / 1.386936 (-0.689504) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#95c177e02ca20bf7bb3ed8f185d2d6f05a5e5f30 \"CML watermark\")\n",
"Hi @lhoestq, I tried moving everything to the NumPy path but ran into issues - the `SharedMemory` constructs it depends on were only added in Python 3.8. As a result, if we move everything to that path then `to_tf_dataset` does not work on older Python versions.\r\n\r\nFor now, how do you feel about reverting and using my original solution, which has fallbacks for all versions of Python and TensorFlow? Once our minimum versions pass Python 3.8 or TF 2.9 we can remove the older code paths.",
"Gentle ping on this question @lhoestq!",
"Ah yes indeed. Feel free to revert and add comments to explain why you needed to have a different approach for single process",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008395 / 0.011353 (-0.002958) | 0.005773 / 0.011008 (-0.005235) | 0.115702 / 0.038508 (0.077194) | 0.039897 / 0.023109 (0.016788) | 0.483140 / 0.275898 (0.207242) | 0.531288 / 0.323480 (0.207808) | 0.006739 / 0.007986 (-0.001246) | 0.004419 / 0.004328 (0.000090) | 0.086374 / 0.004250 (0.082124) | 0.056498 / 0.037052 (0.019446) | 0.491589 / 0.258489 (0.233100) | 0.556366 / 0.293841 (0.262525) | 0.041366 / 0.128546 (-0.087181) | 0.014373 / 0.075646 (-0.061274) | 0.395504 / 0.419271 (-0.023767) | 0.094382 / 0.043533 (0.050849) | 0.483000 / 0.255139 (0.227861) | 0.522693 / 0.283200 (0.239494) | 0.138804 / 0.141683 (-0.002879) | 1.719563 / 1.452155 (0.267409) | 1.853470 / 1.492716 (0.360753) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235616 / 0.018006 (0.217610) | 0.483267 / 0.000490 (0.482777) | 0.008663 / 0.000200 (0.008463) | 0.000401 / 0.000054 (0.000347) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033124 / 0.037411 (-0.004287) | 0.128821 / 0.014526 (0.114295) | 0.138910 / 0.176557 (-0.037647) | 0.213570 / 0.737135 (-0.523566) | 0.146646 / 0.296338 (-0.149693) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479998 / 0.215209 (0.264789) | 4.772325 / 2.077655 (2.694670) | 2.228424 / 1.504120 (0.724304) | 2.000915 / 1.541195 (0.459721) | 2.105799 / 1.468490 
(0.637309) | 0.824235 / 4.584777 (-3.760542) | 4.511902 / 3.745712 (0.766189) | 4.723073 / 5.269862 (-0.546789) | 2.333442 / 4.565676 (-2.232235) | 0.101161 / 0.424275 (-0.323114) | 0.014403 / 0.007607 (0.006796) | 0.596395 / 0.226044 (0.370351) | 5.961046 / 2.268929 (3.692117) | 2.746679 / 55.444624 (-52.697946) | 2.352085 / 6.876477 (-4.524392) | 2.609812 / 2.142072 (0.467740) | 0.996950 / 4.805227 (-3.808277) | 0.197923 / 6.500664 (-6.302741) | 0.075546 / 0.075469 (0.000077) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.529896 / 1.841788 (-0.311892) | 18.183887 / 8.074308 (10.109578) | 16.352332 / 10.191392 (6.160940) | 0.213504 / 0.680424 (-0.466920) | 0.020388 / 0.534201 (-0.513813) | 0.497832 / 0.579283 (-0.081451) | 0.495477 / 0.434364 (0.061113) | 0.585984 / 0.540337 (0.045647) | 0.688726 / 1.386936 (-0.698210) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008422 / 0.011353 (-0.002931) | 0.005876 / 0.011008 (-0.005132) | 0.089310 / 0.038508 (0.050802) | 0.039769 / 0.023109 (0.016660) | 0.425279 / 0.275898 (0.149381) | 0.470818 / 0.323480 (0.147338) | 0.006519 / 0.007986 (-0.001467) | 0.006276 / 0.004328 (0.001948) | 0.085753 / 0.004250 (0.081503) | 0.053867 / 0.037052 (0.016815) | 0.429193 / 0.258489 (0.170704) | 0.480278 / 0.293841 (0.186437) | 0.040657 / 0.128546 (-0.087889) | 0.014055 / 0.075646 (-0.061591) | 0.101422 / 0.419271 (-0.317849) | 0.053803 / 0.043533 (0.010271) | 0.428348 / 0.255139 (0.173209) | 0.452193 / 0.283200 (0.168994) | 0.124914 / 0.141683 (-0.016769) | 1.750122 / 1.452155 (0.297968) | 1.850875 / 1.492716 (0.358159) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249958 / 0.018006 (0.231952) | 0.485183 / 0.000490 (0.484694) | 0.000472 / 0.000200 (0.000272) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034563 / 0.037411 (-0.002848) | 0.135565 / 0.014526 (0.121039) | 0.143271 / 0.176557 (-0.033285) | 0.199080 / 0.737135 (-0.538056) | 0.149336 / 0.296338 (-0.147003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.526170 / 0.215209 (0.310961) | 5.270960 / 2.077655 (3.193305) | 2.664585 / 1.504120 (1.160465) | 2.440027 / 1.541195 (0.898832) | 2.612764 / 1.468490 (1.144274) | 0.828965 / 4.584777 (-3.755812) | 4.769983 / 3.745712 (1.024271) | 2.441962 / 5.269862 (-2.827900) | 1.549032 / 4.565676 (-3.016644) | 0.100851 / 0.424275 (-0.323424) | 0.014425 / 0.007607 (0.006818) | 0.640908 / 0.226044 (0.414864) | 6.399041 / 2.268929 (4.130113) | 3.242424 / 55.444624 (-52.202200) | 2.836317 / 6.876477 (-4.040160) | 2.933010 / 2.142072 (0.790938) | 1.002277 / 4.805227 (-3.802950) | 0.201247 / 6.500664 (-6.299417) | 0.078777 / 0.075469 (0.003308) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.620415 / 1.841788 (-0.221373) | 19.153631 / 8.074308 (11.079323) | 16.744068 / 10.191392 (6.552676) | 0.167327 / 0.680424 (-0.513097) | 0.020186 / 0.534201 (-0.514015) | 0.503683 / 0.579283 (-0.075600) | 0.500051 / 0.434364 (0.065687) | 0.587188 / 0.540337 (0.046850) | 0.699975 / 1.386936 (-0.686961) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#291d7ffa695edb4b4e818c783b16d3466246cd56 \"CML watermark\")\n",
"This is probably ready, but likely conflicts with #5883. I'll wait for that PR to be merged and then rebase and merge this one.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008387 / 0.011353 (-0.002965) | 0.005824 / 0.011008 (-0.005184) | 0.117721 / 0.038508 (0.079213) | 0.040420 / 0.023109 (0.017311) | 0.404961 / 0.275898 (0.129063) | 0.426695 / 0.323480 (0.103215) | 0.006634 / 0.007986 (-0.001352) | 0.006033 / 0.004328 (0.001705) | 0.088652 / 0.004250 (0.084402) | 0.048075 / 0.037052 (0.011022) | 0.400683 / 0.258489 (0.142194) | 0.432489 / 0.293841 (0.138648) | 0.042065 / 0.128546 (-0.086482) | 0.014071 / 0.075646 (-0.061575) | 0.399398 / 0.419271 (-0.019873) | 0.066034 / 0.043533 (0.022501) | 0.400056 / 0.255139 (0.144918) | 0.421130 / 0.283200 (0.137930) | 0.119721 / 0.141683 (-0.021962) | 1.752166 / 1.452155 (0.300011) | 1.820161 / 1.492716 (0.327444) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244264 / 0.018006 (0.226258) | 0.480882 / 0.000490 (0.480392) | 0.005604 / 0.000200 (0.005404) | 0.000175 / 0.000054 (0.000121) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032397 / 0.037411 (-0.005015) | 0.131632 / 0.014526 (0.117106) | 0.139765 / 0.176557 (-0.036792) | 0.213135 / 0.737135 (-0.524000) | 0.147891 / 0.296338 (-0.148447) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474534 / 0.215209 (0.259325) | 4.730424 / 2.077655 (2.652770) | 2.163706 / 1.504120 (0.659586) | 1.936051 / 1.541195 (0.394857) | 2.012185 / 1.468490 
(0.543695) | 0.826583 / 4.584777 (-3.758194) | 4.921494 / 3.745712 (1.175782) | 2.431401 / 5.269862 (-2.838460) | 1.566020 / 4.565676 (-2.999656) | 0.101255 / 0.424275 (-0.323020) | 0.014553 / 0.007607 (0.006946) | 0.608301 / 0.226044 (0.382256) | 6.089801 / 2.268929 (3.820873) | 2.691986 / 55.444624 (-52.752638) | 2.296498 / 6.876477 (-4.579979) | 2.455388 / 2.142072 (0.313315) | 0.984342 / 4.805227 (-3.820885) | 0.200447 / 6.500664 (-6.300217) | 0.077602 / 0.075469 (0.002133) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.445067 / 1.841788 (-0.396721) | 18.588670 / 8.074308 (10.514362) | 16.950216 / 10.191392 (6.758824) | 0.169688 / 0.680424 (-0.510736) | 0.020544 / 0.534201 (-0.513657) | 0.508506 / 0.579283 (-0.070777) | 0.516218 / 0.434364 (0.081854) | 0.646072 / 0.540337 (0.105734) | 0.763227 / 1.386936 (-0.623709) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008816 / 0.011353 (-0.002537) | 0.006016 / 0.011008 (-0.004992) | 0.090946 / 0.038508 (0.052438) | 0.040189 / 0.023109 (0.017080) | 0.446723 / 0.275898 (0.170825) | 0.494633 / 0.323480 (0.171153) | 0.007206 / 0.007986 (-0.000779) | 0.004508 / 0.004328 (0.000180) | 0.088477 / 0.004250 (0.084226) | 0.055587 / 0.037052 (0.018535) | 0.445349 / 0.258489 (0.186860) | 0.504940 / 0.293841 (0.211099) | 0.041976 / 0.128546 (-0.086570) | 0.014296 / 0.075646 (-0.061351) | 0.102835 / 0.419271 (-0.316436) | 0.054786 / 0.043533 (0.011253) | 0.444789 / 0.255139 (0.189651) | 0.472306 / 0.283200 (0.189106) | 0.123365 / 0.141683 (-0.018318) | 1.725803 / 1.452155 (0.273648) | 1.832216 / 1.492716 (0.339500) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252680 / 0.018006 (0.234674) | 0.476719 / 0.000490 (0.476229) | 0.000461 / 0.000200 (0.000261) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035961 / 0.037411 (-0.001450) | 0.135399 / 0.014526 (0.120873) | 0.147549 / 0.176557 (-0.029007) | 0.207468 / 0.737135 (-0.529667) | 0.151591 / 0.296338 (-0.144747) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.528143 / 0.215209 (0.312934) | 5.270766 / 2.077655 (3.193111) | 2.675644 / 1.504120 (1.171524) | 2.472855 / 1.541195 (0.931660) | 2.636020 / 1.468490 (1.167530) | 0.841325 / 4.584777 (-3.743452) | 4.702290 / 3.745712 (0.956578) | 2.523537 / 5.269862 (-2.746325) | 1.595617 / 4.565676 (-2.970059) | 0.102095 / 0.424275 (-0.322180) | 0.014568 / 0.007607 (0.006961) | 0.652090 / 0.226044 (0.426046) | 6.503086 / 2.268929 (4.234158) | 3.277025 / 55.444624 (-52.167599) | 2.931264 / 6.876477 (-3.945213) | 3.021667 / 2.142072 (0.879594) | 1.002560 / 4.805227 (-3.802668) | 0.202621 / 6.500664 (-6.298043) | 0.080583 / 0.075469 (0.005114) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.639281 / 1.841788 (-0.202507) | 18.911529 / 8.074308 (10.837220) | 17.082795 / 10.191392 (6.891403) | 0.179456 / 0.680424 (-0.500968) | 0.021740 / 0.534201 (-0.512460) | 0.526426 / 0.579283 (-0.052857) | 0.535083 / 0.434364 (0.100719) | 0.583304 / 0.540337 (0.042967) | 0.696733 / 1.386936 (-0.690203) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#757f19283f22eeb3e9aedefd82abc0aa2235f797 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006823 / 0.011353 (-0.004530) | 0.004847 / 0.011008 (-0.006161) | 0.096038 / 0.038508 (0.057530) | 0.033037 / 0.023109 (0.009928) | 0.298379 / 0.275898 (0.022481) | 0.333319 / 0.323480 (0.009839) | 0.005343 / 0.007986 (-0.002643) | 0.003863 / 0.004328 (-0.000465) | 0.072928 / 0.004250 (0.068678) | 0.040898 / 0.037052 (0.003846) | 0.303116 / 0.258489 (0.044627) | 0.334021 / 0.293841 (0.040181) | 0.034780 / 0.128546 (-0.093767) | 0.011978 / 0.075646 (-0.063668) | 0.331642 / 0.419271 (-0.087629) | 0.052729 / 0.043533 (0.009196) | 0.298586 / 0.255139 (0.043447) | 0.319296 / 0.283200 (0.036097) | 0.097711 / 0.141683 (-0.043972) | 1.416899 / 1.452155 (-0.035256) | 1.546008 / 1.492716 (0.053292) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234303 / 0.018006 (0.216296) | 0.492767 / 0.000490 (0.492278) | 0.004935 / 0.000200 (0.004736) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030617 / 0.037411 (-0.006795) | 0.121203 / 0.014526 (0.106677) | 0.126677 / 0.176557 (-0.049879) | 0.186379 / 0.737135 (-0.550756) | 0.129849 / 0.296338 (-0.166490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416324 / 0.215209 (0.201115) | 4.135563 / 2.077655 (2.057908) | 1.976182 / 1.504120 (0.472062) | 1.807611 / 1.541195 (0.266416) | 1.886282 / 1.468490 
(0.417792) | 0.713006 / 4.584777 (-3.871771) | 3.899205 / 3.745712 (0.153493) | 2.283427 / 5.269862 (-2.986435) | 1.543088 / 4.565676 (-3.022589) | 0.086189 / 0.424275 (-0.338087) | 0.012908 / 0.007607 (0.005301) | 0.516156 / 0.226044 (0.290112) | 5.144199 / 2.268929 (2.875271) | 2.460142 / 55.444624 (-52.984482) | 2.209054 / 6.876477 (-4.667423) | 2.325277 / 2.142072 (0.183204) | 0.849890 / 4.805227 (-3.955337) | 0.173687 / 6.500664 (-6.326977) | 0.070178 / 0.075469 (-0.005291) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.241790 / 1.841788 (-0.599997) | 16.047257 / 8.074308 (7.972949) | 15.774146 / 10.191392 (5.582754) | 0.145871 / 0.680424 (-0.534553) | 0.018106 / 0.534201 (-0.516095) | 0.433642 / 0.579283 (-0.145641) | 0.425311 / 0.434364 (-0.009053) | 0.533963 / 0.540337 (-0.006375) | 0.638786 / 1.386936 (-0.748151) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007242 / 0.011353 (-0.004111) | 0.005599 / 0.011008 (-0.005410) | 0.073443 / 0.038508 (0.034935) | 0.033764 / 0.023109 (0.010655) | 0.365990 / 0.275898 (0.090092) | 0.392943 / 0.323480 (0.069463) | 0.005987 / 0.007986 (-0.001999) | 0.004312 / 0.004328 (-0.000016) | 0.072831 / 0.004250 (0.068580) | 0.048854 / 0.037052 (0.011802) | 0.362477 / 0.258489 (0.103988) | 0.399993 / 0.293841 (0.106152) | 0.035602 / 0.128546 (-0.092944) | 0.012445 / 0.075646 (-0.063202) | 0.085768 / 0.419271 (-0.333504) | 0.048544 / 0.043533 (0.005011) | 0.362246 / 0.255139 (0.107107) | 0.388753 / 0.283200 (0.105554) | 0.109829 / 0.141683 (-0.031854) | 1.546881 / 1.452155 (0.094726) | 1.619454 / 1.492716 (0.126737) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189926 / 0.018006 (0.171920) | 0.447936 / 0.000490 (0.447446) | 0.002354 / 0.000200 (0.002155) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031740 / 0.037411 (-0.005671) | 0.122595 / 0.014526 (0.108069) | 0.128389 / 0.176557 (-0.048168) | 0.180570 / 0.737135 (-0.556566) | 0.132939 / 0.296338 (-0.163399) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425073 / 0.215209 (0.209863) | 4.238964 / 2.077655 (2.161309) | 2.095116 / 1.504120 (0.590996) | 1.913925 / 1.541195 (0.372730) | 2.024669 / 1.468490 (0.556179) | 0.699172 / 4.584777 (-3.885605) | 3.845807 / 3.745712 (0.100094) | 2.167502 / 5.269862 (-3.102360) | 1.375267 / 4.565676 (-3.190410) | 0.086739 / 0.424275 (-0.337536) | 0.012198 / 0.007607 (0.004591) | 0.525975 / 0.226044 (0.299931) | 5.249449 / 2.268929 (2.980521) | 2.550565 / 55.444624 (-52.894060) | 2.257557 / 6.876477 (-4.618920) | 2.298936 / 2.142072 (0.156863) | 0.850295 / 4.805227 (-3.954932) | 0.170506 / 6.500664 (-6.330158) | 0.065659 / 0.075469 (-0.009810) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330556 / 1.841788 (-0.511231) | 16.920203 / 8.074308 (8.845894) | 15.966739 / 10.191392 (5.775347) | 0.164000 / 0.680424 (-0.516424) | 0.018211 / 0.534201 (-0.515990) | 0.436253 / 0.579283 (-0.143030) | 0.449666 / 0.434364 (0.015302) | 0.522287 / 0.540337 (-0.018050) | 0.615944 / 1.386936 (-0.770992) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#824f96c11a02b3817d6b1bf4dfed0abab27777f0 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007273 / 0.011353 (-0.004080) | 0.005198 / 0.011008 (-0.005810) | 0.114362 / 0.038508 (0.075854) | 0.031113 / 0.023109 (0.008003) | 0.378568 / 0.275898 (0.102670) | 0.441695 / 0.323480 (0.118215) | 0.006037 / 0.007986 (-0.001949) | 0.005102 / 0.004328 (0.000774) | 0.098682 / 0.004250 (0.094432) | 0.042797 / 0.037052 (0.005745) | 0.360028 / 0.258489 (0.101539) | 0.435757 / 0.293841 (0.141916) | 0.041438 / 0.128546 (-0.087109) | 0.013728 / 0.075646 (-0.061918) | 0.376154 / 0.419271 (-0.043117) | 0.075324 / 0.043533 (0.031791) | 0.357221 / 0.255139 (0.102082) | 0.416378 / 0.283200 (0.133178) | 0.110707 / 0.141683 (-0.030975) | 1.603215 / 1.452155 (0.151061) | 1.736843 / 1.492716 (0.244127) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249479 / 0.018006 (0.231473) | 0.513205 / 0.000490 (0.512715) | 0.003856 / 0.000200 (0.003656) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027750 / 0.037411 (-0.009661) | 0.105437 / 0.014526 (0.090911) | 0.115903 / 0.176557 (-0.060653) | 0.179662 / 0.737135 (-0.557474) | 0.116305 / 0.296338 (-0.180033) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.551681 / 0.215209 (0.336472) | 5.544590 / 2.077655 (3.466935) | 2.193933 / 1.504120 (0.689813) | 1.898395 / 1.541195 (0.357201) | 1.877288 / 1.468490 
(0.408798) | 0.858097 / 4.584777 (-3.726680) | 4.920982 / 3.745712 (1.175270) | 2.478220 / 5.269862 (-2.791641) | 1.779608 / 4.565676 (-2.786069) | 0.101321 / 0.424275 (-0.322954) | 0.012627 / 0.007607 (0.005020) | 0.674865 / 0.226044 (0.448820) | 6.808224 / 2.268929 (4.539295) | 2.822466 / 55.444624 (-52.622159) | 2.170379 / 6.876477 (-4.706098) | 2.224278 / 2.142072 (0.082205) | 1.032763 / 4.805227 (-3.772464) | 0.198851 / 6.500664 (-6.301813) | 0.069249 / 0.075469 (-0.006220) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.425987 / 1.841788 (-0.415801) | 16.212942 / 8.074308 (8.138634) | 18.945770 / 10.191392 (8.754378) | 0.192901 / 0.680424 (-0.487522) | 0.025343 / 0.534201 (-0.508858) | 0.465441 / 0.579283 (-0.113842) | 0.540966 / 0.434364 (0.106602) | 0.576736 / 0.540337 (0.036399) | 0.675717 / 1.386936 (-0.711219) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007426 / 0.011353 (-0.003927) | 0.005023 / 0.011008 (-0.005985) | 0.085083 / 0.038508 (0.046575) | 0.030559 / 0.023109 (0.007449) | 0.398461 / 0.275898 (0.122563) | 0.418998 / 0.323480 (0.095518) | 0.006697 / 0.007986 (-0.001288) | 0.004665 / 0.004328 (0.000337) | 0.087724 / 0.004250 (0.083473) | 0.045799 / 0.037052 (0.008747) | 0.395165 / 0.258489 (0.136676) | 0.430172 / 0.293841 (0.136331) | 0.040486 / 0.128546 (-0.088060) | 0.014237 / 0.075646 (-0.061409) | 0.099429 / 0.419271 (-0.319843) | 0.056006 / 0.043533 (0.012473) | 0.389046 / 0.255139 (0.133907) | 0.419559 / 0.283200 (0.136359) | 0.108550 / 0.141683 (-0.033132) | 1.614052 / 1.452155 (0.161897) | 1.677785 / 1.492716 (0.185069) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202178 / 0.018006 (0.184172) | 0.486365 / 0.000490 (0.485875) | 0.003844 / 0.000200 (0.003644) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027963 / 0.037411 (-0.009449) | 0.110399 / 0.014526 (0.095873) | 0.122266 / 0.176557 (-0.054291) | 0.178551 / 0.737135 (-0.558585) | 0.129259 / 0.296338 (-0.167080) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.604178 / 0.215209 (0.388969) | 6.135943 / 2.077655 (4.058288) | 2.547576 / 1.504120 (1.043456) | 2.262470 / 1.541195 (0.721276) | 2.275402 / 1.468490 (0.806912) | 0.878804 / 4.584777 (-3.705972) | 5.152200 / 3.745712 (1.406488) | 2.553715 / 5.269862 (-2.716147) | 1.580959 / 4.565676 (-2.984717) | 0.107895 / 0.424275 (-0.316380) | 0.012751 / 0.007607 (0.005143) | 0.770678 / 0.226044 (0.544633) | 7.744303 / 2.268929 (5.475374) | 3.342037 / 55.444624 (-52.102588) | 2.756848 / 6.876477 (-4.119629) | 2.739357 / 2.142072 (0.597285) | 1.086330 / 4.805227 (-3.718897) | 0.230983 / 6.500664 (-6.269681) | 0.073771 / 0.075469 (-0.001698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.493441 / 1.841788 (-0.348347) | 16.621611 / 8.074308 (8.547303) | 19.081000 / 10.191392 (8.889608) | 0.215623 / 0.680424 (-0.464801) | 0.025660 / 0.534201 (-0.508541) | 0.446490 / 0.579283 (-0.132793) | 0.560078 / 0.434364 (0.125714) | 0.527231 / 0.540337 (-0.013106) | 0.636551 / 1.386936 (-0.750385) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b899ea45c0a7e724ceb5f43c3a8b9fdb081fa67a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008266 / 0.011353 (-0.003087) | 0.005082 / 0.011008 (-0.005927) | 0.119858 / 0.038508 (0.081350) | 0.032907 / 0.023109 (0.009798) | 0.362816 / 0.275898 (0.086918) | 0.403684 / 0.323480 (0.080204) | 0.006296 / 0.007986 (-0.001690) | 0.006220 / 0.004328 (0.001891) | 0.095609 / 0.004250 (0.091359) | 0.048734 / 0.037052 (0.011682) | 0.385724 / 0.258489 (0.127235) | 0.424315 / 0.293841 (0.130475) | 0.042344 / 0.128546 (-0.086202) | 0.016147 / 0.075646 (-0.059500) | 0.409661 / 0.419271 (-0.009610) | 0.057900 / 0.043533 (0.014367) | 0.387013 / 0.255139 (0.131874) | 0.388901 / 0.283200 (0.105702) | 0.103920 / 0.141683 (-0.037762) | 1.732730 / 1.452155 (0.280575) | 1.863912 / 1.492716 (0.371196) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237406 / 0.018006 (0.219400) | 0.514398 / 0.000490 (0.513909) | 0.005941 / 0.000200 (0.005741) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027524 / 0.037411 (-0.009888) | 0.116498 / 0.014526 (0.101972) | 0.129034 / 0.176557 (-0.047522) | 0.218272 / 0.737135 (-0.518864) | 0.148389 / 0.296338 (-0.147950) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.604555 / 0.215209 (0.389346) | 5.921576 / 2.077655 (3.843921) | 2.410483 / 1.504120 (0.906363) | 2.220286 / 1.541195 (0.679092) | 2.138880 / 1.468490 
(0.670390) | 0.934962 / 4.584777 (-3.649815) | 5.808855 / 3.745712 (2.063143) | 4.881554 / 5.269862 (-0.388308) | 2.536408 / 4.565676 (-2.029268) | 0.124260 / 0.424275 (-0.300015) | 0.017798 / 0.007607 (0.010190) | 0.778991 / 0.226044 (0.552947) | 7.899262 / 2.268929 (5.630333) | 3.208667 / 55.444624 (-52.235957) | 2.631182 / 6.876477 (-4.245295) | 2.676199 / 2.142072 (0.534127) | 1.165516 / 4.805227 (-3.639711) | 0.228751 / 6.500664 (-6.271913) | 0.081378 / 0.075469 (0.005909) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.522156 / 1.841788 (-0.319632) | 17.975381 / 8.074308 (9.901073) | 18.918882 / 10.191392 (8.727490) | 0.223984 / 0.680424 (-0.456440) | 0.025171 / 0.534201 (-0.509030) | 0.467894 / 0.579283 (-0.111389) | 0.559501 / 0.434364 (0.125137) | 0.550392 / 0.540337 (0.010055) | 0.696923 / 1.386936 (-0.690013) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008577 / 0.011353 (-0.002775) | 0.006735 / 0.011008 (-0.004273) | 0.095108 / 0.038508 (0.056600) | 0.035059 / 0.023109 (0.011950) | 0.448576 / 0.275898 (0.172677) | 0.492049 / 0.323480 (0.168569) | 0.006600 / 0.007986 (-0.001385) | 0.004760 / 0.004328 (0.000431) | 0.094670 / 0.004250 (0.090419) | 0.052543 / 0.037052 (0.015491) | 0.458927 / 0.258489 (0.200438) | 0.511522 / 0.293841 (0.217681) | 0.046046 / 0.128546 (-0.082500) | 0.015227 / 0.075646 (-0.060419) | 0.114585 / 0.419271 (-0.304686) | 0.057569 / 0.043533 (0.014036) | 0.441989 / 0.255139 (0.186850) | 0.487001 / 0.283200 (0.203801) | 0.115688 / 0.141683 (-0.025995) | 1.777366 / 1.452155 (0.325211) | 1.906216 / 1.492716 (0.413499) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224880 / 0.018006 (0.206874) | 0.504153 / 0.000490 (0.503664) | 0.001143 / 0.000200 (0.000943) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033618 / 0.037411 (-0.003793) | 0.127396 / 0.014526 (0.112870) | 0.135648 / 0.176557 (-0.040909) | 0.193140 / 0.737135 (-0.543995) | 0.142129 / 0.296338 (-0.154209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.692845 / 0.215209 (0.477636) | 6.804897 / 2.077655 (4.727242) | 2.851041 / 1.504120 (1.346921) | 2.480698 / 1.541195 (0.939504) | 2.488619 / 1.468490 (1.020129) | 0.970439 / 4.584777 (-3.614338) | 5.466059 / 3.745712 (1.720347) | 2.790261 / 5.269862 (-2.479601) | 1.727638 / 4.565676 (-2.838039) | 0.116345 / 0.424275 (-0.307930) | 0.014348 / 0.007607 (0.006740) | 0.845510 / 0.226044 (0.619465) | 8.397198 / 2.268929 (6.128270) | 3.591998 / 55.444624 (-51.852626) | 2.858339 / 6.876477 (-4.018137) | 2.905075 / 2.142072 (0.763003) | 1.193569 / 4.805227 (-3.611658) | 0.243091 / 6.500664 (-6.257573) | 0.082198 / 0.075469 (0.006729) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.610327 / 1.841788 (-0.231461) | 17.191414 / 8.074308 (9.117106) | 20.176518 / 10.191392 (9.985126) | 0.246574 / 0.680424 (-0.433850) | 0.024343 / 0.534201 (-0.509858) | 0.482091 / 0.579283 (-0.097192) | 0.585241 / 0.434364 (0.150877) | 0.558833 / 0.540337 (0.018496) | 0.654811 / 1.386936 (-0.732125) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#81761dbfa738354a9c50309313dfe90bea26d872 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006353 / 0.011353 (-0.004999) | 0.004393 / 0.011008 (-0.006616) | 0.098751 / 0.038508 (0.060242) | 0.029090 / 0.023109 (0.005981) | 0.304169 / 0.275898 (0.028271) | 0.339879 / 0.323480 (0.016399) | 0.005577 / 0.007986 (-0.002408) | 0.003516 / 0.004328 (-0.000813) | 0.077347 / 0.004250 (0.073097) | 0.041935 / 0.037052 (0.004882) | 0.305865 / 0.258489 (0.047376) | 0.357063 / 0.293841 (0.063222) | 0.025245 / 0.128546 (-0.103301) | 0.008753 / 0.075646 (-0.066893) | 0.316734 / 0.419271 (-0.102538) | 0.043464 / 0.043533 (-0.000069) | 0.300944 / 0.255139 (0.045805) | 0.330091 / 0.283200 (0.046891) | 0.088593 / 0.141683 (-0.053090) | 1.588958 / 1.452155 (0.136803) | 1.641376 / 1.492716 (0.148660) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220290 / 0.018006 (0.202284) | 0.445430 / 0.000490 (0.444940) | 0.004800 / 0.000200 (0.004600) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023828 / 0.037411 (-0.013583) | 0.103446 / 0.014526 (0.088920) | 0.110668 / 0.176557 (-0.065889) | 0.169604 / 0.737135 (-0.567531) | 0.114818 / 0.296338 (-0.181520) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416951 / 0.215209 (0.201742) | 4.138917 / 2.077655 (2.061263) | 1.891265 / 1.504120 (0.387145) | 1.687068 / 1.541195 (0.145873) | 1.726618 / 1.468490 
(0.258128) | 0.546977 / 4.584777 (-4.037800) | 3.536153 / 3.745712 (-0.209560) | 1.795206 / 5.269862 (-3.474656) | 1.019845 / 4.565676 (-3.545831) | 0.067040 / 0.424275 (-0.357235) | 0.012038 / 0.007607 (0.004431) | 0.520583 / 0.226044 (0.294539) | 5.211520 / 2.268929 (2.942591) | 2.336136 / 55.444624 (-53.108488) | 2.011262 / 6.876477 (-4.865215) | 2.137311 / 2.142072 (-0.004762) | 0.654779 / 4.805227 (-4.150448) | 0.134555 / 6.500664 (-6.366109) | 0.066427 / 0.075469 (-0.009042) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.240187 / 1.841788 (-0.601600) | 14.104063 / 8.074308 (6.029755) | 13.369572 / 10.191392 (3.178180) | 0.147891 / 0.680424 (-0.532533) | 0.016993 / 0.534201 (-0.517208) | 0.364863 / 0.579283 (-0.214420) | 0.398684 / 0.434364 (-0.035680) | 0.430524 / 0.540337 (-0.109813) | 0.520920 / 1.386936 (-0.866016) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006845 / 0.011353 (-0.004508) | 0.004420 / 0.011008 (-0.006588) | 0.078334 / 0.038508 (0.039825) | 0.030566 / 0.023109 (0.007457) | 0.409568 / 0.275898 (0.133670) | 0.458389 / 0.323480 (0.134910) | 0.005739 / 0.007986 (-0.002247) | 0.005222 / 0.004328 (0.000893) | 0.076066 / 0.004250 (0.071816) | 0.049239 / 0.037052 (0.012187) | 0.409841 / 0.258489 (0.151352) | 0.472250 / 0.293841 (0.178409) | 0.025463 / 0.128546 (-0.103084) | 0.008738 / 0.075646 (-0.066909) | 0.083114 / 0.419271 (-0.336157) | 0.041233 / 0.043533 (-0.002300) | 0.407158 / 0.255139 (0.152019) | 0.438724 / 0.283200 (0.155524) | 0.097974 / 0.141683 (-0.043709) | 1.536514 / 1.452155 (0.084360) | 1.636704 / 1.492716 (0.143987) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240589 / 0.018006 (0.222583) | 0.440328 / 0.000490 (0.439838) | 0.000937 / 0.000200 (0.000737) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027559 / 0.037411 (-0.009853) | 0.109930 / 0.014526 (0.095405) | 0.113366 / 0.176557 (-0.063190) | 0.166849 / 0.737135 (-0.570286) | 0.118872 / 0.296338 (-0.177467) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474120 / 0.215209 (0.258911) | 4.739222 / 2.077655 (2.661567) | 2.484386 / 1.504120 (0.980266) | 2.281937 / 1.541195 (0.740742) | 2.362974 / 1.468490 (0.894484) | 0.549897 / 4.584777 (-4.034879) | 3.425540 / 3.745712 (-0.320172) | 1.765810 / 5.269862 (-3.504051) | 1.008277 / 4.565676 (-3.557400) | 0.067288 / 0.424275 (-0.356987) | 0.011954 / 0.007607 (0.004347) | 0.577216 / 0.226044 (0.351172) | 5.790659 / 2.268929 (3.521731) | 2.946732 / 55.444624 (-52.497892) | 2.608835 / 6.876477 (-4.267641) | 2.642987 / 2.142072 (0.500915) | 0.652798 / 4.805227 (-4.152429) | 0.135909 / 6.500664 (-6.364755) | 0.068480 / 0.075469 (-0.006989) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.353550 / 1.841788 (-0.488237) | 14.732084 / 8.074308 (6.657775) | 14.439174 / 10.191392 (4.247782) | 0.131445 / 0.680424 (-0.548979) | 0.016608 / 0.534201 (-0.517593) | 0.368103 / 0.579283 (-0.211180) | 0.393918 / 0.434364 (-0.040446) | 0.423562 / 0.540337 (-0.116776) | 0.515041 / 1.386936 (-0.871895) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8907bdb23f78545303eb3bb0561e33ec6787f96c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006414 / 0.011353 (-0.004938) | 0.004704 / 0.011008 (-0.006305) | 0.096012 / 0.038508 (0.057504) | 0.032910 / 0.023109 (0.009800) | 0.290676 / 0.275898 (0.014778) | 0.319646 / 0.323480 (-0.003834) | 0.005806 / 0.007986 (-0.002180) | 0.004008 / 0.004328 (-0.000320) | 0.073982 / 0.004250 (0.069731) | 0.048985 / 0.037052 (0.011933) | 0.299498 / 0.258489 (0.041009) | 0.338118 / 0.293841 (0.044277) | 0.027680 / 0.128546 (-0.100866) | 0.009051 / 0.075646 (-0.066595) | 0.325051 / 0.419271 (-0.094221) | 0.051011 / 0.043533 (0.007478) | 0.292249 / 0.255139 (0.037110) | 0.315733 / 0.283200 (0.032533) | 0.100327 / 0.141683 (-0.041356) | 1.481862 / 1.452155 (0.029707) | 1.544884 / 1.492716 (0.052168) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289610 / 0.018006 (0.271603) | 0.510164 / 0.000490 (0.509675) | 0.004726 / 0.000200 (0.004526) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027617 / 0.037411 (-0.009794) | 0.107593 / 0.014526 (0.093068) | 0.122783 / 0.176557 (-0.053774) | 0.181086 / 0.737135 (-0.556049) | 0.128030 / 0.296338 (-0.168308) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403571 / 0.215209 (0.188362) | 4.002881 / 2.077655 (1.925227) | 1.805550 / 1.504120 (0.301430) | 1.619165 / 1.541195 (0.077971) | 1.606536 / 1.468490 
(0.138046) | 0.518917 / 4.584777 (-4.065860) | 3.731498 / 3.745712 (-0.014214) | 3.206645 / 5.269862 (-2.063217) | 1.641615 / 4.565676 (-2.924062) | 0.065100 / 0.424275 (-0.359175) | 0.011396 / 0.007607 (0.003789) | 0.500597 / 0.226044 (0.274553) | 4.992293 / 2.268929 (2.723364) | 2.278726 / 55.444624 (-53.165898) | 1.960823 / 6.876477 (-4.915654) | 2.038684 / 2.142072 (-0.103388) | 0.640910 / 4.805227 (-4.164318) | 0.140597 / 6.500664 (-6.360067) | 0.062114 / 0.075469 (-0.013355) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.167366 / 1.841788 (-0.674422) | 14.748193 / 8.074308 (6.673884) | 13.592381 / 10.191392 (3.400989) | 0.165341 / 0.680424 (-0.515083) | 0.017360 / 0.534201 (-0.516841) | 0.393448 / 0.579283 (-0.185836) | 0.422951 / 0.434364 (-0.011413) | 0.460491 / 0.540337 (-0.079847) | 0.558238 / 1.386936 (-0.828698) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006373 / 0.011353 (-0.004980) | 0.004587 / 0.011008 (-0.006421) | 0.076421 / 0.038508 (0.037913) | 0.032162 / 0.023109 (0.009052) | 0.385531 / 0.275898 (0.109633) | 0.410424 / 0.323480 (0.086944) | 0.006154 / 0.007986 (-0.001832) | 0.005533 / 0.004328 (0.001205) | 0.077035 / 0.004250 (0.072784) | 0.051571 / 0.037052 (0.014519) | 0.393283 / 0.258489 (0.134794) | 0.433756 / 0.293841 (0.139915) | 0.028381 / 0.128546 (-0.100165) | 0.009034 / 0.075646 (-0.066613) | 0.083836 / 0.419271 (-0.335435) | 0.048246 / 0.043533 (0.004713) | 0.385437 / 0.255139 (0.130298) | 0.394187 / 0.283200 (0.110987) | 0.105453 / 0.141683 (-0.036230) | 1.459173 / 1.452155 (0.007018) | 1.575083 / 1.492716 (0.082367) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.320324 / 0.018006 (0.302318) | 0.502945 / 0.000490 (0.502455) | 0.004470 / 0.000200 (0.004270) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028118 / 0.037411 (-0.009293) | 0.111430 / 0.014526 (0.096904) | 0.123141 / 0.176557 (-0.053415) | 0.175215 / 0.737135 (-0.561920) | 0.126429 / 0.296338 (-0.169909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433407 / 0.215209 (0.218198) | 4.329945 / 2.077655 (2.252291) | 2.096822 / 1.504120 (0.592702) | 1.908173 / 1.541195 (0.366978) | 1.967167 / 1.468490 (0.498676) | 0.529207 / 4.584777 (-4.055570) | 3.798424 / 3.745712 (0.052712) | 3.050716 / 5.269862 (-2.219146) | 1.445009 / 4.565676 (-3.120668) | 0.066467 / 0.424275 (-0.357809) | 0.011698 / 0.007607 (0.004090) | 0.528660 / 0.226044 (0.302615) | 5.282069 / 2.268929 (3.013141) | 2.535501 / 55.444624 (-52.909124) | 2.202856 / 6.876477 (-4.673621) | 2.293225 / 2.142072 (0.151153) | 0.640216 / 4.805227 (-4.165011) | 0.140884 / 6.500664 (-6.359780) | 0.064231 / 0.075469 (-0.011238) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292129 / 1.841788 (-0.549659) | 15.371370 / 8.074308 (7.297062) | 15.114854 / 10.191392 (4.923462) | 0.176870 / 0.680424 (-0.503554) | 0.017380 / 0.534201 (-0.516821) | 0.398156 / 0.579283 (-0.181127) | 0.442277 / 0.434364 (0.007913) | 0.467093 / 0.540337 (-0.073244) | 0.561599 / 1.386936 (-0.825337) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#323747a5ff7d9b204ea3c4989d658af7102f7bbd \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009360 / 0.011353 (-0.001993) | 0.006297 / 0.011008 (-0.004712) | 0.133131 / 0.038508 (0.094623) | 0.040261 / 0.023109 (0.017152) | 0.419101 / 0.275898 (0.143203) | 0.453087 / 0.323480 (0.129607) | 0.007718 / 0.007986 (-0.000268) | 0.005698 / 0.004328 (0.001369) | 0.102261 / 0.004250 (0.098010) | 0.055147 / 0.037052 (0.018095) | 0.428355 / 0.258489 (0.169866) | 0.505241 / 0.293841 (0.211400) | 0.046745 / 0.128546 (-0.081802) | 0.015559 / 0.075646 (-0.060088) | 0.441775 / 0.419271 (0.022503) | 0.070165 / 0.043533 (0.026632) | 0.421957 / 0.255139 (0.166818) | 0.445156 / 0.283200 (0.161957) | 0.126321 / 0.141683 (-0.015362) | 1.900486 / 1.452155 (0.448331) | 2.088630 / 1.492716 (0.595913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260244 / 0.018006 (0.242237) | 0.606317 / 0.000490 (0.605828) | 0.006827 / 0.000200 (0.006627) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031958 / 0.037411 (-0.005453) | 0.139362 / 0.014526 (0.124836) | 0.148748 / 0.176557 (-0.027809) | 0.226269 / 0.737135 (-0.510866) | 0.161145 / 0.296338 (-0.135194) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.666287 / 0.215209 (0.451078) | 6.588707 / 2.077655 (4.511053) | 2.736155 / 1.504120 (1.232035) | 2.329601 / 1.541195 (0.788406) | 2.324991 / 1.468490 
(0.856501) | 0.943608 / 4.584777 (-3.641169) | 6.051653 / 3.745712 (2.305941) | 2.929150 / 5.269862 (-2.340711) | 1.804461 / 4.565676 (-2.761216) | 0.113302 / 0.424275 (-0.310973) | 0.015245 / 0.007607 (0.007638) | 0.827029 / 0.226044 (0.600984) | 8.211536 / 2.268929 (5.942608) | 3.445231 / 55.444624 (-51.999393) | 2.756728 / 6.876477 (-4.119748) | 2.904039 / 2.142072 (0.761966) | 1.162339 / 4.805227 (-3.642888) | 0.231168 / 6.500664 (-6.269496) | 0.089038 / 0.075469 (0.013569) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.640619 / 1.841788 (-0.201169) | 20.034157 / 8.074308 (11.959849) | 22.346006 / 10.191392 (12.154614) | 0.255300 / 0.680424 (-0.425124) | 0.031452 / 0.534201 (-0.502749) | 0.563290 / 0.579283 (-0.015993) | 0.653556 / 0.434364 (0.219192) | 0.687663 / 0.540337 (0.147326) | 0.816432 / 1.386936 (-0.570504) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010340 / 0.011353 (-0.001013) | 0.006245 / 0.011008 (-0.004764) | 0.128012 / 0.038508 (0.089504) | 0.041799 / 0.023109 (0.018690) | 0.533340 / 0.275898 (0.257442) | 0.592243 / 0.323480 (0.268763) | 0.009256 / 0.007986 (0.001271) | 0.005310 / 0.004328 (0.000982) | 0.110973 / 0.004250 (0.106722) | 0.065465 / 0.037052 (0.028412) | 0.533845 / 0.258489 (0.275356) | 0.602190 / 0.293841 (0.308349) | 0.060245 / 0.128546 (-0.068301) | 0.016954 / 0.075646 (-0.058693) | 0.119727 / 0.419271 (-0.299545) | 0.064628 / 0.043533 (0.021095) | 0.558229 / 0.255139 (0.303090) | 0.563696 / 0.283200 (0.280496) | 0.137225 / 0.141683 (-0.004458) | 2.038605 / 1.452155 (0.586451) | 2.158655 / 1.492716 (0.665939) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.327067 / 0.018006 (0.309061) | 0.628812 / 0.000490 (0.628323) | 0.010259 / 0.000200 (0.010059) | 0.000123 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037023 / 0.037411 (-0.000388) | 0.142462 / 0.014526 (0.127936) | 0.158165 / 0.176557 (-0.018392) | 0.220808 / 0.737135 (-0.516328) | 0.163608 / 0.296338 (-0.132731) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.776119 / 0.215209 (0.560910) | 7.813044 / 2.077655 (5.735389) | 3.610901 / 1.504120 (2.106781) | 3.195144 / 1.541195 (1.653950) | 3.218245 / 1.468490 (1.749755) | 1.092732 / 4.584777 (-3.492045) | 5.965526 / 3.745712 (2.219813) | 2.914683 / 5.269862 (-2.355179) | 1.848397 / 4.565676 (-2.717280) | 0.114436 / 0.424275 (-0.309839) | 0.014794 / 0.007607 (0.007187) | 0.887141 / 0.226044 (0.661096) | 9.009743 / 2.268929 (6.740815) | 4.180143 / 55.444624 (-51.264481) | 3.452194 / 6.876477 (-3.424283) | 3.493520 / 2.142072 (1.351448) | 1.233327 / 4.805227 (-3.571900) | 0.235390 / 6.500664 (-6.265274) | 0.099544 / 0.075469 (0.024075) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.853482 / 1.841788 (0.011694) | 20.071177 / 8.074308 (11.996869) | 24.507618 / 10.191392 (14.316226) | 0.260164 / 0.680424 (-0.420260) | 0.028433 / 0.534201 (-0.505768) | 0.549181 / 0.579283 (-0.030102) | 0.650069 / 0.434364 (0.215705) | 0.629541 / 0.540337 (0.089203) | 0.808932 / 1.386936 (-0.578004) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f39ba76af62c8037de3f464e87cbb095f8729062 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009537 / 0.011353 (-0.001816) | 0.006036 / 0.011008 (-0.004972) | 0.141210 / 0.038508 (0.102701) | 0.037493 / 0.023109 (0.014384) | 0.404285 / 0.275898 (0.128386) | 0.458906 / 0.323480 (0.135427) | 0.007224 / 0.007986 (-0.000761) | 0.005148 / 0.004328 (0.000819) | 0.103889 / 0.004250 (0.099639) | 0.048877 / 0.037052 (0.011824) | 0.413220 / 0.258489 (0.154731) | 0.458153 / 0.293841 (0.164312) | 0.046008 / 0.128546 (-0.082538) | 0.015116 / 0.075646 (-0.060531) | 0.439836 / 0.419271 (0.020565) | 0.067527 / 0.043533 (0.023994) | 0.435794 / 0.255139 (0.180656) | 0.451687 / 0.283200 (0.168487) | 0.121274 / 0.141683 (-0.020409) | 1.950199 / 1.452155 (0.498044) | 2.035589 / 1.492716 (0.542873) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247056 / 0.018006 (0.229050) | 0.550348 / 0.000490 (0.549858) | 0.005504 / 0.000200 (0.005305) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032171 / 0.037411 (-0.005240) | 0.135983 / 0.014526 (0.121457) | 0.149587 / 0.176557 (-0.026970) | 0.233414 / 0.737135 (-0.503722) | 0.152598 / 0.296338 (-0.143740) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634813 / 0.215209 (0.419604) | 6.453619 / 2.077655 (4.375964) | 2.582070 / 1.504120 (1.077951) | 2.214292 / 1.541195 (0.673097) | 2.220012 / 1.468490 
(0.751522) | 0.987374 / 4.584777 (-3.597403) | 5.543760 / 3.745712 (1.798047) | 2.808865 / 5.269862 (-2.460996) | 1.714713 / 4.565676 (-2.850963) | 0.111016 / 0.424275 (-0.313259) | 0.014688 / 0.007607 (0.007081) | 0.842542 / 0.226044 (0.616498) | 8.414336 / 2.268929 (6.145407) | 3.501021 / 55.444624 (-51.943604) | 2.665335 / 6.876477 (-4.211142) | 2.843706 / 2.142072 (0.701633) | 1.196398 / 4.805227 (-3.608829) | 0.245508 / 6.500664 (-6.255156) | 0.086970 / 0.075469 (0.011501) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.590244 / 1.841788 (-0.251544) | 18.694141 / 8.074308 (10.619833) | 21.752463 / 10.191392 (11.561071) | 0.264511 / 0.680424 (-0.415913) | 0.028713 / 0.534201 (-0.505488) | 0.531102 / 0.579283 (-0.048181) | 0.626302 / 0.434364 (0.191938) | 0.624541 / 0.540337 (0.084203) | 0.745745 / 1.386936 (-0.641191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010097 / 0.011353 (-0.001256) | 0.005558 / 0.011008 (-0.005451) | 0.111326 / 0.038508 (0.072818) | 0.036465 / 0.023109 (0.013356) | 0.472116 / 0.275898 (0.196218) | 0.524479 / 0.323480 (0.200999) | 0.007466 / 0.007986 (-0.000520) | 0.005440 / 0.004328 (0.001112) | 0.103482 / 0.004250 (0.099231) | 0.053217 / 0.037052 (0.016165) | 0.476685 / 0.258489 (0.218196) | 0.554011 / 0.293841 (0.260170) | 0.047157 / 0.128546 (-0.081390) | 0.015895 / 0.075646 (-0.059751) | 0.115997 / 0.419271 (-0.303274) | 0.062290 / 0.043533 (0.018758) | 0.474166 / 0.255139 (0.219027) | 0.498854 / 0.283200 (0.215655) | 0.121798 / 0.141683 (-0.019885) | 1.956583 / 1.452155 (0.504428) | 2.069620 / 1.492716 (0.576904) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278637 / 0.018006 (0.260631) | 0.555295 / 0.000490 (0.554805) | 0.007401 / 0.000200 (0.007201) | 0.000121 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033576 / 0.037411 (-0.003835) | 0.136479 / 0.014526 (0.121954) | 0.153960 / 0.176557 (-0.022597) | 0.203422 / 0.737135 (-0.533713) | 0.154159 / 0.296338 (-0.142180) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.672561 / 0.215209 (0.457352) | 6.956675 / 2.077655 (4.879020) | 3.063636 / 1.504120 (1.559516) | 2.668256 / 1.541195 (1.127061) | 2.794793 / 1.468490 (1.326303) | 0.964242 / 4.584777 (-3.620535) | 5.785992 / 3.745712 (2.040279) | 2.850079 / 5.269862 (-2.419782) | 1.782491 / 4.565676 (-2.783186) | 0.114859 / 0.424275 (-0.309416) | 0.015229 / 0.007607 (0.007622) | 0.858406 / 0.226044 (0.632362) | 8.646296 / 2.268929 (6.377367) | 3.842133 / 55.444624 (-51.602492) | 3.180017 / 6.876477 (-3.696460) | 3.241315 / 2.142072 (1.099243) | 1.248988 / 4.805227 (-3.556239) | 0.235075 / 6.500664 (-6.265589) | 0.087192 / 0.075469 (0.011723) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.783877 / 1.841788 (-0.057910) | 19.477223 / 8.074308 (11.402914) | 22.926734 / 10.191392 (12.735342) | 0.246970 / 0.680424 (-0.433454) | 0.026386 / 0.534201 (-0.507815) | 0.517599 / 0.579283 (-0.061684) | 0.626504 / 0.434364 (0.192140) | 0.606943 / 0.540337 (0.066606) | 0.739115 / 1.386936 (-0.647821) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e8f051a41454f8625091338e6b53119a5eb9b2a0 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008085 / 0.011353 (-0.003268) | 0.005568 / 0.011008 (-0.005440) | 0.119674 / 0.038508 (0.081166) | 0.040452 / 0.023109 (0.017343) | 0.360288 / 0.275898 (0.084390) | 0.409448 / 0.323480 (0.085968) | 0.007281 / 0.007986 (-0.000705) | 0.004931 / 0.004328 (0.000602) | 0.089956 / 0.004250 (0.085706) | 0.056088 / 0.037052 (0.019036) | 0.384708 / 0.258489 (0.126219) | 0.423506 / 0.293841 (0.129665) | 0.033280 / 0.128546 (-0.095266) | 0.010696 / 0.075646 (-0.064951) | 0.394851 / 0.419271 (-0.024421) | 0.058412 / 0.043533 (0.014879) | 0.361514 / 0.255139 (0.106375) | 0.399121 / 0.283200 (0.115921) | 0.117927 / 0.141683 (-0.023756) | 1.791499 / 1.452155 (0.339344) | 1.889000 / 1.492716 (0.396284) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253324 / 0.018006 (0.235318) | 0.536151 / 0.000490 (0.535661) | 0.010450 / 0.000200 (0.010250) | 0.000171 / 0.000054 (0.000117) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034646 / 0.037411 (-0.002765) | 0.145999 / 0.014526 (0.131473) | 0.153793 / 0.176557 (-0.022763) | 0.232871 / 0.737135 (-0.504265) | 0.161151 / 0.296338 (-0.135188) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471407 / 0.215209 (0.256197) | 4.715702 / 2.077655 (2.638047) | 2.228939 / 1.504120 (0.724819) | 2.008511 / 1.541195 (0.467317) | 2.135182 / 1.468490 
(0.666692) | 0.620720 / 4.584777 (-3.964057) | 4.960731 / 3.745712 (1.215019) | 2.222469 / 5.269862 (-3.047393) | 1.284467 / 4.565676 (-3.281209) | 0.077931 / 0.424275 (-0.346344) | 0.013935 / 0.007607 (0.006328) | 0.593164 / 0.226044 (0.367120) | 5.940829 / 2.268929 (3.671900) | 2.664277 / 55.444624 (-52.780347) | 2.290655 / 6.876477 (-4.585822) | 2.496664 / 2.142072 (0.354592) | 0.759166 / 4.805227 (-4.046061) | 0.168011 / 6.500664 (-6.332653) | 0.077993 / 0.075469 (0.002524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.440663 / 1.841788 (-0.401125) | 19.105377 / 8.074308 (11.031069) | 16.068118 / 10.191392 (5.876726) | 0.193024 / 0.680424 (-0.487400) | 0.022348 / 0.534201 (-0.511853) | 0.517454 / 0.579283 (-0.061829) | 0.528072 / 0.434364 (0.093708) | 0.565293 / 0.540337 (0.024955) | 0.676578 / 1.386936 (-0.710358) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008089 / 0.011353 (-0.003264) | 0.005287 / 0.011008 (-0.005721) | 0.087964 / 0.038508 (0.049456) | 0.041548 / 0.023109 (0.018439) | 0.437733 / 0.275898 (0.161835) | 0.487878 / 0.323480 (0.164398) | 0.006898 / 0.007986 (-0.001087) | 0.004649 / 0.004328 (0.000320) | 0.086982 / 0.004250 (0.082732) | 0.056874 / 0.037052 (0.019822) | 0.437397 / 0.258489 (0.178908) | 0.490636 / 0.293841 (0.196795) | 0.033550 / 0.128546 (-0.094997) | 0.010430 / 0.075646 (-0.065216) | 0.096076 / 0.419271 (-0.323196) | 0.054028 / 0.043533 (0.010495) | 0.450262 / 0.255139 (0.195123) | 0.465566 / 0.283200 (0.182366) | 0.119987 / 0.141683 (-0.021696) | 1.764428 / 1.452155 (0.312273) | 1.841547 / 1.492716 (0.348831) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271427 / 0.018006 (0.253420) | 0.506386 / 0.000490 (0.505896) | 0.001213 / 0.000200 (0.001013) | 0.000125 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036159 / 0.037411 (-0.001253) | 0.140578 / 0.014526 (0.126053) | 0.147517 / 0.176557 (-0.029040) | 0.206215 / 0.737135 (-0.530921) | 0.152560 / 0.296338 (-0.143779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522833 / 0.215209 (0.307624) | 5.215732 / 2.077655 (3.138077) | 2.553406 / 1.504120 (1.049286) | 2.344815 / 1.541195 (0.803620) | 2.422377 / 1.468490 (0.953886) | 0.631197 / 4.584777 (-3.953580) | 4.906216 / 3.745712 (1.160504) | 2.212923 / 5.269862 (-3.056938) | 1.352937 / 4.565676 (-3.212740) | 0.079141 / 0.424275 (-0.345135) | 0.013691 / 0.007607 (0.006084) | 0.634939 / 0.226044 (0.408895) | 6.578770 / 2.268929 (4.309842) | 3.080339 / 55.444624 (-52.364286) | 2.710243 / 6.876477 (-4.166234) | 2.740476 / 2.142072 (0.598404) | 0.783610 / 4.805227 (-4.021617) | 0.171589 / 6.500664 (-6.329075) | 0.077311 / 0.075469 (0.001842) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584847 / 1.841788 (-0.256941) | 19.510132 / 8.074308 (11.435824) | 18.074572 / 10.191392 (7.883180) | 0.173494 / 0.680424 (-0.506930) | 0.021149 / 0.534201 (-0.513052) | 0.469026 / 0.579283 (-0.110258) | 0.518463 / 0.434364 (0.084099) | 0.550363 / 0.540337 (0.010026) | 0.667087 / 1.386936 (-0.719849) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5dfcd876c25cc0ffbd6b5b518b017419390a8ada \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007144 / 0.011353 (-0.004209) | 0.004783 / 0.011008 (-0.006225) | 0.103991 / 0.038508 (0.065483) | 0.039098 / 0.023109 (0.015989) | 0.319851 / 0.275898 (0.043952) | 0.356104 / 0.323480 (0.032625) | 0.007077 / 0.007986 (-0.000909) | 0.004188 / 0.004328 (-0.000141) | 0.078360 / 0.004250 (0.074109) | 0.050951 / 0.037052 (0.013899) | 0.321791 / 0.258489 (0.063302) | 0.356123 / 0.293841 (0.062283) | 0.028967 / 0.128546 (-0.099579) | 0.009091 / 0.075646 (-0.066555) | 0.355265 / 0.419271 (-0.064007) | 0.052521 / 0.043533 (0.008988) | 0.317333 / 0.255139 (0.062194) | 0.340747 / 0.283200 (0.057547) | 0.104354 / 0.141683 (-0.037329) | 1.522791 / 1.452155 (0.070636) | 1.579835 / 1.492716 (0.087118) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260539 / 0.018006 (0.242532) | 0.454230 / 0.000490 (0.453740) | 0.036588 / 0.000200 (0.036388) | 0.000289 / 0.000054 (0.000235) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028375 / 0.037411 (-0.009036) | 0.118939 / 0.014526 (0.104413) | 0.126553 / 0.176557 (-0.050004) | 0.184596 / 0.737135 (-0.552539) | 0.130583 / 0.296338 (-0.165755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417353 / 0.215209 (0.202144) | 4.171595 / 2.077655 (2.093940) | 1.855096 / 1.504120 (0.350976) | 1.673941 / 1.541195 (0.132747) | 1.761370 / 1.468490 
(0.292880) | 0.544081 / 4.584777 (-4.040696) | 3.851877 / 3.745712 (0.106165) | 1.896661 / 5.269862 (-3.373200) | 1.093303 / 4.565676 (-3.472373) | 0.067967 / 0.424275 (-0.356308) | 0.012313 / 0.007607 (0.004706) | 0.532316 / 0.226044 (0.306272) | 5.336016 / 2.268929 (3.067087) | 2.344780 / 55.444624 (-53.099845) | 1.993909 / 6.876477 (-4.882568) | 2.167324 / 2.142072 (0.025251) | 0.670334 / 4.805227 (-4.134893) | 0.147705 / 6.500664 (-6.352959) | 0.067634 / 0.075469 (-0.007835) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251005 / 1.841788 (-0.590783) | 15.405531 / 8.074308 (7.331223) | 14.197019 / 10.191392 (4.005627) | 0.144230 / 0.680424 (-0.536193) | 0.018352 / 0.534201 (-0.515849) | 0.427536 / 0.579283 (-0.151748) | 0.433135 / 0.434364 (-0.001229) | 0.502624 / 0.540337 (-0.037713) | 0.612312 / 1.386936 (-0.774624) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007011 / 0.011353 (-0.004342) | 0.004857 / 0.011008 (-0.006151) | 0.077797 / 0.038508 (0.039289) | 0.035411 / 0.023109 (0.012302) | 0.368234 / 0.275898 (0.092336) | 0.408359 / 0.323480 (0.084879) | 0.005883 / 0.007986 (-0.002102) | 0.004311 / 0.004328 (-0.000017) | 0.077216 / 0.004250 (0.072966) | 0.052062 / 0.037052 (0.015010) | 0.368502 / 0.258489 (0.110013) | 0.428681 / 0.293841 (0.134840) | 0.028889 / 0.128546 (-0.099657) | 0.009146 / 0.075646 (-0.066501) | 0.085515 / 0.419271 (-0.333756) | 0.050216 / 0.043533 (0.006683) | 0.359562 / 0.255139 (0.104423) | 0.378335 / 0.283200 (0.095135) | 0.106351 / 0.141683 (-0.035332) | 1.538943 / 1.452155 (0.086788) | 1.663572 / 1.492716 (0.170855) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216917 / 0.018006 (0.198911) | 0.444130 / 0.000490 (0.443641) | 0.002640 / 0.000200 (0.002440) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032509 / 0.037411 (-0.004902) | 0.123955 / 0.014526 (0.109430) | 0.133236 / 0.176557 (-0.043321) | 0.187408 / 0.737135 (-0.549727) | 0.136696 / 0.296338 (-0.159643) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443714 / 0.215209 (0.228505) | 4.416973 / 2.077655 (2.339318) | 2.145279 / 1.504120 (0.641159) | 1.946669 / 1.541195 (0.405474) | 2.044105 / 1.468490 (0.575614) | 0.534463 / 4.584777 (-4.050314) | 3.824926 / 3.745712 (0.079214) | 3.151796 / 5.269862 (-2.118066) | 1.497513 / 4.565676 (-3.068164) | 0.066799 / 0.424275 (-0.357476) | 0.012408 / 0.007607 (0.004801) | 0.544182 / 0.226044 (0.318138) | 5.419403 / 2.268929 (3.150474) | 2.605191 / 55.444624 (-52.839433) | 2.285354 / 6.876477 (-4.591123) | 2.359520 / 2.142072 (0.217448) | 0.655489 / 4.805227 (-4.149738) | 0.143496 / 6.500664 (-6.357168) | 0.066782 / 0.075469 (-0.008687) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329370 / 1.841788 (-0.512418) | 16.058019 / 8.074308 (7.983711) | 15.119769 / 10.191392 (4.928377) | 0.147967 / 0.680424 (-0.532457) | 0.018360 / 0.534201 (-0.515841) | 0.436847 / 0.579283 (-0.142436) | 0.435136 / 0.434364 (0.000773) | 0.507176 / 0.540337 (-0.033161) | 0.610627 / 1.386936 (-0.776309) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b4cc3ee6d8945052283076854eb77575d52b7432 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006425 / 0.011353 (-0.004927) | 0.003710 / 0.011008 (-0.007298) | 0.102072 / 0.038508 (0.063564) | 0.033974 / 0.023109 (0.010865) | 0.273146 / 0.275898 (-0.002752) | 0.313254 / 0.323480 (-0.010226) | 0.004889 / 0.007986 (-0.003096) | 0.004803 / 0.004328 (0.000475) | 0.067359 / 0.004250 (0.063109) | 0.040281 / 0.037052 (0.003228) | 0.302106 / 0.258489 (0.043617) | 0.318039 / 0.293841 (0.024198) | 0.028839 / 0.128546 (-0.099707) | 0.008726 / 0.075646 (-0.066921) | 0.322532 / 0.419271 (-0.096739) | 0.048845 / 0.043533 (0.005312) | 0.299836 / 0.255139 (0.044697) | 0.300983 / 0.283200 (0.017784) | 0.103384 / 0.141683 (-0.038299) | 1.417245 / 1.452155 (-0.034910) | 1.538819 / 1.492716 (0.046102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219798 / 0.018006 (0.201792) | 0.442297 / 0.000490 (0.441807) | 0.013792 / 0.000200 (0.013592) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024996 / 0.037411 (-0.012416) | 0.098558 / 0.014526 (0.084032) | 0.116423 / 0.176557 (-0.060133) | 0.163481 / 0.737135 (-0.573654) | 0.115031 / 0.296338 (-0.181308) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392411 / 0.215209 (0.177202) | 4.025992 / 2.077655 (1.948337) | 1.850809 / 1.504120 (0.346690) | 1.668330 / 1.541195 (0.127136) | 1.627041 / 1.468490 
(0.158551) | 0.510721 / 4.584777 (-4.074055) | 3.841318 / 3.745712 (0.095606) | 3.416979 / 5.269862 (-1.852883) | 1.640796 / 4.565676 (-2.924880) | 0.061968 / 0.424275 (-0.362307) | 0.010281 / 0.007607 (0.002674) | 0.485592 / 0.226044 (0.259548) | 4.872205 / 2.268929 (2.603277) | 2.146753 / 55.444624 (-53.297871) | 1.832087 / 6.876477 (-5.044390) | 1.920928 / 2.142072 (-0.221144) | 0.606363 / 4.805227 (-4.198864) | 0.134351 / 6.500664 (-6.366313) | 0.057583 / 0.075469 (-0.017886) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.153048 / 1.841788 (-0.688739) | 14.165743 / 8.074308 (6.091435) | 12.237798 / 10.191392 (2.046406) | 0.159815 / 0.680424 (-0.520608) | 0.018226 / 0.534201 (-0.515975) | 0.372390 / 0.579283 (-0.206893) | 0.396552 / 0.434364 (-0.037811) | 0.439445 / 0.540337 (-0.100892) | 0.521924 / 1.386936 (-0.865012) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006162 / 0.011353 (-0.005191) | 0.004006 / 0.011008 (-0.007002) | 0.067226 / 0.038508 (0.028718) | 0.030285 / 0.023109 (0.007176) | 0.361220 / 0.275898 (0.085322) | 0.386783 / 0.323480 (0.063303) | 0.005202 / 0.007986 (-0.002784) | 0.003453 / 0.004328 (-0.000876) | 0.068299 / 0.004250 (0.064048) | 0.041433 / 0.037052 (0.004381) | 0.360222 / 0.258489 (0.101733) | 0.399327 / 0.293841 (0.105486) | 0.026066 / 0.128546 (-0.102480) | 0.008025 / 0.075646 (-0.067621) | 0.079588 / 0.419271 (-0.339683) | 0.042616 / 0.043533 (-0.000917) | 0.347639 / 0.255139 (0.092500) | 0.386092 / 0.283200 (0.102893) | 0.100869 / 0.141683 (-0.040814) | 1.386901 / 1.452155 (-0.065254) | 1.471523 / 1.492716 (-0.021193) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217020 / 0.018006 (0.199014) | 0.431033 / 0.000490 (0.430543) | 0.002902 / 0.000200 (0.002702) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027396 / 0.037411 (-0.010015) | 0.114154 / 0.014526 (0.099629) | 0.117918 / 0.176557 (-0.058638) | 0.173342 / 0.737135 (-0.563794) | 0.125812 / 0.296338 (-0.170526) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424843 / 0.215209 (0.209634) | 4.324828 / 2.077655 (2.247174) | 2.188263 / 1.504120 (0.684143) | 1.912288 / 1.541195 (0.371094) | 2.011621 / 1.468490 (0.543131) | 0.560944 / 4.584777 (-4.023833) | 3.975047 / 3.745712 (0.229335) | 3.130242 / 5.269862 (-2.139619) | 1.667902 / 4.565676 (-2.897775) | 0.062245 / 0.424275 (-0.362030) | 0.011300 / 0.007607 (0.003692) | 0.498571 / 0.226044 (0.272527) | 5.024887 / 2.268929 (2.755958) | 2.482967 / 55.444624 (-52.961657) | 2.216125 / 6.876477 (-4.660352) | 2.175856 / 2.142072 (0.033783) | 0.615207 / 4.805227 (-4.190021) | 0.133808 / 6.500664 (-6.366856) | 0.058681 / 0.075469 (-0.016788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.370150 / 1.841788 (-0.471637) | 14.580907 / 8.074308 (6.506599) | 14.209955 / 10.191392 (4.018563) | 0.139738 / 0.680424 (-0.540686) | 0.018722 / 0.534201 (-0.515479) | 0.375755 / 0.579283 (-0.203528) | 0.428335 / 0.434364 (-0.006029) | 0.438957 / 0.540337 (-0.101380) | 0.541130 / 1.386936 (-0.845806) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c14806a42a20f44a60f3663642bae1de199ab1ec \"CML watermark\")\n"
] | 2023-05-15T15:28:34 | 2023-06-08T16:40:18 | 2023-06-08T16:32:51 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5863",
"html_url": "https://github.com/huggingface/datasets/pull/5863",
"diff_url": "https://github.com/huggingface/datasets/pull/5863.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5863.patch",
"merged_at": "2023-06-08T16:32:50"
} | This PR tries out a new approach to generating the index tensor in `to_tf_dataset`, which should reduce memory usage for very large datasets. I'll need to do some testing before merging it!
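For illustration, here is a minimal sketch of the general idea: stream shuffled index batches lazily with `tf.data` instead of materializing one big permutation array per epoch. This is only an assumption about the approach, not the code in this PR, and note that buffer-based shuffling is only approximate:
```python
import tensorflow as tf

num_samples = 10_000_000  # size of a very large dataset
batch_size = 32

# Yield batches of shuffled indices lazily; only `buffer_size`
# indices are held in memory at any time, and the order is
# reshuffled on every epoch.
index_dataset = (
    tf.data.Dataset.range(num_samples)
    .shuffle(buffer_size=50_000, reshuffle_each_iteration=True)
    .batch(batch_size)
)

for index_batch in index_dataset.take(1):
    print(index_batch.shape)  # (32,)
```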
Fixes #5855 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5863/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5862/comments | https://api.github.com/repos/huggingface/datasets/issues/5862/events | https://github.com/huggingface/datasets/issues/5862 | 1,710,140,646 | I_kwDODunzps5l7qzm | 5,862 | IndexError: list index out of range with data hosted on Zenodo | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This error is also raised when data is hosted on Google Drive:\r\n- https://huggingface.co/datasets/docred/discussions/5\r\n- https://huggingface.co/datasets/linnaeus/discussions/3\r\n- https://huggingface.co/datasets/poleval2019_mt/discussions/3\r\n- https://huggingface.co/datasets/reddit_tifu/discussions/2\r\n- https://huggingface.co/datasets/species_800/discussions/3\r\n- https://huggingface.co/datasets/wiki_lingua/discussions/1\r\n- https://huggingface.co/datasets/yoruba_text_c3/discussions/1"
] | 2023-05-15T13:47:19 | 2023-06-16T14:54:02 | null | MEMBER | null | null | null | The dataset viewer sometimes raises an `IndexError`:
```
IndexError: list index out of range
```
See:
- huggingface/datasets-server#1151
- https://huggingface.co/datasets/reddit/discussions/5
- huggingface/datasets-server#1118
- https://huggingface.co/datasets/krr-oxford/OntoLAMA/discussions/1
- https://huggingface.co/datasets/hyperpartisan_news_detection/discussions/3
- https://huggingface.co/datasets/um005/discussions/2
- https://huggingface.co/datasets/tapaco/discussions/2
- https://huggingface.co/datasets/common_language/discussions/3
- https://huggingface.co/datasets/pass/discussions/1
After investigation:
- This happens with data files hosted on Zenodo
- Indeed, there is an underlying 429 HTTP error: Too Many Requests
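On the client side, one common way to cope with transient 429 responses is to retry with backoff, honoring the `Retry-After` header when the server sends one. A minimal sketch (the `get_with_backoff` helper is hypothetical and not part of `datasets` or `datasets-server`):
```python
import time

import requests


def get_with_backoff(url: str, max_retries: int = 5, base_delay: float = 1.0) -> requests.Response:
    """Retry a download when the host rate-limits us with HTTP 429."""
    for attempt in range(max_retries):
        response = requests.get(url)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        # Honor a numeric Retry-After header if the server sends one,
        # otherwise back off exponentially (1s, 2s, 4s, ...).
        retry_after = response.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else base_delay * 2**attempt
        time.sleep(delay)
    raise RuntimeError(f"Still rate-limited by {url} after {max_retries} retries")
```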
Note that some time ago, it also happened with data files hosted on Google Drive. See:
- #4581
- #4580
At that time, the underlying cause was a 403 HTTP error: Forbidden
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5862/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5861/comments | https://api.github.com/repos/huggingface/datasets/issues/5861/events | https://github.com/huggingface/datasets/pull/5861 | 1,709,807,340 | PR_kwDODunzps5Qf55q | 5,861 | Better error message when combining dataset dicts instead of datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007167 / 0.011353 (-0.004185) | 0.004914 / 0.011008 (-0.006094) | 0.096858 / 0.038508 (0.058350) | 0.033468 / 0.023109 (0.010359) | 0.297276 / 0.275898 (0.021378) | 0.344289 / 0.323480 (0.020809) | 0.005703 / 0.007986 (-0.002282) | 0.003972 / 0.004328 (-0.000357) | 0.075191 / 0.004250 (0.070940) | 0.046247 / 0.037052 (0.009194) | 0.317857 / 0.258489 (0.059368) | 0.347263 / 0.293841 (0.053422) | 0.035017 / 0.128546 (-0.093529) | 0.012036 / 0.075646 (-0.063611) | 0.332522 / 0.419271 (-0.086750) | 0.050188 / 0.043533 (0.006655) | 0.296627 / 0.255139 (0.041488) | 0.319196 / 0.283200 (0.035997) | 0.101100 / 0.141683 (-0.040583) | 1.484536 / 1.452155 (0.032382) | 1.606364 / 1.492716 (0.113648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203954 / 0.018006 (0.185948) | 0.436505 / 0.000490 (0.436015) | 0.003853 / 0.000200 (0.003654) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025834 / 0.037411 (-0.011578) | 0.105759 / 0.014526 (0.091233) | 0.114289 / 0.176557 (-0.062268) | 0.174388 / 0.737135 (-0.562748) | 0.122248 / 0.296338 (-0.174090) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404218 / 0.215209 (0.189009) | 4.027900 / 2.077655 (1.950245) | 1.854757 / 1.504120 (0.350637) | 1.668882 / 1.541195 (0.127687) | 1.731451 / 1.468490 
(0.262961) | 0.707843 / 4.584777 (-3.876934) | 3.756386 / 3.745712 (0.010674) | 2.067751 / 5.269862 (-3.202110) | 1.313039 / 4.565676 (-3.252638) | 0.086442 / 0.424275 (-0.337833) | 0.012329 / 0.007607 (0.004722) | 0.505964 / 0.226044 (0.279919) | 5.050788 / 2.268929 (2.781860) | 2.353936 / 55.444624 (-53.090688) | 2.055560 / 6.876477 (-4.820917) | 2.162948 / 2.142072 (0.020876) | 0.850532 / 4.805227 (-3.954696) | 0.168560 / 6.500664 (-6.332104) | 0.063143 / 0.075469 (-0.012326) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182723 / 1.841788 (-0.659065) | 14.779342 / 8.074308 (6.705034) | 14.461572 / 10.191392 (4.270180) | 0.163120 / 0.680424 (-0.517303) | 0.017978 / 0.534201 (-0.516223) | 0.419168 / 0.579283 (-0.160115) | 0.420955 / 0.434364 (-0.013409) | 0.509710 / 0.540337 (-0.030628) | 0.619586 / 1.386936 (-0.767350) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006804 / 0.011353 (-0.004549) | 0.005136 / 0.011008 (-0.005872) | 0.074910 / 0.038508 (0.036402) | 0.032552 / 0.023109 (0.009443) | 0.374998 / 0.275898 (0.099100) | 0.399219 / 0.323480 (0.075739) | 0.005615 / 0.007986 (-0.002371) | 0.004118 / 0.004328 (-0.000210) | 0.074219 / 0.004250 (0.069969) | 0.045924 / 0.037052 (0.008871) | 0.383228 / 0.258489 (0.124739) | 0.407195 / 0.293841 (0.113354) | 0.035460 / 0.128546 (-0.093086) | 0.012460 / 0.075646 (-0.063187) | 0.087077 / 0.419271 (-0.332195) | 0.050507 / 0.043533 (0.006974) | 0.369001 / 0.255139 (0.113862) | 0.385761 / 0.283200 (0.102561) | 0.106999 / 0.141683 (-0.034684) | 1.465456 / 1.452155 (0.013302) | 1.556962 / 1.492716 (0.064246) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214926 / 0.018006 (0.196920) | 0.436893 / 0.000490 (0.436403) | 0.003388 / 0.000200 (0.003188) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029919 / 0.037411 (-0.007492) | 0.110859 / 0.014526 (0.096333) | 0.120617 / 0.176557 (-0.055939) | 0.171781 / 0.737135 (-0.565355) | 0.125627 / 0.296338 (-0.170712) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436024 / 0.215209 (0.220815) | 4.359167 / 2.077655 (2.281512) | 2.188399 / 1.504120 (0.684279) | 2.001196 / 1.541195 (0.460001) | 2.023710 / 1.468490 (0.555220) | 0.713799 / 4.584777 (-3.870978) | 3.832217 / 3.745712 (0.086504) | 3.269351 / 5.269862 (-2.000510) | 1.534608 / 4.565676 (-3.031068) | 0.088505 / 0.424275 (-0.335770) | 0.012345 / 0.007607 (0.004738) | 0.542446 / 0.226044 (0.316401) | 5.377757 / 2.268929 (3.108828) | 2.659837 / 55.444624 (-52.784787) | 2.272356 / 6.876477 (-4.604120) | 2.297289 / 2.142072 (0.155217) | 0.855276 / 4.805227 (-3.949952) | 0.170666 / 6.500664 (-6.329998) | 0.064549 / 0.075469 (-0.010920) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255938 / 1.841788 (-0.585850) | 15.151471 / 8.074308 (7.077163) | 12.905762 / 10.191392 (2.714370) | 0.162425 / 0.680424 (-0.517999) | 0.017504 / 0.534201 (-0.516697) | 0.448671 / 0.579283 (-0.130612) | 0.422424 / 0.434364 (-0.011940) | 0.551772 / 0.540337 (0.011434) | 0.649115 / 1.386936 (-0.737821) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#be73d9f192149727c5542ff257df81b03024fa39 \"CML watermark\")\n",
"Having those different checks helps providing an appropriate error message.\r\n\r\nIf the input is a dict, we suggest to select a split. If the input lists is a mix of iterable and non-iterable, we mention that it must be one or the other.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006559 / 0.011353 (-0.004794) | 0.004569 / 0.011008 (-0.006439) | 0.104503 / 0.038508 (0.065995) | 0.028220 / 0.023109 (0.005111) | 0.365507 / 0.275898 (0.089609) | 0.400238 / 0.323480 (0.076758) | 0.004968 / 0.007986 (-0.003017) | 0.003271 / 0.004328 (-0.001057) | 0.082804 / 0.004250 (0.078554) | 0.036299 / 0.037052 (-0.000754) | 0.361201 / 0.258489 (0.102712) | 0.410962 / 0.293841 (0.117121) | 0.030423 / 0.128546 (-0.098123) | 0.011612 / 0.075646 (-0.064034) | 0.331820 / 0.419271 (-0.087452) | 0.043822 / 0.043533 (0.000289) | 0.356242 / 0.255139 (0.101103) | 0.393035 / 0.283200 (0.109836) | 0.088426 / 0.141683 (-0.053257) | 1.484139 / 1.452155 (0.031984) | 1.566712 / 1.492716 (0.073995) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195887 / 0.018006 (0.177880) | 0.402720 / 0.000490 (0.402231) | 0.003516 / 0.000200 (0.003316) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023270 / 0.037411 (-0.014141) | 0.095834 / 0.014526 (0.081308) | 0.102924 / 0.176557 (-0.073632) | 0.161397 / 0.737135 (-0.575738) | 0.105225 / 0.296338 (-0.191114) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451701 / 0.215209 (0.236491) | 4.495171 / 2.077655 (2.417517) | 2.223203 / 1.504120 (0.719083) | 2.035533 / 1.541195 (0.494338) | 2.076182 / 1.468490 
(0.607692) | 0.697317 / 4.584777 (-3.887460) | 3.406309 / 3.745712 (-0.339403) | 1.847179 / 5.269862 (-3.422683) | 1.158762 / 4.565676 (-3.406914) | 0.083067 / 0.424275 (-0.341208) | 0.012453 / 0.007607 (0.004846) | 0.546502 / 0.226044 (0.320458) | 5.455712 / 2.268929 (3.186784) | 2.654142 / 55.444624 (-52.790483) | 2.298722 / 6.876477 (-4.577755) | 2.383467 / 2.142072 (0.241395) | 0.805950 / 4.805227 (-3.999278) | 0.152479 / 6.500664 (-6.348185) | 0.066784 / 0.075469 (-0.008685) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239129 / 1.841788 (-0.602659) | 13.603707 / 8.074308 (5.529398) | 14.062004 / 10.191392 (3.870612) | 0.130928 / 0.680424 (-0.549495) | 0.016907 / 0.534201 (-0.517294) | 0.381614 / 0.579283 (-0.197670) | 0.386770 / 0.434364 (-0.047594) | 0.455792 / 0.540337 (-0.084545) | 0.526092 / 1.386936 (-0.860844) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006202 / 0.011353 (-0.005151) | 0.004478 / 0.011008 (-0.006531) | 0.076492 / 0.038508 (0.037984) | 0.026703 / 0.023109 (0.003594) | 0.355134 / 0.275898 (0.079236) | 0.391207 / 0.323480 (0.067727) | 0.004852 / 0.007986 (-0.003133) | 0.003271 / 0.004328 (-0.001057) | 0.075080 / 0.004250 (0.070830) | 0.038803 / 0.037052 (0.001750) | 0.359530 / 0.258489 (0.101041) | 0.409044 / 0.293841 (0.115203) | 0.030366 / 0.128546 (-0.098180) | 0.011544 / 0.075646 (-0.064102) | 0.084849 / 0.419271 (-0.334423) | 0.040076 / 0.043533 (-0.003457) | 0.357359 / 0.255139 (0.102220) | 0.384075 / 0.283200 (0.100875) | 0.089130 / 0.141683 (-0.052552) | 1.520400 / 1.452155 (0.068246) | 1.604403 / 1.492716 (0.111687) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257127 / 0.018006 (0.239121) | 0.403691 / 0.000490 (0.403202) | 0.006894 / 0.000200 (0.006694) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024653 / 0.037411 (-0.012758) | 0.098834 / 0.014526 (0.084309) | 0.107276 / 0.176557 (-0.069281) | 0.158256 / 0.737135 (-0.578879) | 0.111339 / 0.296338 (-0.184999) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445006 / 0.215209 (0.229797) | 4.452953 / 2.077655 (2.375299) | 2.168291 / 1.504120 (0.664171) | 1.969457 / 1.541195 (0.428262) | 2.003505 / 1.468490 (0.535015) | 0.695857 / 4.584777 (-3.888920) | 3.433424 / 3.745712 (-0.312288) | 2.466977 / 5.269862 (-2.802885) | 1.528167 / 4.565676 (-3.037509) | 0.082425 / 0.424275 (-0.341850) | 0.012470 / 0.007607 (0.004863) | 0.559039 / 0.226044 (0.332995) | 5.609496 / 2.268929 (3.340568) | 2.602898 / 55.444624 (-52.841726) | 2.273971 / 6.876477 (-4.602506) | 2.303370 / 2.142072 (0.161298) | 0.803875 / 4.805227 (-4.001352) | 0.151069 / 6.500664 (-6.349595) | 0.067956 / 0.075469 (-0.007513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334443 / 1.841788 (-0.507345) | 13.773252 / 8.074308 (5.698944) | 13.007042 / 10.191392 (2.815650) | 0.127939 / 0.680424 (-0.552485) | 0.016412 / 0.534201 (-0.517789) | 0.374744 / 0.579283 (-0.204539) | 0.396912 / 0.434364 (-0.037452) | 0.443197 / 0.540337 (-0.097140) | 0.528338 / 1.386936 (-0.858598) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#51d9f2a3064aa89a780e3d02c6cc34000c51c4fb \"CML watermark\")\n",
"Just modified it to use only one loop. I think I managed to keep it readable as well",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007382 / 0.011353 (-0.003971) | 0.005143 / 0.011008 (-0.005865) | 0.097635 / 0.038508 (0.059127) | 0.034726 / 0.023109 (0.011616) | 0.315556 / 0.275898 (0.039658) | 0.355951 / 0.323480 (0.032472) | 0.006055 / 0.007986 (-0.001931) | 0.004264 / 0.004328 (-0.000065) | 0.073636 / 0.004250 (0.069386) | 0.050480 / 0.037052 (0.013428) | 0.316031 / 0.258489 (0.057542) | 0.363933 / 0.293841 (0.070092) | 0.035138 / 0.128546 (-0.093408) | 0.012407 / 0.075646 (-0.063239) | 0.333677 / 0.419271 (-0.085595) | 0.050586 / 0.043533 (0.007053) | 0.309507 / 0.255139 (0.054369) | 0.327043 / 0.283200 (0.043844) | 0.108975 / 0.141683 (-0.032708) | 1.447778 / 1.452155 (-0.004377) | 1.519971 / 1.492716 (0.027255) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248770 / 0.018006 (0.230764) | 0.603036 / 0.000490 (0.602546) | 0.000383 / 0.000200 (0.000183) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027094 / 0.037411 (-0.010317) | 0.104427 / 0.014526 (0.089901) | 0.120627 / 0.176557 (-0.055929) | 0.178790 / 0.737135 (-0.558346) | 0.124877 / 0.296338 (-0.171461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414442 / 0.215209 (0.199233) | 4.138009 / 2.077655 (2.060355) | 1.964642 / 1.504120 (0.460523) | 1.775940 / 1.541195 (0.234745) | 1.899719 / 1.468490 
(0.431228) | 0.695406 / 4.584777 (-3.889371) | 3.760470 / 3.745712 (0.014758) | 3.906958 / 5.269862 (-1.362904) | 2.028164 / 4.565676 (-2.537513) | 0.086704 / 0.424275 (-0.337571) | 0.012465 / 0.007607 (0.004857) | 0.512336 / 0.226044 (0.286292) | 5.108587 / 2.268929 (2.839659) | 2.435273 / 55.444624 (-53.009352) | 2.142387 / 6.876477 (-4.734090) | 2.258234 / 2.142072 (0.116162) | 0.854035 / 4.805227 (-3.951193) | 0.170443 / 6.500664 (-6.330222) | 0.065762 / 0.075469 (-0.009707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187529 / 1.841788 (-0.654259) | 15.151164 / 8.074308 (7.076856) | 14.577545 / 10.191392 (4.386153) | 0.166973 / 0.680424 (-0.513450) | 0.017883 / 0.534201 (-0.516318) | 0.427607 / 0.579283 (-0.151676) | 0.417050 / 0.434364 (-0.017314) | 0.508116 / 0.540337 (-0.032221) | 0.590173 / 1.386936 (-0.796763) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007499 / 0.011353 (-0.003854) | 0.005195 / 0.011008 (-0.005813) | 0.073600 / 0.038508 (0.035091) | 0.033574 / 0.023109 (0.010464) | 0.377506 / 0.275898 (0.101608) | 0.432752 / 0.323480 (0.109272) | 0.006042 / 0.007986 (-0.001944) | 0.006427 / 0.004328 (0.002098) | 0.071666 / 0.004250 (0.067416) | 0.053243 / 0.037052 (0.016190) | 0.363972 / 0.258489 (0.105483) | 0.454988 / 0.293841 (0.161147) | 0.035118 / 0.128546 (-0.093428) | 0.012395 / 0.075646 (-0.063251) | 0.084308 / 0.419271 (-0.334963) | 0.048589 / 0.043533 (0.005057) | 0.368036 / 0.255139 (0.112897) | 0.399414 / 0.283200 (0.116215) | 0.109043 / 0.141683 (-0.032640) | 1.462972 / 1.452155 (0.010817) | 1.574443 / 1.492716 (0.081726) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215107 / 0.018006 (0.197101) | 0.550255 / 0.000490 (0.549765) | 0.004630 / 0.000200 (0.004430) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029948 / 0.037411 (-0.007463) | 0.111866 / 0.014526 (0.097340) | 0.126559 / 0.176557 (-0.049997) | 0.181443 / 0.737135 (-0.555693) | 0.130559 / 0.296338 (-0.165779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441410 / 0.215209 (0.226201) | 4.403406 / 2.077655 (2.325752) | 2.180276 / 1.504120 (0.676156) | 2.003729 / 1.541195 (0.462534) | 2.079394 / 1.468490 (0.610904) | 0.706061 / 4.584777 (-3.878716) | 3.805668 / 3.745712 (0.059956) | 3.864941 / 5.269862 (-1.404921) | 1.970468 / 4.565676 (-2.595208) | 0.086033 / 0.424275 (-0.338242) | 0.012261 / 0.007607 (0.004654) | 0.550427 / 0.226044 (0.324383) | 5.542270 / 2.268929 (3.273342) | 2.717047 / 55.444624 (-52.727577) | 2.449022 / 6.876477 (-4.427455) | 2.549567 / 2.142072 (0.407495) | 0.854981 / 4.805227 (-3.950247) | 0.169756 / 6.500664 (-6.330908) | 0.067082 / 0.075469 (-0.008387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281369 / 1.841788 (-0.560419) | 15.445090 / 8.074308 (7.370781) | 13.205652 / 10.191392 (3.014260) | 0.170070 / 0.680424 (-0.510354) | 0.017815 / 0.534201 (-0.516385) | 0.425193 / 0.579283 (-0.154090) | 0.425205 / 0.434364 (-0.009159) | 0.493561 / 0.540337 (-0.046776) | 0.588994 / 1.386936 (-0.797942) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e427105fc68fce04d0f3c74efb942cbf3a65d166 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006345 / 0.011353 (-0.005008) | 0.004330 / 0.011008 (-0.006678) | 0.096327 / 0.038508 (0.057819) | 0.032964 / 0.023109 (0.009855) | 0.335600 / 0.275898 (0.059702) | 0.365635 / 0.323480 (0.042155) | 0.005435 / 0.007986 (-0.002551) | 0.005005 / 0.004328 (0.000677) | 0.071107 / 0.004250 (0.066856) | 0.044363 / 0.037052 (0.007311) | 0.339988 / 0.258489 (0.081498) | 0.375575 / 0.293841 (0.081734) | 0.028343 / 0.128546 (-0.100203) | 0.008587 / 0.075646 (-0.067059) | 0.324349 / 0.419271 (-0.094922) | 0.050105 / 0.043533 (0.006573) | 0.327398 / 0.255139 (0.072259) | 0.348479 / 0.283200 (0.065279) | 0.102357 / 0.141683 (-0.039326) | 1.419905 / 1.452155 (-0.032250) | 1.534887 / 1.492716 (0.042171) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212418 / 0.018006 (0.194412) | 0.433183 / 0.000490 (0.432693) | 0.000595 / 0.000200 (0.000395) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027520 / 0.037411 (-0.009891) | 0.109503 / 0.014526 (0.094977) | 0.118202 / 0.176557 (-0.058355) | 0.177236 / 0.737135 (-0.559899) | 0.123736 / 0.296338 (-0.172602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405734 / 0.215209 (0.190525) | 4.039566 / 2.077655 (1.961911) | 1.838211 / 1.504120 (0.334091) | 1.652650 / 1.541195 (0.111456) | 1.753488 / 1.468490 
(0.284998) | 0.525258 / 4.584777 (-4.059519) | 3.704509 / 3.745712 (-0.041203) | 1.826794 / 5.269862 (-3.443067) | 1.236361 / 4.565676 (-3.329315) | 0.065619 / 0.424275 (-0.358656) | 0.011606 / 0.007607 (0.003999) | 0.505954 / 0.226044 (0.279910) | 5.054140 / 2.268929 (2.785211) | 2.352587 / 55.444624 (-53.092037) | 2.050601 / 6.876477 (-4.825875) | 2.097222 / 2.142072 (-0.044850) | 0.641044 / 4.805227 (-4.164183) | 0.140676 / 6.500664 (-6.359988) | 0.063217 / 0.075469 (-0.012253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.177750 / 1.841788 (-0.664038) | 14.819346 / 8.074308 (6.745038) | 14.085937 / 10.191392 (3.894545) | 0.168618 / 0.680424 (-0.511806) | 0.017189 / 0.534201 (-0.517011) | 0.393415 / 0.579283 (-0.185868) | 0.422879 / 0.434364 (-0.011485) | 0.477289 / 0.540337 (-0.063048) | 0.569078 / 1.386936 (-0.817858) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006502 / 0.011353 (-0.004850) | 0.004640 / 0.011008 (-0.006368) | 0.073272 / 0.038508 (0.034764) | 0.033225 / 0.023109 (0.010116) | 0.359165 / 0.275898 (0.083267) | 0.391659 / 0.323480 (0.068179) | 0.005684 / 0.007986 (-0.002302) | 0.004045 / 0.004328 (-0.000284) | 0.072880 / 0.004250 (0.068629) | 0.046260 / 0.037052 (0.009208) | 0.361772 / 0.258489 (0.103283) | 0.402905 / 0.293841 (0.109064) | 0.027732 / 0.128546 (-0.100814) | 0.008864 / 0.075646 (-0.066783) | 0.081961 / 0.419271 (-0.337310) | 0.046170 / 0.043533 (0.002637) | 0.364198 / 0.255139 (0.109059) | 0.387468 / 0.283200 (0.104269) | 0.105456 / 0.141683 (-0.036227) | 1.457176 / 1.452155 (0.005021) | 1.564899 / 1.492716 (0.072183) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.179129 / 0.018006 (0.161123) | 0.439699 / 0.000490 (0.439209) | 0.002882 / 0.000200 (0.002682) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029123 / 0.037411 (-0.008288) | 0.112046 / 0.014526 (0.097520) | 0.122773 / 0.176557 (-0.053784) | 0.178404 / 0.737135 (-0.558732) | 0.127904 / 0.296338 (-0.168434) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440413 / 0.215209 (0.225204) | 4.407334 / 2.077655 (2.329680) | 2.112932 / 1.504120 (0.608812) | 1.911034 / 1.541195 (0.369840) | 2.057168 / 1.468490 (0.588677) | 0.525472 / 4.584777 (-4.059305) | 3.738894 / 3.745712 (-0.006818) | 1.807592 / 5.269862 (-3.462270) | 1.053837 / 4.565676 (-3.511839) | 0.066203 / 0.424275 (-0.358072) | 0.011965 / 0.007607 (0.004358) | 0.541137 / 0.226044 (0.315093) | 5.415040 / 2.268929 (3.146112) | 2.580476 / 55.444624 (-52.864148) | 2.234144 / 6.876477 (-4.642333) | 2.306014 / 2.142072 (0.163942) | 0.644221 / 4.805227 (-4.161006) | 0.142870 / 6.500664 (-6.357794) | 0.065015 / 0.075469 (-0.010454) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303465 / 1.841788 (-0.538323) | 14.949683 / 8.074308 (6.875375) | 14.370871 / 10.191392 (4.179478) | 0.142714 / 0.680424 (-0.537710) | 0.017372 / 0.534201 (-0.516829) | 0.403898 / 0.579283 (-0.175385) | 0.424781 / 0.434364 (-0.009583) | 0.465984 / 0.540337 (-0.074353) | 0.570863 / 1.386936 (-0.816074) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#22d1d533e8ab831b1aa1aab3e7d3c72ba42a83e8 \"CML watermark\")\n"
] | 2023-05-15T10:36:24 | 2023-05-23T10:40:13 | 2023-05-23T10:32:58 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5861",
"html_url": "https://github.com/huggingface/datasets/pull/5861",
"diff_url": "https://github.com/huggingface/datasets/pull/5861.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5861.patch",
"merged_at": "2023-05-23T10:32:58"
} | close https://github.com/huggingface/datasets/issues/5851 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5861/timeline | null | null | true |
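For reference on the checks discussed in the comments of the issue above (#5861), here is a minimal sketch of that kind of input validation in a single pass; `check_datasets_to_combine` is a hypothetical helper, and the actual implementation in `datasets` may differ:
```python
from typing import List, Union

from datasets import Dataset, DatasetDict, IterableDataset


def check_datasets_to_combine(dsets: List[Union[Dataset, IterableDataset]]) -> None:
    # Hypothetical helper: illustrates the checks described above
    # using a single loop over the inputs.
    iterable_flags = []
    for i, ds in enumerate(dsets):
        if isinstance(ds, DatasetDict):
            raise ValueError(
                f"Expected a Dataset or IterableDataset at position {i}, got a DatasetDict. "
                "Did you mean to select a split first, e.g. dataset_dict['train']?"
            )
        iterable_flags.append(isinstance(ds, IterableDataset))
    if any(iterable_flags) and not all(iterable_flags):
        raise ValueError(
            "Unable to combine a mix of iterable and non-iterable datasets: "
            "they must all be of the same type."
        )
```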
https://api.github.com/repos/huggingface/datasets/issues/5860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5860/comments | https://api.github.com/repos/huggingface/datasets/issues/5860/events | https://github.com/huggingface/datasets/pull/5860 | 1,709,727,460 | PR_kwDODunzps5QfojD | 5,860 | Minor tqdm optim | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006917 / 0.011353 (-0.004436) | 0.004803 / 0.011008 (-0.006205) | 0.097082 / 0.038508 (0.058574) | 0.035105 / 0.023109 (0.011996) | 0.325911 / 0.275898 (0.050013) | 0.371858 / 0.323480 (0.048378) | 0.006451 / 0.007986 (-0.001534) | 0.004421 / 0.004328 (0.000093) | 0.075738 / 0.004250 (0.071487) | 0.053624 / 0.037052 (0.016572) | 0.332661 / 0.258489 (0.074172) | 0.372729 / 0.293841 (0.078888) | 0.028279 / 0.128546 (-0.100267) | 0.009318 / 0.075646 (-0.066328) | 0.328505 / 0.419271 (-0.090766) | 0.066962 / 0.043533 (0.023429) | 0.316863 / 0.255139 (0.061724) | 0.344296 / 0.283200 (0.061096) | 0.120575 / 0.141683 (-0.021108) | 1.457867 / 1.452155 (0.005712) | 1.597361 / 1.492716 (0.104644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296399 / 0.018006 (0.278392) | 0.507196 / 0.000490 (0.506706) | 0.003036 / 0.000200 (0.002836) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028535 / 0.037411 (-0.008876) | 0.110566 / 0.014526 (0.096040) | 0.122078 / 0.176557 (-0.054479) | 0.182926 / 0.737135 (-0.554210) | 0.125546 / 0.296338 (-0.170792) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426952 / 0.215209 (0.211742) | 4.255608 / 2.077655 (2.177953) | 2.063865 / 1.504120 (0.559745) | 1.867198 / 1.541195 (0.326004) | 2.058236 / 1.468490 
(0.589746) | 0.525885 / 4.584777 (-4.058892) | 3.723607 / 3.745712 (-0.022105) | 1.919144 / 5.269862 (-3.350718) | 1.235308 / 4.565676 (-3.330368) | 0.066423 / 0.424275 (-0.357852) | 0.012045 / 0.007607 (0.004438) | 0.528432 / 0.226044 (0.302388) | 5.268723 / 2.268929 (2.999794) | 2.504071 / 55.444624 (-52.940553) | 2.137999 / 6.876477 (-4.738477) | 2.229987 / 2.142072 (0.087914) | 0.641739 / 4.805227 (-4.163488) | 0.142635 / 6.500664 (-6.358029) | 0.065649 / 0.075469 (-0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182710 / 1.841788 (-0.659078) | 15.339777 / 8.074308 (7.265469) | 14.722308 / 10.191392 (4.530916) | 0.145914 / 0.680424 (-0.534510) | 0.017861 / 0.534201 (-0.516340) | 0.393092 / 0.579283 (-0.186191) | 0.431179 / 0.434364 (-0.003185) | 0.485712 / 0.540337 (-0.054625) | 0.602634 / 1.386936 (-0.784302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006792 / 0.011353 (-0.004561) | 0.005118 / 0.011008 (-0.005890) | 0.073440 / 0.038508 (0.034932) | 0.033751 / 0.023109 (0.010642) | 0.389243 / 0.275898 (0.113345) | 0.397083 / 0.323480 (0.073603) | 0.005989 / 0.007986 (-0.001997) | 0.004289 / 0.004328 (-0.000040) | 0.073228 / 0.004250 (0.068977) | 0.053490 / 0.037052 (0.016438) | 0.396070 / 0.258489 (0.137581) | 0.415134 / 0.293841 (0.121293) | 0.028649 / 0.128546 (-0.099897) | 0.009159 / 0.075646 (-0.066487) | 0.080813 / 0.419271 (-0.338458) | 0.048200 / 0.043533 (0.004667) | 0.388009 / 0.255139 (0.132870) | 0.382174 / 0.283200 (0.098975) | 0.107807 / 0.141683 (-0.033876) | 1.467276 / 1.452155 (0.015121) | 1.568091 / 1.492716 (0.075375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.328030 / 0.018006 (0.310024) | 0.498058 / 0.000490 (0.497568) | 0.002513 / 0.000200 (0.002313) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029835 / 0.037411 (-0.007576) | 0.113859 / 0.014526 (0.099333) | 0.130813 / 0.176557 (-0.045743) | 0.183646 / 0.737135 (-0.553490) | 0.136561 / 0.296338 (-0.159777) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438901 / 0.215209 (0.223692) | 4.376426 / 2.077655 (2.298771) | 2.220932 / 1.504120 (0.716812) | 2.043585 / 1.541195 (0.502390) | 2.161383 / 1.468490 (0.692893) | 0.523224 / 4.584777 (-4.061553) | 3.730589 / 3.745712 (-0.015123) | 1.859602 / 5.269862 (-3.410260) | 1.073415 / 4.565676 (-3.492261) | 0.066363 / 0.424275 (-0.357912) | 0.012491 / 0.007607 (0.004884) | 0.542052 / 0.226044 (0.316008) | 5.426246 / 2.268929 (3.157318) | 2.673884 / 55.444624 (-52.770740) | 2.372611 / 6.876477 (-4.503865) | 2.482216 / 2.142072 (0.340143) | 0.705669 / 4.805227 (-4.099558) | 0.141075 / 6.500664 (-6.359589) | 0.065339 / 0.075469 (-0.010130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316403 / 1.841788 (-0.525385) | 15.832870 / 8.074308 (7.758562) | 13.307045 / 10.191392 (3.115653) | 0.147258 / 0.680424 (-0.533166) | 0.017966 / 0.534201 (-0.516235) | 0.414396 / 0.579283 (-0.164887) | 0.431801 / 0.434364 (-0.002563) | 0.465483 / 0.540337 (-0.074855) | 0.577850 / 1.386936 (-0.809086) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c795c7e332a7c850c3e725f2034d4894b5e314f7 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006368 / 0.011353 (-0.004985) | 0.004274 / 0.011008 (-0.006734) | 0.098799 / 0.038508 (0.060291) | 0.029096 / 0.023109 (0.005986) | 0.308009 / 0.275898 (0.032111) | 0.345701 / 0.323480 (0.022221) | 0.005312 / 0.007986 (-0.002674) | 0.003435 / 0.004328 (-0.000894) | 0.075912 / 0.004250 (0.071662) | 0.041993 / 0.037052 (0.004941) | 0.320075 / 0.258489 (0.061586) | 0.347506 / 0.293841 (0.053665) | 0.025456 / 0.128546 (-0.103091) | 0.008461 / 0.075646 (-0.067185) | 0.322823 / 0.419271 (-0.096448) | 0.044650 / 0.043533 (0.001117) | 0.314118 / 0.255139 (0.058979) | 0.333436 / 0.283200 (0.050237) | 0.093811 / 0.141683 (-0.047871) | 1.464464 / 1.452155 (0.012310) | 1.548098 / 1.492716 (0.055382) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.015905 / 0.018006 (-0.002101) | 0.427847 / 0.000490 (0.427357) | 0.007600 / 0.000200 (0.007400) | 0.000421 / 0.000054 (0.000366) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024530 / 0.037411 (-0.012882) | 0.099907 / 0.014526 (0.085381) | 0.107282 / 0.176557 (-0.069275) | 0.168332 / 0.737135 (-0.568804) | 0.109875 / 0.296338 (-0.186464) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451064 / 0.215209 (0.235855) | 4.491434 / 2.077655 (2.413779) | 2.253251 / 1.504120 (0.749131) | 2.086740 / 1.541195 (0.545545) | 2.133288 / 1.468490 
(0.664798) | 0.558801 / 4.584777 (-4.025976) | 3.463525 / 3.745712 (-0.282187) | 1.747657 / 5.269862 (-3.522205) | 1.005465 / 4.565676 (-3.560211) | 0.068341 / 0.424275 (-0.355934) | 0.012521 / 0.007607 (0.004914) | 0.567002 / 0.226044 (0.340957) | 5.689529 / 2.268929 (3.420601) | 2.700562 / 55.444624 (-52.744062) | 2.384888 / 6.876477 (-4.491589) | 2.503160 / 2.142072 (0.361088) | 0.667107 / 4.805227 (-4.138120) | 0.137253 / 6.500664 (-6.363412) | 0.068300 / 0.075469 (-0.007170) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202916 / 1.841788 (-0.638872) | 14.163393 / 8.074308 (6.089085) | 14.402463 / 10.191392 (4.211071) | 0.145273 / 0.680424 (-0.535151) | 0.016996 / 0.534201 (-0.517205) | 0.363520 / 0.579283 (-0.215763) | 0.421595 / 0.434364 (-0.012769) | 0.438413 / 0.540337 (-0.101925) | 0.508615 / 1.386936 (-0.878321) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006419 / 0.011353 (-0.004934) | 0.004346 / 0.011008 (-0.006662) | 0.076356 / 0.038508 (0.037848) | 0.029370 / 0.023109 (0.006260) | 0.371046 / 0.275898 (0.095148) | 0.398279 / 0.323480 (0.074799) | 0.005258 / 0.007986 (-0.002728) | 0.003528 / 0.004328 (-0.000800) | 0.076787 / 0.004250 (0.072537) | 0.041575 / 0.037052 (0.004522) | 0.362319 / 0.258489 (0.103830) | 0.402134 / 0.293841 (0.108293) | 0.025633 / 0.128546 (-0.102913) | 0.008826 / 0.075646 (-0.066820) | 0.082380 / 0.419271 (-0.336892) | 0.041655 / 0.043533 (-0.001878) | 0.357583 / 0.255139 (0.102444) | 0.383486 / 0.283200 (0.100287) | 0.093682 / 0.141683 (-0.048001) | 1.488522 / 1.452155 (0.036367) | 1.576090 / 1.492716 (0.083373) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185556 / 0.018006 (0.167550) | 0.431345 / 0.000490 (0.430855) | 0.002290 / 0.000200 (0.002090) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026030 / 0.037411 (-0.011382) | 0.102889 / 0.014526 (0.088364) | 0.109541 / 0.176557 (-0.067015) | 0.161050 / 0.737135 (-0.576085) | 0.113525 / 0.296338 (-0.182814) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445301 / 0.215209 (0.230092) | 4.437320 / 2.077655 (2.359666) | 2.174181 / 1.504120 (0.670061) | 1.977440 / 1.541195 (0.436245) | 2.036323 / 1.468490 (0.567832) | 0.554227 / 4.584777 (-4.030550) | 3.462746 / 3.745712 (-0.282966) | 1.765257 / 5.269862 (-3.504604) | 1.014515 / 4.565676 (-3.551161) | 0.068391 / 0.424275 (-0.355884) | 0.013154 / 0.007607 (0.005546) | 0.546696 / 0.226044 (0.320652) | 5.490628 / 2.268929 (3.221699) | 2.611947 / 55.444624 (-52.832677) | 2.282659 / 6.876477 (-4.593818) | 2.333972 / 2.142072 (0.191899) | 0.663140 / 4.805227 (-4.142087) | 0.137996 / 6.500664 (-6.362668) | 0.069063 / 0.075469 (-0.006407) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.332147 / 1.841788 (-0.509641) | 14.781592 / 8.074308 (6.707284) | 13.399190 / 10.191392 (3.207798) | 0.139370 / 0.680424 (-0.541054) | 0.016742 / 0.534201 (-0.517459) | 0.364138 / 0.579283 (-0.215146) | 0.402479 / 0.434364 (-0.031885) | 0.427591 / 0.540337 (-0.112746) | 0.520864 / 1.386936 (-0.866072) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a8279677b58b93f77995c7da67aea2a04b6a7395 \"CML watermark\")\n"
] | 2023-05-15T09:49:37 | 2023-05-17T18:46:46 | 2023-05-17T18:39:35 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5860",
"html_url": "https://github.com/huggingface/datasets/pull/5860",
"diff_url": "https://github.com/huggingface/datasets/pull/5860.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5860.patch",
"merged_at": "2023-05-17T18:39:35"
} | Don't create a tqdm progress bar when `disable_tqdm` is passed to `map_nested`.
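For context, a minimal hedged sketch of the kind of call this affects; `map_nested` lives in `datasets.utils.py_utils` and the exact signature may vary across versions:
```python
from datasets.utils.py_utils import map_nested

def square(x):
    return x * x

data = {"a": [1, 2, 3], "b": {"c": 4}}
# With disable_tqdm=True, the change described above skips creating the
# progress bar entirely instead of building a disabled one per call.
result = map_nested(square, data, disable_tqdm=True)
print(result)  # {'a': [1, 4, 9], 'b': {'c': 16}}
```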
On my side, it sped up some iterable datasets by ~30% when `map_nested` is used extensively to recursively tensorize Python dicts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5860/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5859/comments | https://api.github.com/repos/huggingface/datasets/issues/5859/events | https://github.com/huggingface/datasets/pull/5859 | 1,709,554,829 | PR_kwDODunzps5QfDLC | 5,859 | Raise TypeError when indexing a dataset with bool | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq any idea why this only fails (CI integration fails are unrelated) in \"Build PR Documentation / build / build_pr_documentation\" (which uses Python 3.8), with message:\r\n```\r\nTypeError: Type subscription requires python >= 3.9\r\n```\r\nwhereas the CI is green for unit tests, which use Python 3.7?",
"Hmm I don't know sorry :/",
"@lhoestq I am afraid I have to remove the generics I created for numpy and pandas (no subscriptable until Python 3.9) and just leave:\r\n```python\r\nListLike = Union[List[T], Tuple[T, ...]]\r\n```",
"Ok sounds good - no need to spend more time on this",
"I will merge once the CI is finished. The integration errors are unrelated: `502 Server Error: Bad Gateway`",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006637 / 0.011353 (-0.004716) | 0.004578 / 0.011008 (-0.006430) | 0.097346 / 0.038508 (0.058838) | 0.034171 / 0.023109 (0.011062) | 0.315060 / 0.275898 (0.039162) | 0.354386 / 0.323480 (0.030907) | 0.005778 / 0.007986 (-0.002207) | 0.004123 / 0.004328 (-0.000206) | 0.073839 / 0.004250 (0.069589) | 0.046418 / 0.037052 (0.009366) | 0.325910 / 0.258489 (0.067421) | 0.368909 / 0.293841 (0.075068) | 0.027975 / 0.128546 (-0.100571) | 0.008885 / 0.075646 (-0.066761) | 0.327956 / 0.419271 (-0.091316) | 0.049911 / 0.043533 (0.006378) | 0.309424 / 0.255139 (0.054285) | 0.346543 / 0.283200 (0.063343) | 0.103429 / 0.141683 (-0.038253) | 1.517606 / 1.452155 (0.065451) | 1.536685 / 1.492716 (0.043969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211552 / 0.018006 (0.193546) | 0.449583 / 0.000490 (0.449094) | 0.002949 / 0.000200 (0.002750) | 0.000140 / 0.000054 (0.000086) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027603 / 0.037411 (-0.009808) | 0.108873 / 0.014526 (0.094347) | 0.117990 / 0.176557 (-0.058567) | 0.174202 / 0.737135 (-0.562933) | 0.123793 / 0.296338 (-0.172545) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418449 / 0.215209 (0.203240) | 4.177753 / 2.077655 (2.100099) | 1.923446 / 1.504120 (0.419326) | 1.720576 / 1.541195 (0.179381) | 1.783723 / 1.468490 
(0.315232) | 0.530068 / 4.584777 (-4.054709) | 3.709410 / 3.745712 (-0.036302) | 1.863924 / 5.269862 (-3.405938) | 1.149906 / 4.565676 (-3.415770) | 0.066595 / 0.424275 (-0.357680) | 0.011733 / 0.007607 (0.004126) | 0.519249 / 0.226044 (0.293205) | 5.179676 / 2.268929 (2.910748) | 2.389488 / 55.444624 (-53.055137) | 2.060006 / 6.876477 (-4.816471) | 2.160668 / 2.142072 (0.018596) | 0.641081 / 4.805227 (-4.164146) | 0.141962 / 6.500664 (-6.358702) | 0.063146 / 0.075469 (-0.012323) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197424 / 1.841788 (-0.644364) | 14.915321 / 8.074308 (6.841013) | 14.792302 / 10.191392 (4.600910) | 0.145436 / 0.680424 (-0.534988) | 0.017669 / 0.534201 (-0.516532) | 0.399060 / 0.579283 (-0.180223) | 0.416282 / 0.434364 (-0.018082) | 0.498392 / 0.540337 (-0.041946) | 0.600242 / 1.386936 (-0.786694) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007246 / 0.011353 (-0.004106) | 0.005353 / 0.011008 (-0.005656) | 0.076357 / 0.038508 (0.037849) | 0.037662 / 0.023109 (0.014553) | 0.387862 / 0.275898 (0.111964) | 0.421610 / 0.323480 (0.098130) | 0.006424 / 0.007986 (-0.001561) | 0.004397 / 0.004328 (0.000069) | 0.074212 / 0.004250 (0.069961) | 0.054147 / 0.037052 (0.017095) | 0.393171 / 0.258489 (0.134682) | 0.424082 / 0.293841 (0.130241) | 0.029001 / 0.128546 (-0.099546) | 0.009381 / 0.075646 (-0.066265) | 0.082562 / 0.419271 (-0.336710) | 0.048004 / 0.043533 (0.004472) | 0.386895 / 0.255139 (0.131756) | 0.386104 / 0.283200 (0.102904) | 0.113714 / 0.141683 (-0.027969) | 1.435601 / 1.452155 (-0.016553) | 1.554940 / 1.492716 (0.062224) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.179288 / 0.018006 (0.161282) | 0.455301 / 0.000490 (0.454811) | 0.001469 / 0.000200 (0.001269) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030928 / 0.037411 (-0.006484) | 0.117833 / 0.014526 (0.103307) | 0.125088 / 0.176557 (-0.051468) | 0.178906 / 0.737135 (-0.558230) | 0.131264 / 0.296338 (-0.165075) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436900 / 0.215209 (0.221691) | 4.366094 / 2.077655 (2.288439) | 2.184398 / 1.504120 (0.680278) | 1.992779 / 1.541195 (0.451584) | 2.055260 / 1.468490 (0.586770) | 0.524136 / 4.584777 (-4.060641) | 3.750535 / 3.745712 (0.004823) | 2.985095 / 5.269862 (-2.284767) | 1.400291 / 4.565676 (-3.165385) | 0.065921 / 0.424275 (-0.358354) | 0.012110 / 0.007607 (0.004502) | 0.538239 / 0.226044 (0.312195) | 5.380613 / 2.268929 (3.111685) | 2.637509 / 55.444624 (-52.807116) | 2.352265 / 6.876477 (-4.524212) | 2.409829 / 2.142072 (0.267756) | 0.640428 / 4.805227 (-4.164799) | 0.142070 / 6.500664 (-6.358594) | 0.068171 / 0.075469 (-0.007298) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280080 / 1.841788 (-0.561707) | 15.588799 / 8.074308 (7.514491) | 14.648596 / 10.191392 (4.457204) | 0.147027 / 0.680424 (-0.533397) | 0.018981 / 0.534201 (-0.515220) | 0.394796 / 0.579283 (-0.184487) | 0.423686 / 0.434364 (-0.010678) | 0.467376 / 0.540337 (-0.072961) | 0.562247 / 1.386936 (-0.824689) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#680162303f4c5dae6ad2edef6b3efadded7d37bd \"CML watermark\")\n"
] | 2023-05-15T08:08:42 | 2023-05-25T16:31:24 | 2023-05-25T16:23:17 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5859",
"html_url": "https://github.com/huggingface/datasets/pull/5859",
"diff_url": "https://github.com/huggingface/datasets/pull/5859.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5859.patch",
"merged_at": "2023-05-25T16:23:17"
} | Fix #5858. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5859/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5858/comments | https://api.github.com/repos/huggingface/datasets/issues/5858/events | https://github.com/huggingface/datasets/issues/5858 | 1,709,332,632 | I_kwDODunzps5l4liY | 5,858 | Throw an error when dataset improperly indexed | {
"login": "sarahwie",
"id": 8027676,
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarahwie",
"html_url": "https://github.com/sarahwie",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @sarahwie.\r\n\r\nPlease note that in `datasets` we do not have vectorized operation like `pandas`. Therefore, your equality comparisons above are `False`:\r\n- For example: `squad['question']` returns a `list`, and this list is not equal to `\"Who was the Norse leader?\"`\r\n\r\nThe `False` value is equivalent to `0` when indexing a dataset, thus the reason why you get the first element (with index 0): \r\n- For example: `squad[False]` is equivalent to `squad[0]`\r\n\r\nMaybe we should an exception instead of assuming that `False` is equivalent to `0` (and `True` is equivalent to `1`) in the context of indexing."
] | 2023-05-15T05:15:53 | 2023-05-25T16:23:19 | 2023-05-25T16:23:19 | NONE | null | null | null | ### Describe the bug
Pandas-style subset indexing on a dataset does not throw an error when it arguably should. Instead, it silently returns the first instance of the dataset regardless of the index condition.
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. `squad = datasets.load_dataset("squad_v2", split="validation")`
2. `item = squad[squad['question'] == "Who was the Norse leader?"]`
or `it = squad[squad['id'] == '56ddde6b9a695914005b962b']`
3. Either expression returns the first item in the dataset, which satisfies neither condition:
`{'id': '56ddde6b9a695914005b9628', 'title': 'Normans', 'context': 'The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse ("Norman" comes from "Norseman") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.', 'question': 'In what country is Normandy located?', 'answers': {'text': ['France', 'France', 'France', 'France'], 'answer_start': [159, 159, 159, 159]}}`
### Expected behavior
The lookup should either throw an error or return the dataset item that satisfies the condition.
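For reference, a hedged sketch of how the intended lookup can be expressed with the existing `Dataset.filter` API (same `squad_v2` split as above):
```python
import datasets

squad = datasets.load_dataset("squad_v2", split="validation")

# Row-wise filtering instead of pandas-style boolean masking;
# the comparison `squad['question'] == "..."` evaluates to a plain bool here.
matches = squad.filter(lambda ex: ex["question"] == "Who was the Norse leader?")
print(matches[0] if len(matches) > 0 else "no match")
```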
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5858/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5857/comments | https://api.github.com/repos/huggingface/datasets/issues/5857/events | https://github.com/huggingface/datasets/issues/5857 | 1,709,326,622 | I_kwDODunzps5l4kEe | 5,857 | Adding chemistry dataset/models in huggingface | {
"login": "knc6",
"id": 16902896,
"node_id": "MDQ6VXNlcjE2OTAyODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/16902896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/knc6",
"html_url": "https://github.com/knc6",
"followers_url": "https://api.github.com/users/knc6/followers",
"following_url": "https://api.github.com/users/knc6/following{/other_user}",
"gists_url": "https://api.github.com/users/knc6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/knc6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/knc6/subscriptions",
"organizations_url": "https://api.github.com/users/knc6/orgs",
"repos_url": "https://api.github.com/users/knc6/repos",
"events_url": "https://api.github.com/users/knc6/events{/privacy}",
"received_events_url": "https://api.github.com/users/knc6/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! \r\n\r\nThis would be a nice addition to the Hub! You can find the existing chemistry datasets/models on the Hub (using the `chemistry` tag) [here](https://huggingface.co/search/full-text?q=chemistry&type=model&type=dataset).\r\n\r\nFeel free to ping us here on the Hub if you need help adding the datasets.\r\n"
] | 2023-05-15T05:09:49 | 2023-07-21T13:45:40 | 2023-07-21T13:45:40 | NONE | null | null | null | ### Feature request
Hugging Face is a really amazing platform for open science.
In addition to computer vision, video, and NLP, would it be of interest to add chemistry/materials-science datasets/models to Hugging Face? Or, if it's already done, can you provide some pointers?
We have been working on a comprehensive benchmark on this topic: [JARVIS-Leaderboard](https://pages.nist.gov/jarvis_leaderboard/), and I am wondering if we could contribute/integrate this project as a part of Hugging Face.
### Motivation
As in the mainstream AI field, there is a need for large-scale benchmarks/models/infrastructure for chemistry/materials data.
### Your contribution
We can start adding datasets, as our [benchmarks](https://github.com/usnistgov/jarvis_leaderboard/tree/main/jarvis_leaderboard/benchmarks) should be easily convertible to the dataset format.
"url": "https://api.github.com/repos/huggingface/datasets/issues/5857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5857/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5856/comments | https://api.github.com/repos/huggingface/datasets/issues/5856/events | https://github.com/huggingface/datasets/issues/5856 | 1,709,218,242 | I_kwDODunzps5l4JnC | 5,856 | Error loading natural_questions | {
"login": "Crownor",
"id": 19185508,
"node_id": "MDQ6VXNlcjE5MTg1NTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/19185508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Crownor",
"html_url": "https://github.com/Crownor",
"followers_url": "https://api.github.com/users/Crownor/followers",
"following_url": "https://api.github.com/users/Crownor/following{/other_user}",
"gists_url": "https://api.github.com/users/Crownor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Crownor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Crownor/subscriptions",
"organizations_url": "https://api.github.com/users/Crownor/orgs",
"repos_url": "https://api.github.com/users/Crownor/repos",
"events_url": "https://api.github.com/users/Crownor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Crownor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! You can avoid this error by using the preprocessed version:\r\n```python\r\nimport datasets\r\nds = datasets.load_dataset('natural_questions')\r\n```\r\n\r\nPS: Once we finish https://github.com/huggingface/datasets/pull/5364, this error will no longer be a problem.",
"> Hi! You can avoid this error by using the preprocessed version:\r\n> \r\n> ```python\r\n> import datasets\r\n> ds = datasets.load_dataset('natural_questions')\r\n> ```\r\n> \r\n> PS: Once we finish #5364, this error will no longer be a problem.\r\n\r\nThanks, wish #5364 finish early"
] | 2023-05-15T02:46:04 | 2023-06-05T09:11:19 | 2023-06-05T09:11:18 | NONE | null | null | null | ### Describe the bug
When trying to load natural_questions with datasets == 2.12.0 on Python 3.8.9:
```python
import datasets
datasets.load_dataset('natural_questions',beam_runner='DirectRunner')
```
It fails with the following error:
`pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs`
### Steps to reproduce the bug
In python console:
```python
import datasets
datasets.load_dataset('natural_questions',beam_runner='DirectRunner')
```
Then the trace is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/builder.py", line 2019, in _download_and_prepare
num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter))
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/arrow_writer.py", line 694, in finalize
shard_num_bytes, _ = parquet_to_arrow(source, destination)
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/arrow_writer.py", line 737, in parquet_to_arrow
for record_batch in parquet_file.iter_batches():
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
```
### Expected behavior
The `natural_questions` dataset loads without error.
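As a workaround, the comments above suggest loading the preprocessed version from the Hub, which skips the Apache Beam preparation step entirely; a minimal sketch:
```python
import datasets

# Loads the already-processed natural_questions data from the Hub,
# avoiding the local Beam build that raises the pyarrow error above.
ds = datasets.load_dataset("natural_questions")
print(ds)
```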
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.9
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.1
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5856/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5855/comments | https://api.github.com/repos/huggingface/datasets/issues/5855/events | https://github.com/huggingface/datasets/issues/5855 | 1,708,784,943 | I_kwDODunzps5l2f0v | 5,855 | `to_tf_dataset` consumes too much memory | {
"login": "massquantity",
"id": 28751760,
"node_id": "MDQ6VXNlcjI4NzUxNzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/28751760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/massquantity",
"html_url": "https://github.com/massquantity",
"followers_url": "https://api.github.com/users/massquantity/followers",
"following_url": "https://api.github.com/users/massquantity/following{/other_user}",
"gists_url": "https://api.github.com/users/massquantity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/massquantity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/massquantity/subscriptions",
"organizations_url": "https://api.github.com/users/massquantity/orgs",
"repos_url": "https://api.github.com/users/massquantity/repos",
"events_url": "https://api.github.com/users/massquantity/events{/privacy}",
"received_events_url": "https://api.github.com/users/massquantity/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Cc @amyeroberts @Rocketknight1 \r\n\r\nIndded I think it's because it does something like this under the hood when there's no multiprocessing:\r\n\r\n```python\r\ntf_dataset = tf_dataset.shuffle(len(dataset))\r\n```\r\n\r\nPS: with multiprocessing it appears to be different:\r\n\r\n```python\r\nindices = np.arange(len(dataset))\r\nif shuffle:\r\n np.random.shuffle(indices)\r\n```",
"Hi @massquantity, the dataset being shuffled there is not the full dataset. If you look at [the line above](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/tf_utils.py#L182), the dataset is actually just a single indices array at that point, and that array is the only thing that gets fully loaded into memory and shuffled. We then load samples from the dataset by applying a transform function to the shuffled dataset, which fetches samples based on the indices it receives.\r\n\r\nIf your dataset is **really** gigantic, then this index tensor might be a memory issue, but since it's just an int64 tensor it will only use 1GB of memory per 125 million samples.\r\n\r\nStill, if you're encountering memory issues, there might be another cause here - can you share some code to reproduce the error, or does it depend on some internal/proprietary dataset?",
"Hi @Rocketknight1, you're right and I also noticed that only indices are used in shuffling. My data has shape (50000000, 10), but really the problem doesn't relate to a specific dataset. Simply running the following code costs me 10GB of memory.\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for i in range(50000000):\r\n yield {\"data\": i}\r\n\r\nds = Dataset.from_generator(gen, cache_dir=\"./huggingface\")\r\n\r\ntf_ds = ds.to_tf_dataset(\r\n batch_size=1,\r\n shuffle=True,\r\n drop_remainder=False,\r\n prefetch=True,\r\n)\r\ntf_ds = iter(tf_ds)\r\nnext(tf_ds)\r\n# {'data': <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>}\r\n```\r\n\r\nI just realized maybe it was an issue from tensorflow (I'm using tf 2.12). So I tried the following code, and it used 10GB of memory too.\r\n```python\r\nimport numpy as np\r\nimport tensorflow as tf\r\n\r\ndata_size = 50000000\r\ntf_dataset = tf.data.Dataset.from_tensor_slices(np.arange(data_size))\r\ntf_dataset = iter(tf_dataset.shuffle(data_size))\r\nnext(tf_dataset)\r\n# <tf.Tensor: shape=(), dtype=int64, numpy=24774043>\r\n```\r\n\r\nBy the way, as @lhoestq mentioned, multiprocessing uses numpy shuffling, and it uses less than 1 GB of memory:\r\n```python\r\ntf_ds_mp = ds.to_tf_dataset(\r\n batch_size=1,\r\n shuffle=True,\r\n drop_remainder=False,\r\n prefetch=True,\r\n num_workers=2,\r\n)\r\n```",
"Thanks for that reproduction script - I've confirmed the same issue is occurring for me. Investigating it now!",
"Update: The memory usage is occurring in creation of the index and shuffle buffer. You can reproduce it very simply with:\r\n\r\n```python\r\nimport tensorflow as tf\r\nindices = tf.range(50_000_000, dtype=tf.int64)\r\ndataset = tf.data.Dataset.from_tensor_slices(indices)\r\ndataset = dataset.shuffle(len(dataset))\r\nprint(next(iter(dataset))\r\n```\r\nWhen I wrote this code I thought `tf.data` had an optimization for shuffling an entire tensor that wouldn't create the entire shuffle buffer, but evidently it's just creating the enormous buffer in memory. I'll see if I can find a more efficient way to do this - we might end up moving everything to the `numpy` multiprocessing path to avoid it.",
"I opened a PR to fix this - will continue the discussion there!"
] | 2023-05-14T01:22:29 | 2023-06-08T16:32:52 | 2023-06-08T16:32:52 | NONE | null | null | null | ### Describe the bug
Hi, I'm using `to_tf_dataset` to convert a _large_ dataset to `tf.data.Dataset`. I observed that the data loading *before* training took a lot of time and memory, even with `batch_size=1`.
After some digging, I believe the reason lies in the shuffle behavior. The [source code](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/tf_utils.py#L185) uses `len(dataset)` as the `buffer_size`, which may load all the data into memory, and the [tf.data doc](https://www.tensorflow.org/guide/data#randomly_shuffling_input_data) also states that "While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill".
### Steps to reproduce the bug
```python
from datasets import Dataset
def gen(): # some large data
for i in range(50000000):
yield {"data": i}
ds = Dataset.from_generator(gen, cache_dir="./huggingface")
tf_ds = ds.to_tf_dataset(
batch_size=64,
shuffle=False, # no shuffle
drop_remainder=False,
prefetch=True,
)
# fast and memory friendly 🤗
for batch in tf_ds:
...
tf_ds_shuffle = ds.to_tf_dataset(
batch_size=64,
shuffle=True,
drop_remainder=False,
prefetch=True,
)
# slow and memory hungry for simple iteration 😱
for batch in tf_ds_shuffle:
...
```
### Expected behavior
Shuffling should not load all the data into memory. Would adding a `buffer_size` parameter to the `to_tf_dataset` API alleviate the problem?
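For comparison, this is roughly what a bounded shuffle buffer looks like in plain `tf.data`; the `buffer_size=10_000` here is an arbitrary illustrative value, not an existing `to_tf_dataset` option:
```python
import numpy as np
import tensorflow as tf

data_size = 50_000_000
tf_dataset = tf.data.Dataset.from_tensor_slices(np.arange(data_size))

# A bounded buffer shuffles less thoroughly but keeps memory flat,
# unlike shuffle(data_size), which materializes the whole index in RAM.
tf_dataset = tf_dataset.shuffle(buffer_size=10_000)
print(next(iter(tf_dataset)))
```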
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.17.1-051701-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5855/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5854 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5854/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5854/comments | https://api.github.com/repos/huggingface/datasets/issues/5854/events | https://github.com/huggingface/datasets/issues/5854 | 1,708,779,300 | I_kwDODunzps5l2eck | 5,854 | Can not load audiofolder dataset on kaggle | {
"login": "ILG2021",
"id": 93691919,
"node_id": "U_kgDOBZWgDw",
"avatar_url": "https://avatars.githubusercontent.com/u/93691919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ILG2021",
"html_url": "https://github.com/ILG2021",
"followers_url": "https://api.github.com/users/ILG2021/followers",
"following_url": "https://api.github.com/users/ILG2021/following{/other_user}",
"gists_url": "https://api.github.com/users/ILG2021/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ILG2021/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ILG2021/subscriptions",
"organizations_url": "https://api.github.com/users/ILG2021/orgs",
"repos_url": "https://api.github.com/users/ILG2021/repos",
"events_url": "https://api.github.com/users/ILG2021/events{/privacy}",
"received_events_url": "https://api.github.com/users/ILG2021/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! `audiofolder` requires `datasets>=2.5.0`, so please update the `datasets`' installation (`pip install -U datasets`) in the environment to resolve the issue.",
"> Hi! `audiofolder` requires `datasets>=2.5.0`, so please update the `datasets`' installation (`pip install -U datasets`) in the environment to resolve the issue.\r\n\r\nI don't think it is a problem of the version. It runs ok on colab or local machine. Only on kaggle will has this bug.",
"Based on your dataset info, the installed version is `2.1.0`, which does not include `audiofolder`.\r\n\r\nBy default, Kaggle preinstalls `datasets` into a new env, but the version it installs is outdated and does not contain newer features such as `audiofolder`"
] | 2023-05-14T00:50:47 | 2023-07-21T13:53:45 | 2023-07-21T13:53:45 | NONE | null | null | null | ### Describe the bug
It's crash log:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/audiofolder/audiofolder.py or any data file in the same directory. Couldn't find 'audiofolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/audiofolder/audiofolder.py
### Steps to reproduce the bug
![image](https://github.com/huggingface/datasets/assets/93691919/a2829d27-d15c-4acc-86fb-d1987c760468)
common_voice = load_dataset("audiofolder", data_dir="/kaggle/working/data")
### Expected behavior
The dataset should load without error. It works OK on Colab, but on Kaggle this error occurs.
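As noted in the comments above, upgrading the preinstalled `datasets` (Kaggle ships 2.1.0, which predates `audiofolder`) should resolve this; a minimal sketch:
```python
# Run first in a Kaggle notebook cell:
# !pip install -U "datasets>=2.5.0"

from datasets import load_dataset

common_voice = load_dataset("audiofolder", data_dir="/kaggle/working/data")
```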
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.31
- Python version: 3.10.10
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5854/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5853/comments | https://api.github.com/repos/huggingface/datasets/issues/5853/events | https://github.com/huggingface/datasets/pull/5853 | 1,708,092,786 | PR_kwDODunzps5QaZLP | 5,853 | [docs] Redirects, migrated from nginx | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mishig25 note that it's not exactly the same behavior as in nginx as here it interacts a bit with the `version` and the `language`\r\n\r\nShould be close enough, though.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007212 / 0.011353 (-0.004141) | 0.005125 / 0.011008 (-0.005883) | 0.098460 / 0.038508 (0.059952) | 0.034040 / 0.023109 (0.010931) | 0.320203 / 0.275898 (0.044305) | 0.357787 / 0.323480 (0.034307) | 0.006000 / 0.007986 (-0.001986) | 0.005644 / 0.004328 (0.001316) | 0.072654 / 0.004250 (0.068403) | 0.049393 / 0.037052 (0.012341) | 0.345686 / 0.258489 (0.087196) | 0.362345 / 0.293841 (0.068504) | 0.036597 / 0.128546 (-0.091949) | 0.012303 / 0.075646 (-0.063343) | 0.334374 / 0.419271 (-0.084897) | 0.062010 / 0.043533 (0.018477) | 0.312547 / 0.255139 (0.057408) | 0.336021 / 0.283200 (0.052821) | 0.112304 / 0.141683 (-0.029378) | 1.446706 / 1.452155 (-0.005449) | 1.523256 / 1.492716 (0.030540) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217658 / 0.018006 (0.199652) | 0.449208 / 0.000490 (0.448718) | 0.002878 / 0.000200 (0.002679) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025735 / 0.037411 (-0.011676) | 0.105876 / 0.014526 (0.091350) | 0.114887 / 0.176557 (-0.061669) | 0.170984 / 0.737135 (-0.566152) | 0.121420 / 0.296338 (-0.174918) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419670 / 0.215209 (0.204461) | 4.189453 / 2.077655 (2.111798) | 1.938236 / 1.504120 (0.434116) | 1.769747 / 1.541195 (0.228553) | 1.910919 / 1.468490 
(0.442429) | 0.705046 / 4.584777 (-3.879730) | 3.783774 / 3.745712 (0.038062) | 2.096504 / 5.269862 (-3.173358) | 1.339265 / 4.565676 (-3.226412) | 0.086670 / 0.424275 (-0.337605) | 0.012243 / 0.007607 (0.004636) | 0.524701 / 0.226044 (0.298657) | 5.240689 / 2.268929 (2.971760) | 2.473622 / 55.444624 (-52.971003) | 2.170568 / 6.876477 (-4.705909) | 2.289653 / 2.142072 (0.147581) | 0.848913 / 4.805227 (-3.956314) | 0.168332 / 6.500664 (-6.332332) | 0.064926 / 0.075469 (-0.010543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193614 / 1.841788 (-0.648173) | 14.920403 / 8.074308 (6.846095) | 14.475059 / 10.191392 (4.283667) | 0.164458 / 0.680424 (-0.515966) | 0.017613 / 0.534201 (-0.516588) | 0.426311 / 0.579283 (-0.152972) | 0.431478 / 0.434364 (-0.002886) | 0.520280 / 0.540337 (-0.020057) | 0.627738 / 1.386936 (-0.759198) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007458 / 0.011353 (-0.003895) | 0.005363 / 0.011008 (-0.005645) | 0.076713 / 0.038508 (0.038205) | 0.034189 / 0.023109 (0.011079) | 0.359938 / 0.275898 (0.084040) | 0.395532 / 0.323480 (0.072052) | 0.005977 / 0.007986 (-0.002008) | 0.004263 / 0.004328 (-0.000065) | 0.075971 / 0.004250 (0.071721) | 0.051924 / 0.037052 (0.014871) | 0.362818 / 0.258489 (0.104329) | 0.409897 / 0.293841 (0.116056) | 0.035494 / 0.128546 (-0.093053) | 0.012399 / 0.075646 (-0.063247) | 0.088335 / 0.419271 (-0.330937) | 0.047968 / 0.043533 (0.004435) | 0.355744 / 0.255139 (0.100606) | 0.376339 / 0.283200 (0.093139) | 0.104542 / 0.141683 (-0.037141) | 1.464826 / 1.452155 (0.012672) | 1.600665 / 1.492716 (0.107948) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220841 / 0.018006 (0.202834) | 0.446444 / 0.000490 (0.445954) | 0.000392 / 0.000200 (0.000192) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029402 / 0.037411 (-0.008009) | 0.116511 / 0.014526 (0.101986) | 0.122959 / 0.176557 (-0.053598) | 0.171674 / 0.737135 (-0.565462) | 0.129871 / 0.296338 (-0.166468) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450411 / 0.215209 (0.235202) | 4.471859 / 2.077655 (2.394205) | 2.229439 / 1.504120 (0.725319) | 2.053308 / 1.541195 (0.512114) | 2.142476 / 1.468490 (0.673986) | 0.708299 / 4.584777 (-3.876478) | 3.797830 / 3.745712 (0.052118) | 2.142509 / 5.269862 (-3.127352) | 1.333357 / 4.565676 (-3.232320) | 0.086837 / 0.424275 (-0.337439) | 0.012102 / 0.007607 (0.004495) | 0.548428 / 0.226044 (0.322384) | 5.490611 / 2.268929 (3.221682) | 2.713882 / 55.444624 (-52.730742) | 2.399638 / 6.876477 (-4.476839) | 2.481549 / 2.142072 (0.339477) | 0.839812 / 4.805227 (-3.965415) | 0.168890 / 6.500664 (-6.331774) | 0.065564 / 0.075469 (-0.009906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275507 / 1.841788 (-0.566281) | 14.896343 / 8.074308 (6.822035) | 13.159701 / 10.191392 (2.968309) | 0.172065 / 0.680424 (-0.508359) | 0.017507 / 0.534201 (-0.516694) | 0.420031 / 0.579283 (-0.159252) | 0.438835 / 0.434364 (0.004471) | 0.490597 / 0.540337 (-0.049741) | 0.583952 / 1.386936 (-0.802984) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#48c9755d0ae9abe4c4d6cd8c1ce76eff849f0e5c \"CML watermark\")\n"
] | 2023-05-12T19:19:27 | 2023-05-15T10:37:19 | 2023-05-15T10:30:14 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5853",
"html_url": "https://github.com/huggingface/datasets/pull/5853",
"diff_url": "https://github.com/huggingface/datasets/pull/5853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5853.patch",
"merged_at": "2023-05-15T10:30:14"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5853/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5852/comments | https://api.github.com/repos/huggingface/datasets/issues/5852/events | https://github.com/huggingface/datasets/pull/5852 | 1,707,927,165 | PR_kwDODunzps5QZ1lj | 5,852 | Iterable torch formatting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006567 / 0.011353 (-0.004786) | 0.004479 / 0.011008 (-0.006530) | 0.028286 / 0.038508 (-0.010222) | 0.033137 / 0.023109 (0.010028) | 0.305249 / 0.275898 (0.029351) | 0.330306 / 0.323480 (0.006826) | 0.003747 / 0.007986 (-0.004238) | 0.004409 / 0.004328 (0.000081) | 0.004742 / 0.004250 (0.000491) | 0.040780 / 0.037052 (0.003728) | 0.302879 / 0.258489 (0.044390) | 0.346880 / 0.293841 (0.053039) | 0.032908 / 0.128546 (-0.095638) | 0.010617 / 0.075646 (-0.065029) | 0.257996 / 0.419271 (-0.161275) | 0.051044 / 0.043533 (0.007511) | 0.306113 / 0.255139 (0.050974) | 0.324444 / 0.283200 (0.041244) | 0.100820 / 0.141683 (-0.040863) | 1.478402 / 1.452155 (0.026248) | 1.599398 / 1.492716 (0.106682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216540 / 0.018006 (0.198534) | 0.433480 / 0.000490 (0.432991) | 0.004032 / 0.000200 (0.003832) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027807 / 0.037411 (-0.009604) | 0.107225 / 0.014526 (0.092699) | 0.120157 / 0.176557 (-0.056400) | 0.174130 / 0.737135 (-0.563005) | 0.128902 / 0.296338 (-0.167437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395996 / 0.215209 (0.180787) | 3.936254 / 2.077655 (1.858599) | 1.808864 / 1.504120 (0.304744) | 1.608935 / 1.541195 (0.067741) | 1.646427 / 1.468490 
(0.177937) | 0.716026 / 4.584777 (-3.868751) | 3.815045 / 3.745712 (0.069333) | 2.271534 / 5.269862 (-2.998327) | 1.548728 / 4.565676 (-3.016948) | 0.076743 / 0.424275 (-0.347532) | 0.011575 / 0.007607 (0.003968) | 0.499202 / 0.226044 (0.273158) | 4.983754 / 2.268929 (2.714825) | 2.239319 / 55.444624 (-53.205306) | 1.919427 / 6.876477 (-4.957050) | 2.019664 / 2.142072 (-0.122408) | 0.866318 / 4.805227 (-3.938910) | 0.157309 / 6.500664 (-6.343355) | 0.063341 / 0.075469 (-0.012128) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180817 / 1.841788 (-0.660971) | 14.579869 / 8.074308 (6.505561) | 14.277848 / 10.191392 (4.086456) | 0.182560 / 0.680424 (-0.497863) | 0.017402 / 0.534201 (-0.516799) | 0.411549 / 0.579283 (-0.167734) | 0.432938 / 0.434364 (-0.001426) | 0.545067 / 0.540337 (0.004730) | 0.642173 / 1.386936 (-0.744763) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006753 / 0.011353 (-0.004600) | 0.004590 / 0.011008 (-0.006418) | 0.006111 / 0.038508 (-0.032397) | 0.032763 / 0.023109 (0.009654) | 0.401001 / 0.275898 (0.125103) | 0.428063 / 0.323480 (0.104583) | 0.003730 / 0.007986 (-0.004255) | 0.004617 / 0.004328 (0.000289) | 0.004770 / 0.004250 (0.000519) | 0.049718 / 0.037052 (0.012666) | 0.399724 / 0.258489 (0.141235) | 0.440292 / 0.293841 (0.146451) | 0.032846 / 0.128546 (-0.095700) | 0.010842 / 0.075646 (-0.064804) | 0.012642 / 0.419271 (-0.406630) | 0.046043 / 0.043533 (0.002510) | 0.390862 / 0.255139 (0.135723) | 0.407027 / 0.283200 (0.123828) | 0.099349 / 0.141683 (-0.042334) | 1.455739 / 1.452155 (0.003584) | 1.572214 / 1.492716 (0.079497) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227186 / 0.018006 (0.209180) | 0.447404 / 0.000490 (0.446914) | 0.000400 / 0.000200 (0.000200) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029830 / 0.037411 (-0.007581) | 0.112365 / 0.014526 (0.097839) | 0.125736 / 0.176557 (-0.050821) | 0.174781 / 0.737135 (-0.562354) | 0.129439 / 0.296338 (-0.166900) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444438 / 0.215209 (0.229229) | 4.459381 / 2.077655 (2.381726) | 2.264541 / 1.504120 (0.760421) | 2.075257 / 1.541195 (0.534062) | 2.181289 / 1.468490 (0.712799) | 0.725279 / 4.584777 (-3.859498) | 3.863253 / 3.745712 (0.117541) | 2.132498 / 5.269862 (-3.137364) | 1.402003 / 4.565676 (-3.163673) | 0.084268 / 0.424275 (-0.340007) | 0.011762 / 0.007607 (0.004155) | 0.556239 / 0.226044 (0.330194) | 5.617998 / 2.268929 (3.349070) | 2.754789 / 55.444624 (-52.689835) | 2.418418 / 6.876477 (-4.458059) | 2.479696 / 2.142072 (0.337624) | 0.870037 / 4.805227 (-3.935190) | 0.160480 / 6.500664 (-6.340184) | 0.064464 / 0.075469 (-0.011005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290916 / 1.841788 (-0.550872) | 14.783173 / 8.074308 (6.708865) | 13.355883 / 10.191392 (3.164491) | 0.169963 / 0.680424 (-0.510461) | 0.017657 / 0.534201 (-0.516544) | 0.409218 / 0.579283 (-0.170065) | 0.422942 / 0.434364 (-0.011422) | 0.494968 / 0.540337 (-0.045369) | 0.587044 / 1.386936 (-0.799892) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2051e912d9525bc38a1caf295df0620619c488eb \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007183 / 0.011353 (-0.004169) | 0.004586 / 0.011008 (-0.006423) | 0.032668 / 0.038508 (-0.005840) | 0.040896 / 0.023109 (0.017787) | 0.358225 / 0.275898 (0.082327) | 0.395063 / 0.323480 (0.071583) | 0.004540 / 0.007986 (-0.003446) | 0.003849 / 0.004328 (-0.000480) | 0.005521 / 0.004250 (0.001271) | 0.053314 / 0.037052 (0.016262) | 0.362417 / 0.258489 (0.103928) | 0.414337 / 0.293841 (0.120496) | 0.030698 / 0.128546 (-0.097849) | 0.008823 / 0.075646 (-0.066823) | 0.303583 / 0.419271 (-0.115689) | 0.060277 / 0.043533 (0.016744) | 0.365938 / 0.255139 (0.110799) | 0.379554 / 0.283200 (0.096354) | 0.122545 / 0.141683 (-0.019138) | 1.712098 / 1.452155 (0.259943) | 1.802036 / 1.492716 (0.309319) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239508 / 0.018006 (0.221502) | 0.492194 / 0.000490 (0.491704) | 0.003280 / 0.000200 (0.003081) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033301 / 0.037411 (-0.004110) | 0.125851 / 0.014526 (0.111325) | 0.137757 / 0.176557 (-0.038799) | 0.207603 / 0.737135 (-0.529533) | 0.143507 / 0.296338 (-0.152831) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470662 / 0.215209 (0.255453) | 4.736017 / 2.077655 (2.658363) | 2.154152 / 1.504120 (0.650032) | 1.954243 / 1.541195 (0.413048) | 2.080186 / 1.468490 
(0.611696) | 0.622884 / 4.584777 (-3.961893) | 4.385885 / 3.745712 (0.640173) | 2.262085 / 5.269862 (-3.007776) | 1.454215 / 4.565676 (-3.111462) | 0.067342 / 0.424275 (-0.356933) | 0.012913 / 0.007607 (0.005306) | 0.600676 / 0.226044 (0.374631) | 5.915093 / 2.268929 (3.646164) | 2.664915 / 55.444624 (-52.779709) | 2.286986 / 6.876477 (-4.589490) | 2.387776 / 2.142072 (0.245704) | 0.757067 / 4.805227 (-4.048160) | 0.154625 / 6.500664 (-6.346039) | 0.074632 / 0.075469 (-0.000838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.413229 / 1.841788 (-0.428558) | 17.433012 / 8.074308 (9.358704) | 16.980340 / 10.191392 (6.788948) | 0.218943 / 0.680424 (-0.461481) | 0.020525 / 0.534201 (-0.513676) | 0.451847 / 0.579283 (-0.127436) | 0.495587 / 0.434364 (0.061223) | 0.548739 / 0.540337 (0.008402) | 0.662120 / 1.386936 (-0.724816) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006775 / 0.011353 (-0.004577) | 0.004556 / 0.011008 (-0.006452) | 0.006462 / 0.038508 (-0.032046) | 0.039073 / 0.023109 (0.015964) | 0.429249 / 0.275898 (0.153351) | 0.469946 / 0.323480 (0.146467) | 0.004402 / 0.007986 (-0.003584) | 0.003798 / 0.004328 (-0.000530) | 0.005347 / 0.004250 (0.001097) | 0.053743 / 0.037052 (0.016691) | 0.434635 / 0.258489 (0.176146) | 0.475661 / 0.293841 (0.181820) | 0.029891 / 0.128546 (-0.098656) | 0.009058 / 0.075646 (-0.066588) | 0.010987 / 0.419271 (-0.408284) | 0.053877 / 0.043533 (0.010344) | 0.434428 / 0.255139 (0.179289) | 0.449637 / 0.283200 (0.166437) | 0.124331 / 0.141683 (-0.017352) | 1.736083 / 1.452155 (0.283928) | 1.831632 / 1.492716 (0.338916) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248428 / 0.018006 (0.230422) | 0.493113 / 0.000490 (0.492623) | 0.000429 / 0.000200 (0.000229) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031337 / 0.037411 (-0.006074) | 0.132360 / 0.014526 (0.117834) | 0.134734 / 0.176557 (-0.041822) | 0.193811 / 0.737135 (-0.543324) | 0.146883 / 0.296338 (-0.149456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.510876 / 0.215209 (0.295666) | 5.170198 / 2.077655 (3.092543) | 2.572105 / 1.504120 (1.067985) | 2.316918 / 1.541195 (0.775723) | 2.449316 / 1.468490 (0.980826) | 0.612219 / 4.584777 (-3.972558) | 4.456740 / 3.745712 (0.711028) | 2.099757 / 5.269862 (-3.170105) | 1.293017 / 4.565676 (-3.272660) | 0.067922 / 0.424275 (-0.356353) | 0.013467 / 0.007607 (0.005860) | 0.634240 / 0.226044 (0.408196) | 6.373111 / 2.268929 (4.104182) | 3.171567 / 55.444624 (-52.273057) | 2.763411 / 6.876477 (-4.113066) | 2.845557 / 2.142072 (0.703485) | 0.763431 / 4.805227 (-4.041797) | 0.155949 / 6.500664 (-6.344715) | 0.076264 / 0.075469 (0.000795) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.468075 / 1.841788 (-0.373713) | 17.582354 / 8.074308 (9.508046) | 16.565964 / 10.191392 (6.374572) | 0.163779 / 0.680424 (-0.516644) | 0.020472 / 0.534201 (-0.513728) | 0.444416 / 0.579283 (-0.134867) | 0.488471 / 0.434364 (0.054107) | 0.550661 / 0.540337 (0.010323) | 0.667230 / 1.386936 (-0.719706) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3655cbf1c627c945e393641d35298a166f1e4bf5 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006160 / 0.011353 (-0.005193) | 0.004093 / 0.011008 (-0.006915) | 0.056485 / 0.038508 (0.017977) | 0.033637 / 0.023109 (0.010528) | 0.296448 / 0.275898 (0.020550) | 0.332532 / 0.323480 (0.009052) | 0.003864 / 0.007986 (-0.004122) | 0.003446 / 0.004328 (-0.000883) | 0.034808 / 0.004250 (0.030558) | 0.048567 / 0.037052 (0.011514) | 0.296090 / 0.258489 (0.037601) | 0.336067 / 0.293841 (0.042226) | 0.026081 / 0.128546 (-0.102465) | 0.007875 / 0.075646 (-0.067771) | 0.286049 / 0.419271 (-0.133222) | 0.050411 / 0.043533 (0.006878) | 0.297016 / 0.255139 (0.041877) | 0.320030 / 0.283200 (0.036830) | 0.110374 / 0.141683 (-0.031308) | 1.432470 / 1.452155 (-0.019684) | 1.492479 / 1.492716 (-0.000238) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262352 / 0.018006 (0.244346) | 0.557956 / 0.000490 (0.557467) | 0.010296 / 0.000200 (0.010096) | 0.000315 / 0.000054 (0.000260) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028801 / 0.037411 (-0.008611) | 0.109844 / 0.014526 (0.095318) | 0.122333 / 0.176557 (-0.054224) | 0.180571 / 0.737135 (-0.556564) | 0.125990 / 0.296338 (-0.170348) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401643 / 0.215209 (0.186434) | 4.020993 / 2.077655 (1.943338) | 1.815256 / 1.504120 (0.311136) | 1.619579 / 1.541195 (0.078384) | 1.708889 / 1.468490 
(0.240398) | 0.537847 / 4.584777 (-4.046930) | 3.743331 / 3.745712 (-0.002381) | 1.779891 / 5.269862 (-3.489970) | 1.021423 / 4.565676 (-3.544253) | 0.058869 / 0.424275 (-0.365406) | 0.011826 / 0.007607 (0.004218) | 0.499665 / 0.226044 (0.273621) | 4.980928 / 2.268929 (2.712000) | 2.285664 / 55.444624 (-53.158960) | 1.936553 / 6.876477 (-4.939923) | 2.090428 / 2.142072 (-0.051645) | 0.655218 / 4.805227 (-4.150009) | 0.133178 / 6.500664 (-6.367486) | 0.062991 / 0.075469 (-0.012478) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.168895 / 1.841788 (-0.672892) | 14.656773 / 8.074308 (6.582465) | 13.737921 / 10.191392 (3.546529) | 0.145383 / 0.680424 (-0.535041) | 0.017614 / 0.534201 (-0.516587) | 0.386499 / 0.579283 (-0.192784) | 0.425626 / 0.434364 (-0.008738) | 0.389572 / 0.540337 (-0.150766) | 0.386753 / 1.386936 (-1.000183) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005998 / 0.011353 (-0.005355) | 0.004265 / 0.011008 (-0.006743) | 0.034743 / 0.038508 (-0.003766) | 0.033929 / 0.023109 (0.010820) | 0.405535 / 0.275898 (0.129636) | 0.407235 / 0.323480 (0.083755) | 0.003972 / 0.007986 (-0.004013) | 0.003616 / 0.004328 (-0.000712) | 0.035278 / 0.004250 (0.031027) | 0.052990 / 0.037052 (0.015937) | 0.405228 / 0.258489 (0.146739) | 0.415007 / 0.293841 (0.121166) | 0.025951 / 0.128546 (-0.102595) | 0.007990 / 0.075646 (-0.067656) | 0.040492 / 0.419271 (-0.378779) | 0.049123 / 0.043533 (0.005591) | 0.399282 / 0.255139 (0.144143) | 0.384303 / 0.283200 (0.101103) | 0.115234 / 0.141683 (-0.026448) | 1.476904 / 1.452155 (0.024749) | 1.627191 / 1.492716 (0.134475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209211 / 0.018006 (0.191205) | 0.566718 / 0.000490 (0.566228) | 0.002094 / 0.000200 (0.001894) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030885 / 0.037411 (-0.006526) | 0.110777 / 0.014526 (0.096251) | 0.124382 / 0.176557 (-0.052174) | 0.175081 / 0.737135 (-0.562054) | 0.130263 / 0.296338 (-0.166075) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448091 / 0.215209 (0.232882) | 4.484404 / 2.077655 (2.406749) | 2.278438 / 1.504120 (0.774318) | 2.087933 / 1.541195 (0.546738) | 2.186709 / 1.468490 (0.718219) | 0.534822 / 4.584777 (-4.049955) | 3.778229 / 3.745712 (0.032517) | 3.312334 / 5.269862 (-1.957528) | 1.557209 / 4.565676 (-3.008467) | 0.058923 / 0.424275 (-0.365352) | 0.011350 / 0.007607 (0.003743) | 0.550470 / 0.226044 (0.324426) | 5.480347 / 2.268929 (3.211419) | 2.781709 / 55.444624 (-52.662915) | 2.478729 / 6.876477 (-4.397748) | 2.492001 / 2.142072 (0.349929) | 0.652649 / 4.805227 (-4.152578) | 0.131334 / 6.500664 (-6.369330) | 0.065619 / 0.075469 (-0.009850) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253998 / 1.841788 (-0.587790) | 15.207433 / 8.074308 (7.133124) | 14.627842 / 10.191392 (4.436450) | 0.146947 / 0.680424 (-0.533477) | 0.017533 / 0.534201 (-0.516668) | 0.391627 / 0.579283 (-0.187656) | 0.431113 / 0.434364 (-0.003251) | 0.413886 / 0.540337 (-0.126451) | 0.414483 / 1.386936 (-0.972453) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3f4e98701590a4922050051eb0f4d63e6125723d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007741 / 0.011353 (-0.003612) | 0.004584 / 0.011008 (-0.006424) | 0.067869 / 0.038508 (0.029361) | 0.041612 / 0.023109 (0.018503) | 0.377878 / 0.275898 (0.101980) | 0.421633 / 0.323480 (0.098153) | 0.004614 / 0.007986 (-0.003371) | 0.003824 / 0.004328 (-0.000504) | 0.041479 / 0.004250 (0.037229) | 0.053309 / 0.037052 (0.016256) | 0.390147 / 0.258489 (0.131658) | 0.437706 / 0.293841 (0.143865) | 0.035951 / 0.128546 (-0.092595) | 0.009231 / 0.075646 (-0.066415) | 0.357572 / 0.419271 (-0.061699) | 0.081332 / 0.043533 (0.037799) | 0.370076 / 0.255139 (0.114937) | 0.423653 / 0.283200 (0.140453) | 0.141401 / 0.141683 (-0.000282) | 1.722744 / 1.452155 (0.270589) | 1.914668 / 1.492716 (0.421952) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256568 / 0.018006 (0.238562) | 0.512243 / 0.000490 (0.511753) | 0.019913 / 0.000200 (0.019713) | 0.000136 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031742 / 0.037411 (-0.005670) | 0.128537 / 0.014526 (0.114011) | 0.139962 / 0.176557 (-0.036594) | 0.210711 / 0.737135 (-0.526424) | 0.147162 / 0.296338 (-0.149177) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.509518 / 0.215209 (0.294309) | 5.083788 / 2.077655 (3.006134) | 2.455381 / 1.504120 (0.951262) | 2.208078 / 1.541195 (0.666883) | 2.341807 / 1.468490 
(0.873317) | 0.580014 / 4.584777 (-4.004763) | 4.599492 / 3.745712 (0.853780) | 2.403249 / 5.269862 (-2.866612) | 1.559177 / 4.565676 (-3.006500) | 0.072846 / 0.424275 (-0.351429) | 0.017327 / 0.007607 (0.009720) | 0.627747 / 0.226044 (0.401703) | 6.242586 / 2.268929 (3.973657) | 2.982875 / 55.444624 (-52.461750) | 2.588645 / 6.876477 (-4.287832) | 2.765915 / 2.142072 (0.623843) | 0.720455 / 4.805227 (-4.084772) | 0.157474 / 6.500664 (-6.343190) | 0.074295 / 0.075469 (-0.001174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.540799 / 1.841788 (-0.300988) | 18.054632 / 8.074308 (9.980324) | 16.544036 / 10.191392 (6.352644) | 0.201423 / 0.680424 (-0.479001) | 0.020497 / 0.534201 (-0.513704) | 0.496275 / 0.579283 (-0.083008) | 0.547380 / 0.434364 (0.113017) | 0.614605 / 0.540337 (0.074267) | 0.749889 / 1.386936 (-0.637047) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006963 / 0.011353 (-0.004389) | 0.004543 / 0.011008 (-0.006465) | 0.039530 / 0.038508 (0.001022) | 0.038420 / 0.023109 (0.015311) | 0.454885 / 0.275898 (0.178987) | 0.491731 / 0.323480 (0.168251) | 0.004211 / 0.007986 (-0.003775) | 0.003673 / 0.004328 (-0.000655) | 0.038735 / 0.004250 (0.034484) | 0.052085 / 0.037052 (0.015032) | 0.448924 / 0.258489 (0.190435) | 0.499254 / 0.293841 (0.205413) | 0.030069 / 0.128546 (-0.098477) | 0.009082 / 0.075646 (-0.066565) | 0.047181 / 0.419271 (-0.372090) | 0.054758 / 0.043533 (0.011225) | 0.445035 / 0.255139 (0.189896) | 0.475090 / 0.283200 (0.191891) | 0.122641 / 0.141683 (-0.019042) | 1.706514 / 1.452155 (0.254360) | 1.855726 / 1.492716 (0.363010) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246028 / 0.018006 (0.228022) | 0.486382 / 0.000490 (0.485892) | 0.003038 / 0.000200 (0.002838) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034298 / 0.037411 (-0.003113) | 0.135364 / 0.014526 (0.120838) | 0.146102 / 0.176557 (-0.030455) | 0.207997 / 0.737135 (-0.529139) | 0.153119 / 0.296338 (-0.143219) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.528758 / 0.215209 (0.313549) | 5.243303 / 2.077655 (3.165648) | 2.617194 / 1.504120 (1.113074) | 2.400740 / 1.541195 (0.859545) | 2.534692 / 1.468490 (1.066202) | 0.585825 / 4.584777 (-3.998952) | 4.879766 / 3.745712 (1.134054) | 2.377419 / 5.269862 (-2.892443) | 1.460711 / 4.565676 (-3.104966) | 0.075572 / 0.424275 (-0.348703) | 0.013650 / 0.007607 (0.006042) | 0.697103 / 0.226044 (0.471058) | 6.444984 / 2.268929 (4.176055) | 3.227662 / 55.444624 (-52.216963) | 2.875163 / 6.876477 (-4.001314) | 2.860953 / 2.142072 (0.718881) | 0.718908 / 4.805227 (-4.086319) | 0.158005 / 6.500664 (-6.342659) | 0.077581 / 0.075469 (0.002112) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.653027 / 1.841788 (-0.188760) | 18.789342 / 8.074308 (10.715034) | 16.762678 / 10.191392 (6.571286) | 0.238920 / 0.680424 (-0.441504) | 0.020698 / 0.534201 (-0.513502) | 0.512634 / 0.579283 (-0.066649) | 0.542235 / 0.434364 (0.107871) | 0.626634 / 0.540337 (0.086297) | 0.753324 / 1.386936 (-0.633612) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f978ad8bec6e5e77868c6ffcc6f514354a03901d \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005737 / 0.011353 (-0.005616) | 0.003767 / 0.011008 (-0.007241) | 0.097792 / 0.038508 (0.059284) | 0.028466 / 0.023109 (0.005356) | 0.317703 / 0.275898 (0.041805) | 0.359512 / 0.323480 (0.036032) | 0.003428 / 0.007986 (-0.004558) | 0.002848 / 0.004328 (-0.001481) | 0.075668 / 0.004250 (0.071418) | 0.037165 / 0.037052 (0.000113) | 0.329539 / 0.258489 (0.071050) | 0.361365 / 0.293841 (0.067524) | 0.024777 / 0.128546 (-0.103769) | 0.008324 / 0.075646 (-0.067323) | 0.317346 / 0.419271 (-0.101926) | 0.043296 / 0.043533 (-0.000237) | 0.315318 / 0.255139 (0.060179) | 0.347641 / 0.283200 (0.064441) | 0.089551 / 0.141683 (-0.052132) | 1.506335 / 1.452155 (0.054180) | 1.573931 / 1.492716 (0.081215) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208041 / 0.018006 (0.190034) | 0.428198 / 0.000490 (0.427708) | 0.002568 / 0.000200 (0.002369) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023745 / 0.037411 (-0.013667) | 0.096256 / 0.014526 (0.081730) | 0.104917 / 0.176557 (-0.071639) | 0.164341 / 0.737135 (-0.572794) | 0.107972 / 0.296338 (-0.188367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453995 / 0.215209 (0.238786) | 4.546892 / 2.077655 (2.469238) | 2.185498 / 1.504120 (0.681378) | 1.989156 / 1.541195 (0.447962) | 2.053443 / 1.468490 
(0.584953) | 0.559940 / 4.584777 (-4.024837) | 3.420759 / 3.745712 (-0.324954) | 1.771528 / 5.269862 (-3.498333) | 1.139692 / 4.565676 (-3.425984) | 0.067686 / 0.424275 (-0.356589) | 0.011729 / 0.007607 (0.004122) | 0.558001 / 0.226044 (0.331957) | 5.583886 / 2.268929 (3.314957) | 2.678726 / 55.444624 (-52.765899) | 2.324127 / 6.876477 (-4.552350) | 2.472805 / 2.142072 (0.330733) | 0.663163 / 4.805227 (-4.142065) | 0.134892 / 6.500664 (-6.365772) | 0.066722 / 0.075469 (-0.008747) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195200 / 1.841788 (-0.646587) | 13.602517 / 8.074308 (5.528209) | 14.036344 / 10.191392 (3.844952) | 0.143759 / 0.680424 (-0.536665) | 0.017215 / 0.534201 (-0.516986) | 0.383749 / 0.579283 (-0.195534) | 0.388229 / 0.434364 (-0.046134) | 0.469366 / 0.540337 (-0.070971) | 0.560408 / 1.386936 (-0.826528) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005953 / 0.011353 (-0.005400) | 0.003840 / 0.011008 (-0.007168) | 0.077481 / 0.038508 (0.038973) | 0.028318 / 0.023109 (0.005209) | 0.403991 / 0.275898 (0.128093) | 0.433374 / 0.323480 (0.109894) | 0.003572 / 0.007986 (-0.004414) | 0.003033 / 0.004328 (-0.001295) | 0.075873 / 0.004250 (0.071623) | 0.039321 / 0.037052 (0.002269) | 0.416790 / 0.258489 (0.158301) | 0.459368 / 0.293841 (0.165527) | 0.025270 / 0.128546 (-0.103276) | 0.008574 / 0.075646 (-0.067072) | 0.083376 / 0.419271 (-0.335896) | 0.043206 / 0.043533 (-0.000327) | 0.404831 / 0.255139 (0.149692) | 0.418559 / 0.283200 (0.135360) | 0.099135 / 0.141683 (-0.042548) | 1.501315 / 1.452155 (0.049160) | 1.583912 / 1.492716 (0.091195) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241510 / 0.018006 (0.223504) | 0.410473 / 0.000490 (0.409983) | 0.001857 / 0.000200 (0.001657) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025366 / 0.037411 (-0.012045) | 0.103353 / 0.014526 (0.088828) | 0.107934 / 0.176557 (-0.068622) | 0.162388 / 0.737135 (-0.574747) | 0.113550 / 0.296338 (-0.182789) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463529 / 0.215209 (0.248320) | 4.657688 / 2.077655 (2.580034) | 2.455088 / 1.504120 (0.950968) | 2.304833 / 1.541195 (0.763638) | 2.317520 / 1.468490 (0.849029) | 0.563395 / 4.584777 (-4.021382) | 3.408489 / 3.745712 (-0.337223) | 2.636379 / 5.269862 (-2.633482) | 1.425355 / 4.565676 (-3.140322) | 0.068335 / 0.424275 (-0.355940) | 0.011713 / 0.007607 (0.004106) | 0.550230 / 0.226044 (0.324186) | 5.519843 / 2.268929 (3.250915) | 2.864986 / 55.444624 (-52.579639) | 2.604821 / 6.876477 (-4.271655) | 2.701501 / 2.142072 (0.559428) | 0.668193 / 4.805227 (-4.137034) | 0.134739 / 6.500664 (-6.365925) | 0.067110 / 0.075469 (-0.008359) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.326358 / 1.841788 (-0.515430) | 14.184172 / 8.074308 (6.109864) | 14.139245 / 10.191392 (3.947853) | 0.151881 / 0.680424 (-0.528542) | 0.016718 / 0.534201 (-0.517483) | 0.367035 / 0.579283 (-0.212248) | 0.393512 / 0.434364 (-0.040852) | 0.441261 / 0.540337 (-0.099076) | 0.533907 / 1.386936 (-0.853029) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#54098759d023f0b3e8eccd2dd98d46a1c6d19cce \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006275 / 0.011353 (-0.005078) | 0.003980 / 0.011008 (-0.007028) | 0.097617 / 0.038508 (0.059109) | 0.034089 / 0.023109 (0.010980) | 0.297381 / 0.275898 (0.021483) | 0.330106 / 0.323480 (0.006626) | 0.003838 / 0.007986 (-0.004148) | 0.004042 / 0.004328 (-0.000287) | 0.074305 / 0.004250 (0.070055) | 0.048318 / 0.037052 (0.011265) | 0.295585 / 0.258489 (0.037096) | 0.346924 / 0.293841 (0.053083) | 0.027397 / 0.128546 (-0.101150) | 0.008452 / 0.075646 (-0.067194) | 0.326837 / 0.419271 (-0.092435) | 0.049515 / 0.043533 (0.005982) | 0.303931 / 0.255139 (0.048792) | 0.317647 / 0.283200 (0.034447) | 0.098280 / 0.141683 (-0.043403) | 1.442603 / 1.452155 (-0.009552) | 1.524050 / 1.492716 (0.031334) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215095 / 0.018006 (0.197089) | 0.437662 / 0.000490 (0.437173) | 0.009771 / 0.000200 (0.009571) | 0.000401 / 0.000054 (0.000346) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027169 / 0.037411 (-0.010243) | 0.111383 / 0.014526 (0.096857) | 0.116163 / 0.176557 (-0.060394) | 0.173134 / 0.737135 (-0.564001) | 0.122376 / 0.296338 (-0.173962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398332 / 0.215209 (0.183123) | 3.974166 / 2.077655 (1.896511) | 1.793847 / 1.504120 (0.289727) | 1.615117 / 1.541195 (0.073922) | 1.660288 / 1.468490 
(0.191798) | 0.523833 / 4.584777 (-4.060944) | 3.704273 / 3.745712 (-0.041439) | 1.873308 / 5.269862 (-3.396554) | 1.203546 / 4.565676 (-3.362131) | 0.064949 / 0.424275 (-0.359326) | 0.011830 / 0.007607 (0.004223) | 0.497294 / 0.226044 (0.271250) | 4.948663 / 2.268929 (2.679735) | 2.233391 / 55.444624 (-53.211234) | 1.903208 / 6.876477 (-4.973269) | 2.067908 / 2.142072 (-0.074164) | 0.644256 / 4.805227 (-4.160971) | 0.142798 / 6.500664 (-6.357866) | 0.064734 / 0.075469 (-0.010735) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172313 / 1.841788 (-0.669475) | 14.665853 / 8.074308 (6.591545) | 13.147051 / 10.191392 (2.955659) | 0.139338 / 0.680424 (-0.541086) | 0.017452 / 0.534201 (-0.516749) | 0.395660 / 0.579283 (-0.183623) | 0.410138 / 0.434364 (-0.024226) | 0.460357 / 0.540337 (-0.079980) | 0.555670 / 1.386936 (-0.831266) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006247 / 0.011353 (-0.005106) | 0.004098 / 0.011008 (-0.006910) | 0.075050 / 0.038508 (0.036542) | 0.033232 / 0.023109 (0.010122) | 0.384139 / 0.275898 (0.108241) | 0.420865 / 0.323480 (0.097385) | 0.003889 / 0.007986 (-0.004096) | 0.003336 / 0.004328 (-0.000993) | 0.073837 / 0.004250 (0.069587) | 0.048775 / 0.037052 (0.011723) | 0.386373 / 0.258489 (0.127884) | 0.421718 / 0.293841 (0.127878) | 0.027553 / 0.128546 (-0.100993) | 0.008724 / 0.075646 (-0.066922) | 0.080970 / 0.419271 (-0.338302) | 0.045981 / 0.043533 (0.002448) | 0.364381 / 0.255139 (0.109242) | 0.391203 / 0.283200 (0.108004) | 0.101681 / 0.141683 (-0.040002) | 1.469533 / 1.452155 (0.017378) | 1.562016 / 1.492716 (0.069300) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222318 / 0.018006 (0.204312) | 0.441395 / 0.000490 (0.440905) | 0.000408 / 0.000200 (0.000208) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030291 / 0.037411 (-0.007120) | 0.114053 / 0.014526 (0.099527) | 0.123124 / 0.176557 (-0.053433) | 0.173474 / 0.737135 (-0.563661) | 0.129946 / 0.296338 (-0.166393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430342 / 0.215209 (0.215133) | 4.309782 / 2.077655 (2.232128) | 2.110668 / 1.504120 (0.606548) | 1.922881 / 1.541195 (0.381687) | 1.993562 / 1.468490 (0.525072) | 0.523682 / 4.584777 (-4.061095) | 3.774152 / 3.745712 (0.028440) | 3.354783 / 5.269862 (-1.915079) | 1.489793 / 4.565676 (-3.075884) | 0.065169 / 0.424275 (-0.359107) | 0.011626 / 0.007607 (0.004019) | 0.539126 / 0.226044 (0.313081) | 5.372593 / 2.268929 (3.103664) | 2.570652 / 55.444624 (-52.873973) | 2.253353 / 6.876477 (-4.623123) | 2.312876 / 2.142072 (0.170804) | 0.644241 / 4.805227 (-4.160986) | 0.138326 / 6.500664 (-6.362338) | 0.064491 / 0.075469 (-0.010979) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.344164 / 1.841788 (-0.497624) | 15.124679 / 8.074308 (7.050371) | 14.799310 / 10.191392 (4.607918) | 0.149054 / 0.680424 (-0.531370) | 0.017564 / 0.534201 (-0.516637) | 0.394593 / 0.579283 (-0.184690) | 0.428768 / 0.434364 (-0.005596) | 0.468235 / 0.540337 (-0.072103) | 0.557384 / 1.386936 (-0.829552) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a8bfac259e2b5047bb8a0cdcefc8357477ebf93c \"CML watermark\")\n",
"@albertvillanova could you take a look at this one ? It directly follows the arrow formatting PR",
"I added tests for the `__array__` case which lets you go from any tensor format to any other tensor format.\r\n\r\nI also properly deprecated format_type and added a warning message.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007838 / 0.011353 (-0.003515) | 0.005177 / 0.011008 (-0.005831) | 0.131058 / 0.038508 (0.092550) | 0.035959 / 0.023109 (0.012850) | 0.414071 / 0.275898 (0.138173) | 0.429628 / 0.323480 (0.106148) | 0.005151 / 0.007986 (-0.002834) | 0.003979 / 0.004328 (-0.000349) | 0.103209 / 0.004250 (0.098958) | 0.046200 / 0.037052 (0.009148) | 0.414020 / 0.258489 (0.155531) | 0.475748 / 0.293841 (0.181907) | 0.041031 / 0.128546 (-0.087515) | 0.014462 / 0.075646 (-0.061185) | 0.423706 / 0.419271 (0.004434) | 0.063488 / 0.043533 (0.019955) | 0.404937 / 0.255139 (0.149798) | 0.404973 / 0.283200 (0.121773) | 0.114982 / 0.141683 (-0.026701) | 1.911867 / 1.452155 (0.459713) | 1.925274 / 1.492716 (0.432557) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284656 / 0.018006 (0.266650) | 0.588329 / 0.000490 (0.587840) | 0.007092 / 0.000200 (0.006892) | 0.000143 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025136 / 0.037411 (-0.012275) | 0.109514 / 0.014526 (0.094988) | 0.117953 / 0.176557 (-0.058603) | 0.195454 / 0.737135 (-0.541682) | 0.134243 / 0.296338 (-0.162096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584045 / 0.215209 (0.368836) | 6.456922 / 2.077655 (4.379267) | 2.759728 / 1.504120 (1.255608) | 2.260913 / 1.541195 (0.719718) | 2.292535 / 1.468490 
(0.824045) | 0.906873 / 4.584777 (-3.677904) | 5.554455 / 3.745712 (1.808743) | 4.881557 / 5.269862 (-0.388305) | 2.509121 / 4.565676 (-2.056555) | 0.107191 / 0.424275 (-0.317084) | 0.014684 / 0.007607 (0.007077) | 0.761625 / 0.226044 (0.535580) | 7.582708 / 2.268929 (5.313780) | 3.150160 / 55.444624 (-52.294464) | 2.792284 / 6.876477 (-4.084193) | 2.881321 / 2.142072 (0.739248) | 1.108353 / 4.805227 (-3.696874) | 0.220129 / 6.500664 (-6.280535) | 0.075877 / 0.075469 (0.000408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.465743 / 1.841788 (-0.376045) | 17.679219 / 8.074308 (9.604911) | 18.929399 / 10.191392 (8.738007) | 0.219488 / 0.680424 (-0.460935) | 0.028435 / 0.534201 (-0.505766) | 0.512623 / 0.579283 (-0.066660) | 0.619983 / 0.434364 (0.185619) | 0.603430 / 0.540337 (0.063092) | 0.730416 / 1.386936 (-0.656520) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008285 / 0.011353 (-0.003068) | 0.005771 / 0.011008 (-0.005237) | 0.106444 / 0.038508 (0.067936) | 0.035078 / 0.023109 (0.011969) | 0.441198 / 0.275898 (0.165300) | 0.536279 / 0.323480 (0.212800) | 0.004561 / 0.007986 (-0.003424) | 0.006623 / 0.004328 (0.002294) | 0.102392 / 0.004250 (0.098142) | 0.051736 / 0.037052 (0.014684) | 0.479113 / 0.258489 (0.220624) | 0.535088 / 0.293841 (0.241247) | 0.041805 / 0.128546 (-0.086741) | 0.014031 / 0.075646 (-0.061615) | 0.115795 / 0.419271 (-0.303477) | 0.057913 / 0.043533 (0.014380) | 0.435847 / 0.255139 (0.180708) | 0.524831 / 0.283200 (0.241632) | 0.119419 / 0.141683 (-0.022263) | 1.835577 / 1.452155 (0.383423) | 1.936990 / 1.492716 (0.444273) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288422 / 0.018006 (0.270416) | 0.569776 / 0.000490 (0.569287) | 0.005652 / 0.000200 (0.005452) | 0.000139 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034632 / 0.037411 (-0.002779) | 0.136217 / 0.014526 (0.121691) | 0.139468 / 0.176557 (-0.037089) | 0.206804 / 0.737135 (-0.530331) | 0.148733 / 0.296338 (-0.147606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.667728 / 0.215209 (0.452518) | 6.548972 / 2.077655 (4.471317) | 3.051537 / 1.504120 (1.547417) | 2.581173 / 1.541195 (1.039978) | 2.653443 / 1.468490 (1.184953) | 0.906606 / 4.584777 (-3.678171) | 5.704384 / 3.745712 (1.958672) | 2.848618 / 5.269862 (-2.421244) | 1.821402 / 4.565676 (-2.744274) | 0.118018 / 0.424275 (-0.306257) | 0.014821 / 0.007607 (0.007214) | 0.821967 / 0.226044 (0.595923) | 8.165818 / 2.268929 (5.896889) | 3.744509 / 55.444624 (-51.700116) | 2.901097 / 6.876477 (-3.975380) | 3.018068 / 2.142072 (0.875996) | 1.106155 / 4.805227 (-3.699072) | 0.263118 / 6.500664 (-6.237546) | 0.088508 / 0.075469 (0.013039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.725860 / 1.841788 (-0.115928) | 19.411246 / 8.074308 (11.336938) | 20.807499 / 10.191392 (10.616107) | 0.238417 / 0.680424 (-0.442007) | 0.026550 / 0.534201 (-0.507651) | 0.500715 / 0.579283 (-0.078568) | 0.615547 / 0.434364 (0.181183) | 0.614361 / 0.540337 (0.074023) | 0.720365 / 1.386936 (-0.666571) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ae2e77f8344cdcc1c4c876f67936bec33087b19a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006640 / 0.011353 (-0.004713) | 0.004079 / 0.011008 (-0.006930) | 0.100555 / 0.038508 (0.062046) | 0.037318 / 0.023109 (0.014209) | 0.320050 / 0.275898 (0.044152) | 0.358860 / 0.323480 (0.035380) | 0.003828 / 0.007986 (-0.004158) | 0.003215 / 0.004328 (-0.001113) | 0.076577 / 0.004250 (0.072326) | 0.048080 / 0.037052 (0.011028) | 0.324759 / 0.258489 (0.066270) | 0.361862 / 0.293841 (0.068021) | 0.030759 / 0.128546 (-0.097787) | 0.008998 / 0.075646 (-0.066648) | 0.329105 / 0.419271 (-0.090167) | 0.051407 / 0.043533 (0.007875) | 0.311067 / 0.255139 (0.055928) | 0.334401 / 0.283200 (0.051201) | 0.098307 / 0.141683 (-0.043376) | 1.500931 / 1.452155 (0.048776) | 1.574646 / 1.492716 (0.081930) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219080 / 0.018006 (0.201073) | 0.447117 / 0.000490 (0.446627) | 0.009091 / 0.000200 (0.008891) | 0.000396 / 0.000054 (0.000341) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026048 / 0.037411 (-0.011363) | 0.112714 / 0.014526 (0.098188) | 0.116426 / 0.176557 (-0.060131) | 0.172187 / 0.737135 (-0.564948) | 0.121707 / 0.296338 (-0.174632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.358898 / 0.215209 (0.143689) | 3.589212 / 2.077655 (1.511557) | 1.677927 / 1.504120 (0.173807) | 1.515861 / 1.541195 (-0.025334) | 1.598479 / 1.468490 
(0.129989) | 0.478265 / 4.584777 (-4.106512) | 3.834982 / 3.745712 (0.089270) | 1.933815 / 5.269862 (-3.336047) | 1.122769 / 4.565676 (-3.442908) | 0.066984 / 0.424275 (-0.357291) | 0.011276 / 0.007607 (0.003669) | 0.512530 / 0.226044 (0.286486) | 5.112667 / 2.268929 (2.843739) | 2.266336 / 55.444624 (-53.178288) | 1.929671 / 6.876477 (-4.946806) | 2.127231 / 2.142072 (-0.014842) | 0.671307 / 4.805227 (-4.133920) | 0.143919 / 6.500664 (-6.356745) | 0.066086 / 0.075469 (-0.009383) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208767 / 1.841788 (-0.633021) | 15.008415 / 8.074308 (6.934106) | 14.085442 / 10.191392 (3.894050) | 0.184164 / 0.680424 (-0.496260) | 0.017619 / 0.534201 (-0.516582) | 0.394443 / 0.579283 (-0.184840) | 0.457653 / 0.434364 (0.023289) | 0.473169 / 0.540337 (-0.067169) | 0.571332 / 1.386936 (-0.815604) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007009 / 0.011353 (-0.004344) | 0.004330 / 0.011008 (-0.006678) | 0.077462 / 0.038508 (0.038954) | 0.034780 / 0.023109 (0.011671) | 0.395573 / 0.275898 (0.119675) | 0.425444 / 0.323480 (0.101964) | 0.004119 / 0.007986 (-0.003866) | 0.003597 / 0.004328 (-0.000731) | 0.075209 / 0.004250 (0.070958) | 0.050871 / 0.037052 (0.013819) | 0.402990 / 0.258489 (0.144500) | 0.445334 / 0.293841 (0.151493) | 0.032492 / 0.128546 (-0.096054) | 0.009066 / 0.075646 (-0.066581) | 0.083073 / 0.419271 (-0.336198) | 0.051661 / 0.043533 (0.008128) | 0.395207 / 0.255139 (0.140068) | 0.409556 / 0.283200 (0.126356) | 0.106035 / 0.141683 (-0.035648) | 1.506255 / 1.452155 (0.054101) | 1.598724 / 1.492716 (0.106008) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194733 / 0.018006 (0.176727) | 0.444920 / 0.000490 (0.444431) | 0.002402 / 0.000200 (0.002202) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030464 / 0.037411 (-0.006947) | 0.119153 / 0.014526 (0.104627) | 0.126081 / 0.176557 (-0.050476) | 0.179692 / 0.737135 (-0.557444) | 0.131834 / 0.296338 (-0.164504) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440153 / 0.215209 (0.224944) | 4.397504 / 2.077655 (2.319850) | 2.138320 / 1.504120 (0.634200) | 1.950596 / 1.541195 (0.409402) | 2.079792 / 1.468490 (0.611302) | 0.537606 / 4.584777 (-4.047171) | 3.689420 / 3.745712 (-0.056292) | 2.960732 / 5.269862 (-2.309129) | 1.585652 / 4.565676 (-2.980024) | 0.066102 / 0.424275 (-0.358173) | 0.011429 / 0.007607 (0.003821) | 0.537011 / 0.226044 (0.310967) | 5.342171 / 2.268929 (3.073242) | 2.624446 / 55.444624 (-52.820179) | 2.313311 / 6.876477 (-4.563166) | 2.389166 / 2.142072 (0.247094) | 0.657547 / 4.805227 (-4.147681) | 0.141640 / 6.500664 (-6.359025) | 0.066102 / 0.075469 (-0.009367) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.130471 / 1.841788 (-0.711317) | 14.824792 / 8.074308 (6.750484) | 13.436463 / 10.191392 (3.245071) | 0.155688 / 0.680424 (-0.524736) | 0.015811 / 0.534201 (-0.518390) | 0.355623 / 0.579283 (-0.223660) | 0.450604 / 0.434364 (0.016241) | 0.472542 / 0.540337 (-0.067796) | 0.563584 / 1.386936 (-0.823352) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#963ff6de6eae80a6de4aabf0092eb3dfbe43096e \"CML watermark\")\n"
] | 2023-05-12T16:48:49 | 2023-06-13T16:04:05 | 2023-06-13T15:57:05 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5852",
"html_url": "https://github.com/huggingface/datasets/pull/5852",
"diff_url": "https://github.com/huggingface/datasets/pull/5852.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5852.patch",
"merged_at": "2023-06-13T15:57:05"
} | Used the `TorchFormatter` to get torch tensors in iterable datasets when the format is set to "torch".
It uses the data from Arrow directly when possible, and otherwise applies `recursive_tensorize`.
When the format is set back to `format_type=None`, `cast_to_python_objects` is used.
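A minimal usage sketch of what this enables (the dataset name and column are illustrative, not part of this PR):

```python
# Sketch: with the torch format set on a streaming dataset, yielded values
# come back as torch tensors instead of plain Python objects.
from datasets import load_dataset

ds = load_dataset("mnist", split="train", streaming=True)  # illustrative dataset
ds = ds.with_format("torch")
first = next(iter(ds))
print(type(first["label"]))  # expected: <class 'torch.Tensor'>
```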
requires https://github.com/huggingface/datasets/pull/5821
close https://github.com/huggingface/datasets/issues/5793 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5852/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5850/comments | https://api.github.com/repos/huggingface/datasets/issues/5850/events | https://github.com/huggingface/datasets/pull/5850 | 1,707,678,911 | PR_kwDODunzps5QZALv | 5,850 | Make packaged builders skip non-supported file formats | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5850). All of your documentation changes will be reflected on that endpoint.",
"Good idea. @mariosasko!!!\r\n\r\nPlease note that before this PR, the files are not evenly distributed for archives: `_generate_examples` gets a list of iterators, one for each archive (uncompressed to a directory).",
"This change could create silent problems when loading files with extensions that are not listed here. For example\r\n\r\n```python\r\nload_dataset(\"text\", data_files=[\"20230515.log\"])\r\n```\r\n\r\nwouldn't even log anything to say that the file was ignored.\r\n\r\nMaybe it's possible to do this at data files patterns resolution ?\r\n\r\ne.g. in get_data_patterns_in_dataset_repository / get_data_patterns_locally we could return patterns that include the most common extension",
"@lhoestq the issue you evoke (.log files skipped by text builder if .log is not added to .txt as supported extension) persists whether you perform the skip at the pattern resolution or in the builder itself.\r\n\r\nThe solution is to add the .log extension (besides the .txt) as supported by text, independently of where we perform the skip (at pattern resolution or in the builder itself).\r\n\r\nAdditionally, at the time we call for pattern resolution, we do not know the builder class yet, so that we cannot pass specific file extensions. First we call data files pattern resolution, and afterwards we call `infer_module_for_data_files` and then know the builder class.",
"> @lhoestq the issue you evoke (.log files skipped by text builder if .log is not added to .txt as supported extension) persists whether you perform the skip at the pattern resolution or in the builder itself.\r\n\r\nNo I simply think it's a bad breaking change to not support\r\n\r\n```python\r\nload_dataset(\"<builder_name>\", data_files=[\"path/to/file_with_unknown_or_no_extension\"])\r\n# or\r\nload_dataset(\"<builder_name>\", data_files=[\"https://url.to/file_with_unknown_or_no_extension\"])\r\n```\r\n\r\nIdk if it's the easiest solution, but maybe it's possible to do the change only when inferring the patterns of dataset repositories. This should avoid this breaking change.\r\n\r\nFor example it could do something like that in `get_data_patterns_locally`\r\n\r\n```python\r\n Input:\r\n\r\n my_dataset_repository/\r\n ├── README.md\r\n ├── banner.png\r\n ├── data0.csv\r\n ├── data1.csv\r\n └── data2.csv\r\n\r\n Output:\r\n\r\n {\"train\": [\"**.csv\"]}\r\n```\r\n\r\ninstead of \r\n\r\n```python\r\n Output:\r\n\r\n {\"train\": [\"**\"]}\r\n```",
"I agree with @lhoestq - it should still be possible to request parsing a file with a specific builder even if the file's extension is \"invalid\" for the builder, and only ignore non-supported file formats when inferring the patterns.",
"Therefore, if I understand correctly, what you suggest is:\r\n- if the user passes a packaged builder to `load_dataset` (e.g. `load_dataset(\"csv\",...`), then the *passed* `data_files` should not be filtered to remove unsupported extensions. No breaking change in this case\r\n- if the user passes a no-script repo/folder to `load_dataset` (e.g. `load_dataset(\"my_dataset_repository\",...`), then the *inferred* data files should be filtered to remove the extensions that are not supported by the inferred module name builder\r\n - if the user passes `data_files` as well, then I guess these should not be filtered, to avoid any breaking change as in the first case above",
"Yes that would be ideal imo !",
"I think this now fulfills all the requirements.",
"I find it a bit confusing to still be able to pass data_files that are going to be silently ignored based on the value of `only_supported_extensions`. My suggestion was to have the right data files pattern, not to filter a posteriori (sorry if my last message was confusing).\r\n\r\nHaving the right data files pattern would also allow users to inspect what's actually being loaded with\r\n```\r\nload_dataset_builder(...).config.data_files\r\n```\r\nand it would list exactly what data files are used."
] | 2023-05-12T13:52:34 | 2023-06-07T12:26:38 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5850",
"html_url": "https://github.com/huggingface/datasets/pull/5850",
"diff_url": "https://github.com/huggingface/datasets/pull/5850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5850.patch",
"merged_at": null
} | This PR makes packaged builders skip non-supported file formats:
- The Csv builder skips non-CSV files
- The other packaged builders behave analogously (see the sketch below)
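A rough sketch of the skipping idea (the function and extension names are illustrative assumptions, not the actual implementation):

```python
import os

# Illustrative allow-list for the Csv builder; the real builder may
# support a different set of extensions.
CSV_EXTENSIONS = {".csv", ".tsv"}

def keep_supported_files(data_files, supported_extensions=CSV_EXTENSIONS):
    # Drop any resolved data file whose extension the builder cannot read.
    return [f for f in data_files if os.path.splitext(str(f))[1].lower() in supported_extensions]

# keep_supported_files(["data0.csv", "banner.png"]) -> ["data0.csv"]
```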
Fix #5849. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5850/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5849/comments | https://api.github.com/repos/huggingface/datasets/issues/5849/events | https://github.com/huggingface/datasets/issues/5849 | 1,707,551,511 | I_kwDODunzps5lxysX | 5,849 | CSV datasets should only read the CSV data files in the repo | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-05-12T12:29:53 | 2023-06-22T14:16:27 | 2023-06-22T14:16:27 | MEMBER | null | null | null | When a no-script dataset contains many CSV files plus a JPG file, the library infers the Csv builder, but then tries to read every file in the repo as CSV, including the JPG file.
I think the Csv builder should filter out non-CSV files when reading.
An analogous solution should be implemented for the other packaged builders.
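A hedged repro sketch (the repo id and layout are placeholders):

```python
# A no-script repo laid out as: data0.csv, data1.csv, banner.jpg
# The Csv builder gets inferred from the majority of files, but it then
# also tries to parse banner.jpg as CSV and raises a parsing error.
from datasets import load_dataset

ds = load_dataset("username/csv-repo-with-image")  # placeholder repo id
```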
Related to:
- https://huggingface.co/datasets/abidlabs/img2text/discussions/1
- https://github.com/gradio-app/gradio/pull/3973#issuecomment-1545409061
CC: @abidlabs @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5849/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5849/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5848/comments | https://api.github.com/repos/huggingface/datasets/issues/5848/events | https://github.com/huggingface/datasets/pull/5848 | 1,707,506,734 | PR_kwDODunzps5QYa1B | 5,848 | Add `accelerate` as metric's test dependency to fix CI error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007565 / 0.011353 (-0.003788) | 0.005361 / 0.011008 (-0.005647) | 0.098963 / 0.038508 (0.060455) | 0.034271 / 0.023109 (0.011162) | 0.323421 / 0.275898 (0.047523) | 0.348495 / 0.323480 (0.025015) | 0.006244 / 0.007986 (-0.001741) | 0.004215 / 0.004328 (-0.000113) | 0.073614 / 0.004250 (0.069364) | 0.049334 / 0.037052 (0.012282) | 0.315277 / 0.258489 (0.056788) | 0.354325 / 0.293841 (0.060484) | 0.035001 / 0.128546 (-0.093545) | 0.012149 / 0.075646 (-0.063497) | 0.335614 / 0.419271 (-0.083657) | 0.050532 / 0.043533 (0.006999) | 0.308500 / 0.255139 (0.053361) | 0.324620 / 0.283200 (0.041421) | 0.110241 / 0.141683 (-0.031442) | 1.443923 / 1.452155 (-0.008232) | 1.559289 / 1.492716 (0.066573) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207629 / 0.018006 (0.189622) | 0.433251 / 0.000490 (0.432762) | 0.003021 / 0.000200 (0.002821) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028312 / 0.037411 (-0.009100) | 0.111829 / 0.014526 (0.097303) | 0.127099 / 0.176557 (-0.049458) | 0.184702 / 0.737135 (-0.552433) | 0.125062 / 0.296338 (-0.171277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399451 / 0.215209 (0.184242) | 3.966528 / 2.077655 (1.888874) | 1.826004 / 1.504120 (0.321884) | 1.669547 / 1.541195 (0.128353) | 1.751584 / 1.468490 
(0.283094) | 0.688308 / 4.584777 (-3.896469) | 3.813275 / 3.745712 (0.067562) | 3.181554 / 5.269862 (-2.088307) | 1.750566 / 4.565676 (-2.815111) | 0.085038 / 0.424275 (-0.339237) | 0.011992 / 0.007607 (0.004385) | 0.502374 / 0.226044 (0.276330) | 4.970614 / 2.268929 (2.701686) | 2.309617 / 55.444624 (-53.135007) | 2.012427 / 6.876477 (-4.864050) | 2.156348 / 2.142072 (0.014276) | 0.834415 / 4.805227 (-3.970812) | 0.167912 / 6.500664 (-6.332752) | 0.065711 / 0.075469 (-0.009758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223132 / 1.841788 (-0.618656) | 15.126753 / 8.074308 (7.052445) | 14.829184 / 10.191392 (4.637792) | 0.142582 / 0.680424 (-0.537842) | 0.017483 / 0.534201 (-0.516718) | 0.429768 / 0.579283 (-0.149516) | 0.422745 / 0.434364 (-0.011619) | 0.508813 / 0.540337 (-0.031525) | 0.618716 / 1.386936 (-0.768220) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007749 / 0.011353 (-0.003604) | 0.005433 / 0.011008 (-0.005576) | 0.076223 / 0.038508 (0.037715) | 0.036334 / 0.023109 (0.013225) | 0.375339 / 0.275898 (0.099441) | 0.413674 / 0.323480 (0.090194) | 0.006207 / 0.007986 (-0.001778) | 0.004085 / 0.004328 (-0.000244) | 0.076154 / 0.004250 (0.071904) | 0.050324 / 0.037052 (0.013271) | 0.382919 / 0.258489 (0.124429) | 0.442508 / 0.293841 (0.148667) | 0.035951 / 0.128546 (-0.092595) | 0.012067 / 0.075646 (-0.063580) | 0.087649 / 0.419271 (-0.331623) | 0.048786 / 0.043533 (0.005253) | 0.373541 / 0.255139 (0.118402) | 0.400437 / 0.283200 (0.117237) | 0.102622 / 0.141683 (-0.039061) | 1.472443 / 1.452155 (0.020288) | 1.580178 / 1.492716 (0.087462) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222105 / 0.018006 (0.204098) | 0.445465 / 0.000490 (0.444975) | 0.003671 / 0.000200 (0.003471) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030808 / 0.037411 (-0.006603) | 0.116687 / 0.014526 (0.102161) | 0.124972 / 0.176557 (-0.051584) | 0.175621 / 0.737135 (-0.561514) | 0.129029 / 0.296338 (-0.167310) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434627 / 0.215209 (0.219418) | 4.330268 / 2.077655 (2.252613) | 2.140266 / 1.504120 (0.636146) | 1.960705 / 1.541195 (0.419510) | 2.035949 / 1.468490 (0.567459) | 0.696830 / 4.584777 (-3.887947) | 3.790468 / 3.745712 (0.044756) | 3.194112 / 5.269862 (-2.075750) | 1.577728 / 4.565676 (-2.987948) | 0.085445 / 0.424275 (-0.338830) | 0.012207 / 0.007607 (0.004600) | 0.555199 / 0.226044 (0.329154) | 5.551539 / 2.268929 (3.282610) | 2.630917 / 55.444624 (-52.813707) | 2.383362 / 6.876477 (-4.493114) | 2.476301 / 2.142072 (0.334229) | 0.845773 / 4.805227 (-3.959455) | 0.169229 / 6.500664 (-6.331435) | 0.066064 / 0.075469 (-0.009405) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277543 / 1.841788 (-0.564245) | 15.775637 / 8.074308 (7.701329) | 13.528588 / 10.191392 (3.337196) | 0.167428 / 0.680424 (-0.512996) | 0.017581 / 0.534201 (-0.516620) | 0.454472 / 0.579283 (-0.124811) | 0.427987 / 0.434364 (-0.006377) | 0.551512 / 0.540337 (0.011175) | 0.650811 / 1.386936 (-0.736125) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#96a6f5f526cc90330df597ae0097274742d5b84f \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009800 / 0.011353 (-0.001552) | 0.006443 / 0.011008 (-0.004565) | 0.144137 / 0.038508 (0.105629) | 0.037493 / 0.023109 (0.014383) | 0.482306 / 0.275898 (0.206408) | 0.467625 / 0.323480 (0.144145) | 0.006812 / 0.007986 (-0.001174) | 0.004810 / 0.004328 (0.000481) | 0.109047 / 0.004250 (0.104796) | 0.047169 / 0.037052 (0.010116) | 0.451253 / 0.258489 (0.192764) | 0.511339 / 0.293841 (0.217498) | 0.055583 / 0.128546 (-0.072963) | 0.021810 / 0.075646 (-0.053836) | 0.426522 / 0.419271 (0.007250) | 0.070282 / 0.043533 (0.026749) | 0.469631 / 0.255139 (0.214492) | 0.484951 / 0.283200 (0.201751) | 0.117370 / 0.141683 (-0.024313) | 1.809917 / 1.452155 (0.357763) | 1.882659 / 1.492716 (0.389943) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223843 / 0.018006 (0.205837) | 0.549216 / 0.000490 (0.548726) | 0.007120 / 0.000200 (0.006920) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033057 / 0.037411 (-0.004354) | 0.128242 / 0.014526 (0.113716) | 0.140906 / 0.176557 (-0.035650) | 0.213122 / 0.737135 (-0.524013) | 0.148115 / 0.296338 (-0.148224) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.638712 / 0.215209 (0.423503) | 6.383684 / 2.077655 (4.306029) | 2.477020 / 1.504120 (0.972900) | 2.129190 / 1.541195 (0.587996) | 2.230503 / 1.468490 
(0.762013) | 1.367167 / 4.584777 (-3.217610) | 5.570586 / 3.745712 (1.824873) | 5.462857 / 5.269862 (0.192996) | 2.990604 / 4.565676 (-1.575073) | 0.146543 / 0.424275 (-0.277732) | 0.016060 / 0.007607 (0.008453) | 0.812691 / 0.226044 (0.586646) | 7.928041 / 2.268929 (5.659112) | 3.329494 / 55.444624 (-52.115130) | 2.523452 / 6.876477 (-4.353025) | 2.672374 / 2.142072 (0.530302) | 1.598554 / 4.805227 (-3.206673) | 0.284727 / 6.500664 (-6.215937) | 0.080359 / 0.075469 (0.004889) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.501112 / 1.841788 (-0.340675) | 17.553644 / 8.074308 (9.479335) | 22.704062 / 10.191392 (12.512670) | 0.225575 / 0.680424 (-0.454849) | 0.026531 / 0.534201 (-0.507670) | 0.520129 / 0.579283 (-0.059154) | 0.626220 / 0.434364 (0.191856) | 0.631740 / 0.540337 (0.091403) | 0.750611 / 1.386936 (-0.636325) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009866 / 0.011353 (-0.001487) | 0.005733 / 0.011008 (-0.005275) | 0.111529 / 0.038508 (0.073021) | 0.042001 / 0.023109 (0.018891) | 0.458578 / 0.275898 (0.182680) | 0.507796 / 0.323480 (0.184316) | 0.006547 / 0.007986 (-0.001438) | 0.005611 / 0.004328 (0.001282) | 0.115321 / 0.004250 (0.111070) | 0.048741 / 0.037052 (0.011689) | 0.447611 / 0.258489 (0.189122) | 0.531830 / 0.293841 (0.237989) | 0.052176 / 0.128546 (-0.076370) | 0.022431 / 0.075646 (-0.053216) | 0.120709 / 0.419271 (-0.298562) | 0.067301 / 0.043533 (0.023769) | 0.460577 / 0.255139 (0.205438) | 0.497805 / 0.283200 (0.214605) | 0.121830 / 0.141683 (-0.019853) | 1.876436 / 1.452155 (0.424281) | 1.983491 / 1.492716 (0.490775) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230982 / 0.018006 (0.212976) | 0.540643 / 0.000490 (0.540153) | 0.004646 / 0.000200 (0.004446) | 0.000131 / 0.000054 (0.000077) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034230 / 0.037411 (-0.003181) | 0.136454 / 0.014526 (0.121928) | 0.143370 / 0.176557 (-0.033187) | 0.206752 / 0.737135 (-0.530384) | 0.148722 / 0.296338 (-0.147617) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.704667 / 0.215209 (0.489458) | 7.112079 / 2.077655 (5.034424) | 3.083916 / 1.504120 (1.579797) | 2.606388 / 1.541195 (1.065193) | 2.738505 / 1.468490 (1.270015) | 1.314897 / 4.584777 (-3.269880) | 5.764442 / 3.745712 (2.018729) | 3.491890 / 5.269862 (-1.777972) | 2.299983 / 4.565676 (-2.265693) | 0.169655 / 0.424275 (-0.254620) | 0.015251 / 0.007607 (0.007643) | 0.977230 / 0.226044 (0.751186) | 9.697773 / 2.268929 (7.428844) | 3.826928 / 55.444624 (-51.617697) | 3.108238 / 6.876477 (-3.768239) | 3.103242 / 2.142072 (0.961169) | 1.586645 / 4.805227 (-3.218582) | 0.287181 / 6.500664 (-6.213483) | 0.107332 / 0.075469 (0.031863) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.712710 / 1.841788 (-0.129077) | 19.169403 / 8.074308 (11.095095) | 21.777301 / 10.191392 (11.585909) | 0.216918 / 0.680424 (-0.463506) | 0.026551 / 0.534201 (-0.507650) | 0.570383 / 0.579283 (-0.008900) | 0.643885 / 0.434364 (0.209521) | 0.673906 / 0.540337 (0.133568) | 0.824573 / 1.386936 (-0.562363) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4ead18b6921c9576a3078d2fb685c38f1e1a4b8a \"CML watermark\")\n"
] | 2023-05-12T12:01:01 | 2023-05-12T13:48:47 | 2023-05-12T13:39:06 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5848",
"html_url": "https://github.com/huggingface/datasets/pull/5848",
"diff_url": "https://github.com/huggingface/datasets/pull/5848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5848.patch",
"merged_at": "2023-05-12T13:39:06"
} | The `frugalscore` metric uses Transformers' Trainer, which requires `accelerate` as of a recent release.
Fixes the following [CI error](https://github.com/huggingface/datasets/actions/runs/4950900048/jobs/8855148703?pr=5845). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5848/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5847/comments | https://api.github.com/repos/huggingface/datasets/issues/5847/events | https://github.com/huggingface/datasets/issues/5847 | 1,706,616,634 | I_kwDODunzps5luOc6 | 5,847 | Streaming IterableDataset not working with translation pipeline | {
"login": "jlquinn",
"id": 826841,
"node_id": "MDQ6VXNlcjgyNjg0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/826841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlquinn",
"html_url": "https://github.com/jlquinn",
"followers_url": "https://api.github.com/users/jlquinn/followers",
"following_url": "https://api.github.com/users/jlquinn/following{/other_user}",
"gists_url": "https://api.github.com/users/jlquinn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlquinn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlquinn/subscriptions",
"organizations_url": "https://api.github.com/users/jlquinn/orgs",
"repos_url": "https://api.github.com/users/jlquinn/repos",
"events_url": "https://api.github.com/users/jlquinn/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlquinn/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I wasn't sure to file this against transformers or datasets.",
"[`KeyDataset`](https://github.com/huggingface/transformers/blob/7f8b909189547944617741d8d3c6c84504701693/src/transformers/pipelines/pt_utils.py#L296) doesn't support iterable datasets, so you either need to implement a version that does (and also indexing nested (translation) fields):\r\n\r\n```python\r\nfrom torch.utils.data import Dataset, IterableDataset\r\n\r\ndef build_key_fetcher(key: str):\r\n def _key_fetcher(item):\r\n for sub_key in key.split(\".\"):\r\n item = item[sub_key]\r\n return item\r\n return _key_fetcher\r\n\r\nclass KeyDataset(Dataset):\r\n def __new__(cls, dataset: Dataset, key: str):\r\n cls = _KeyIterableDataset if isinstance(dataset, IterableDataset) else _KeyMapDataset\r\n self = object.__new__(cls)\r\n self.dataset = dataset\r\n self.key = key\r\n self._key_fetcher = build_key_fetcher(key)\r\n return self\r\n\r\nclass _KeyMapDataset(KeyDataset):\r\n def __getitem__(self, i):\r\n return self._key_fetcher(self.dataset[i])\r\n \r\n def __len__(self):\r\n return len(self.dataset)\r\n\r\n\r\nclass _KeyIterableDataset(KeyDataset):\r\n def __iter__(self):\r\n for ex in self.dataset:\r\n yield self._key_fetcher(ex)\r\n\r\nks = KeyDataset(ds, \"translation.en\")\r\n```\r\n\r\nor use `IterableDataset`'s `map`:\r\n```python\r\ndef fetch_en_translation(ex):\r\n return {\"en\": ex[\"translation\"][\"en\"]}\r\nks = ds.map(fetch_en_translation, remove_columns=ds.column_names) \r\n```\r\n\r\ncc @sgugger: Perhaps the `KeyDataset` + PyTorch `IterableDataset` case should be supported by Transformers",
"@mariosasko The map snippet didn't quite work, but gave me enough of a clue to get it working. The following snippet does work:\r\n```\r\ndef en_translation(x):\r\n return {\"en\":x['translation']['en']}\r\nks = ds.map(en_translation, remove_columns=['translation'])\r\ntest=[]\r\nfor x in iter(ks):\r\n test.append(x['en'])\r\nxx= mt(test)\r\nfor x in xx:\r\n print(x)\r\n```\r\n\r\nI tried just returning `x['translation']['en`]` in the helper function instead of the dict, but that didn't give me an iterator over strings that pipeline would work with either.\r\n\r\n\r\nThe snippet as is gives the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/pdb.py\", line 1704, in main\r\n pdb._runscript(mainpyfile)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/pdb.py\", line 1573, in _runscript\r\n self.run(statement)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/bdb.py\", line 580, in run\r\n exec(cmd, globals, locals)\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/jlquinn/models/hf/ende.t5.pipe.py\", line 1, in <module>\r\n from transformers import pipeline\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 335, in __call__\r\n return super().__call__(*args, **kwargs)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 138, in __call__\r\n result = super().__call__(*args, **kwargs)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 1027, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 1033, in run_single\r\n model_inputs = self.preprocess(inputs, **preprocess_params)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 287, in preprocess\r\n return super()._parse_and_tokenize(*args, truncation=truncation)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 100, in _parse_and_tokenize\r\n raise ValueError(\r\nValueError: `args[0]`: <datasets.iterable_dataset.IterableDataset object at 0x7f5fd38ef1c0> have the wrong format. The should be either of type `str` or type `list`\r\nUncaught exception. Entering post mortem debugging\r\nRunning 'cont' or 'step' will restart the program\r\n```\r\n",
"So perhaps there's no bug exactly, but I would love to see two things: 1) improve the documentation to better understand what's really getting returned. 2) update the example provided of using transformer pipeline with a dataset to include the oddball case that translation appears to be.",
"cc @Narsil ",
"Hi,\r\n\r\nfor the original snippet, the issue is that `streaming` datasets are not countable (they have no len) and therefore `KeyDataset` cannot work with them ( KeyDataset is a dataset and therefore requires a length).\r\n\r\nI modified slightly the original snippet to make it work:\r\n\r\n```python\r\nfrom transformers import pipeline\r\nfrom transformers.pipelines.pt_utils import KeyDataset\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(path=\"wmt14\", name=\"fr-en\", split=\"test\", streaming=True)\r\nbs = 1\r\nmt = pipeline(\r\n \"translation_en_to_fr\", model=\"hf-internal-testing/tiny-random-T5ForConditionalGeneration\", batch_size=bs\r\n)\r\n\r\n\r\ndef ks(ds):\r\n for item in ds:\r\n yield item[\"translation\"][\"en\"]\r\n\r\n\r\n# print(f\"{ks}\")\r\nxx = mt(ks(ds))\r\nfor x in xx:\r\n print(x)\r\n```\r\n\r\nThis is what the first example in the docs suggests to use (as it's the most flexible): https://huggingface.co/docs/transformers/v4.29.1/en/pipeline_tutorial#using-pipelines-on-a-dataset\r\n\r\n`KeyDataset` really exists only to get a `sized` dataset to work nicer with `tqdm` for instance.\r\n\r\n@sgugger should we update the docs to remove `KeyDataset` entirely ? (We can add a note to pass manually the length of the data to tqdm so that the progress bar option can still be easy to use ?)\r\n",
"Maybe moving `KeyDataset` later on in the guide and specify it's mostly for streaming then? Or is it also necessary for batch_size>1 (which is what the current doc implies)?",
"Hmm\r\n\r\nIterator (`yield`) :\r\n- Not countable\r\n- Super flexible\r\n- Cannot use `num_workers>1` (threading requires indexing at the correct location, iterators require to iterate in order,so each thread would iterate over the full thing being genuinely a bad idea)\r\n- Can batch\r\n- tqdm doesn't show a nice progress bar (it has no total)\r\n\r\nKeyDataset (Or any PyTorch like Dataset returning the correct object for the pipeline):\r\n- Countable\r\n- Less flexible (not applicable to datasets with streaming), can only work on single keys. But should be easy to read and write your own (like @mariosasko did)\r\n- Works with `num_workers > 1` (Every worker can fetch exactly what's needed)\r\n- Can batch \r\n- tqdm shows a nice progress bar\r\n\r\nIn the docs, if we update all the examples to use iterators, and include an example with\r\n\r\n```\r\nfor item in tqdm.tqdm(pipe(iterator(), total=len(dataset))))\r\n```\r\n\r\nWe can save the biggest feature that doesn't work out of the box with iterators which is the tqdm progress bar.\r\n\r\n`num_workers>1` we can mention it, but it tends to be an issues only on CPU intensive loads, like image (and maybe audio)\r\n"
] | 2023-05-11T21:52:38 | 2023-05-16T15:59:55 | null | NONE | null | null | null | ### Describe the bug
I'm trying to use a streaming dataset for translation inference to avoid downloading the training data.
I'm using a pipeline and a dataset, and following the guidance in the tutorial.
Instead, I get an exception saying that IterableDataset has no len().
### Steps to reproduce the bug
CODE:
```
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset
ds = load_dataset(path="wmt14", name="fr-en", split="test", streaming=True)
bs=1
mt = pipeline("translation_en_to_fr", model="t5-base", batch_size=bs)
#print(mt("hello")) THIS WORKS
ks = KeyDataset(ds, "translation")
print(f"{ks}")
xx= mt(ks)
for x in xx:
print(x)
```
RUN:
```
(watnlp) [jlquinn@bertdev01 hf]$ python ende.t5.pipe.py
2023-05-11 16:48:08.817572: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-05-11 16:48:08.821388: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-05-11 16:48:08.821407: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
<transformers.pipelines.pt_utils.KeyDataset object at 0x7f61ed5da9d0>
Traceback (most recent call last):
File "/home/jlquinn/models/hf/ende.t5.pipe.py", line 11, in <module>
for x in xx:
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 111, in __next__
item = next(self.iterator)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 111, in __next__
item = next(self.iterator)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 720, in _next_data
index = self._next_index() # may raise StopIteration
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 671, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/sampler.py", line 247, in __iter__
for idx in self.sampler:
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/sampler.py", line 76, in __iter__
return iter(range(len(self.data_source)))
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 13, in __len__
return len(self.dataset)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 289, in __len__
return len(self.dataset)
TypeError: object of type 'IterableDataset' has no len()
```
### Expected behavior
I'm expecting French translations of the English test set to be printed.
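For anyone hitting the same wall: pipelines accept plain Python iterators, which need no `__len__`, so a generator sidesteps the `KeyDataset` limitation entirely. A minimal sketch of that workaround, mirroring the snippet above (the `english_sentences` helper is my own name, not part of either library):
```
from transformers import pipeline
from datasets import load_dataset

ds = load_dataset(path="wmt14", name="fr-en", split="test", streaming=True)
mt = pipeline("translation_en_to_fr", model="t5-base", batch_size=1)

def english_sentences(dataset):
    # A plain generator has no __len__, but the pipeline only needs
    # something it can iterate over, so streaming datasets work here.
    for example in dataset:
        yield example["translation"]["en"]

for translation in mt(english_sentences(ds)):
    print(translation)
```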
### Environment info
Run on CPU with no GPU.
RHEL 8.7 x86_64
python 3.9.0
transformers 4.17.0
datasets 2.0.0
tokenizers 0.12.1
```
(watnlp) [jlquinn@bertdev01 hf]$ datasets-cli env
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-372.19.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.0
- PyArrow version: 8.0.0
- Pandas version: 1.4.4
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5847/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5851/comments | https://api.github.com/repos/huggingface/datasets/issues/5851/events | https://github.com/huggingface/datasets/issues/5851 | 1,707,907,048 | I_kwDODunzps5lzJfo | 5,851 | Error message not clear in interleaving datasets | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-05-11T20:52:13 | 2023-05-23T10:32:59 | 2023-05-23T10:32:59 | NONE | null | null | null | ### System Info
standard env
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to interleave the 'sciq', 'wiki' and 'pile-enron' datasets. I think the error I made was that I loaded the train split of one but not the other, but the error message is not too helpful-
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[/home/suryahari/Vornoi/save_model_ops.py](https://vscode-remote+ssh-002dremote-002bthomsonlab-002d2-002ejamesgornet-002ecom.vscode-resource.vscode-cdn.net/home/suryahari/Vornoi/save_model_ops.py) in line 3
[41](file:///home/suryahari/Vornoi/save_model_ops.py?line=40) # %%
----> [43](file:///home/suryahari/Vornoi/save_model_ops.py?line=42) dataset = interleave_datasets(datasets, stopping_strategy="all_exhausted")
File [~/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py:124](https://vscode-remote+ssh-002dremote-002bthomsonlab-002d2-002ejamesgornet-002ecom.vscode-resource.vscode-cdn.net/home/suryahari/~/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py:124), in interleave_datasets(datasets, probabilities, seed, info, split, stopping_strategy)
[122](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=121) for dataset in datasets[1:]:
[123](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=122) if (map_style and not isinstance(dataset, Dataset)) or (iterable and not isinstance(dataset, IterableDataset)):
--> [124](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=123) raise ValueError(
[125](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=124) f"Unable to interleave a {type(datasets[0])} with a {type(dataset)}. Expected a list of Dataset objects or a list of IterableDataset objects."
[126](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=125) )
[127](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=126) if stopping_strategy not in ["first_exhausted", "all_exhausted"]:
[128](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=127) raise ValueError(f"{stopping_strategy} is not supported. Please enter a valid stopping_strategy.")
ValueError: Unable to interleave a with a . Expected a list of Dataset objects or a list of IterableDataset objects.
```
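For reference, the check that raises here only verifies that all inputs are the same kind: `interleave_datasets` wants either all map-style `Dataset` objects or all streaming `IterableDataset` objects. A minimal sketch of loading everything consistently (all names except 'sciq' are placeholders for whichever wiki/enron repos were actually used):
```
from datasets import load_dataset, interleave_datasets

# The point is the uniform streaming=True, which makes every entry
# an IterableDataset, so the isinstance check in combine.py passes.
names = ["sciq", "wiki-placeholder", "pile-enron-placeholder"]
datasets = [load_dataset(n, split="train", streaming=True) for n in names]

# In practice the sources also need compatible columns/features.
dataset = interleave_datasets(datasets, stopping_strategy="all_exhausted")
```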
### Expected behavior
the error message should hopefully be clearer | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5851/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5846/comments | https://api.github.com/repos/huggingface/datasets/issues/5846/events | https://github.com/huggingface/datasets/issues/5846 | 1,706,289,290 | I_kwDODunzps5ls-iK | 5,846 | load_dataset('bigcode/the-stack-dedup', streaming=True) very slow! | {
"login": "tbenthompson",
"id": 4241811,
"node_id": "MDQ6VXNlcjQyNDE4MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4241811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tbenthompson",
"html_url": "https://github.com/tbenthompson",
"followers_url": "https://api.github.com/users/tbenthompson/followers",
"following_url": "https://api.github.com/users/tbenthompson/following{/other_user}",
"gists_url": "https://api.github.com/users/tbenthompson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tbenthompson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tbenthompson/subscriptions",
"organizations_url": "https://api.github.com/users/tbenthompson/orgs",
"repos_url": "https://api.github.com/users/tbenthompson/repos",
"events_url": "https://api.github.com/users/tbenthompson/events{/privacy}",
"received_events_url": "https://api.github.com/users/tbenthompson/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This is due to the slow resolution of the data files: https://github.com/huggingface/datasets/issues/5537.\r\n\r\nWe plan to switch to `huggingface_hub`'s `HfFileSystem` soon to make the resolution faster (will be up to 20x faster once we merge https://github.com/huggingface/huggingface_hub/pull/1443)\r\n\r\n",
"You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even tens of hours.",
"> You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even tens of hours.\r\n\r\nThat's unrelated to the problem discussed in this issue. ",
"> > You're right, when I try to parse more than 50GB of text data, I also get very slow, usually taking hours or even tens of hours.\r\n> \r\n> That's unrelated to the problem discussed in this issue.\r\n\r\nSorry, I misunderstood it."
] | 2023-05-11T17:58:57 | 2023-05-16T03:23:46 | null | NONE | null | null | null | ### Describe the bug
Running
```
import datasets
ds = datasets.load_dataset('bigcode/the-stack-dedup', streaming=True)
```
takes about 2.5 minutes!
I would expect this to be near instantaneous. With other datasets, the runtime is one or two seconds.
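For anyone wanting to quantify this, a minimal timing sketch (stdlib only, nothing dataset-specific):
```
import time
import datasets

start = time.perf_counter()
ds = datasets.load_dataset("bigcode/the-stack-dedup", streaming=True)
print(f"load_dataset returned after {time.perf_counter() - start:.1f}s")
```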
### Environment info
- `datasets` version: 2.11.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5846/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5845/comments | https://api.github.com/repos/huggingface/datasets/issues/5845/events | https://github.com/huggingface/datasets/pull/5845 | 1,706,253,251 | PR_kwDODunzps5QUMjS | 5,845 | Add `date_format` param to the CSV reader | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007592 / 0.011353 (-0.003761) | 0.005223 / 0.011008 (-0.005786) | 0.110218 / 0.038508 (0.071710) | 0.027644 / 0.023109 (0.004534) | 0.335063 / 0.275898 (0.059165) | 0.347102 / 0.323480 (0.023623) | 0.005107 / 0.007986 (-0.002878) | 0.003932 / 0.004328 (-0.000396) | 0.086095 / 0.004250 (0.081845) | 0.034735 / 0.037052 (-0.002317) | 0.329029 / 0.258489 (0.070540) | 0.370282 / 0.293841 (0.076441) | 0.043040 / 0.128546 (-0.085507) | 0.019626 / 0.075646 (-0.056021) | 0.336452 / 0.419271 (-0.082819) | 0.070365 / 0.043533 (0.026832) | 0.326881 / 0.255139 (0.071742) | 0.354984 / 0.283200 (0.071785) | 0.102605 / 0.141683 (-0.039077) | 1.459161 / 1.452155 (0.007007) | 1.453599 / 1.492716 (-0.039117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201021 / 0.018006 (0.183015) | 0.456415 / 0.000490 (0.455926) | 0.012349 / 0.000200 (0.012149) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025199 / 0.037411 (-0.012213) | 0.098536 / 0.014526 (0.084010) | 0.107528 / 0.176557 (-0.069028) | 0.160492 / 0.737135 (-0.576643) | 0.108660 / 0.296338 (-0.187679) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.527020 / 0.215209 (0.311811) | 5.357635 / 2.077655 (3.279980) | 2.062930 / 1.504120 (0.558811) | 1.783009 / 1.541195 (0.241815) | 1.840225 / 1.468490 
(0.371735) | 1.074278 / 4.584777 (-3.510499) | 4.710533 / 3.745712 (0.964821) | 2.611202 / 5.269862 (-2.658660) | 1.885487 / 4.565676 (-2.680189) | 0.123201 / 0.424275 (-0.301074) | 0.013880 / 0.007607 (0.006273) | 0.636511 / 0.226044 (0.410467) | 6.516075 / 2.268929 (4.247146) | 2.710138 / 55.444624 (-52.734486) | 2.046606 / 6.876477 (-4.829871) | 2.085907 / 2.142072 (-0.056166) | 1.199489 / 4.805227 (-3.605738) | 0.211668 / 6.500664 (-6.288996) | 0.075436 / 0.075469 (-0.000033) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.219771 / 1.841788 (-0.622016) | 14.276215 / 8.074308 (6.201907) | 16.611529 / 10.191392 (6.420137) | 0.221091 / 0.680424 (-0.459333) | 0.024922 / 0.534201 (-0.509279) | 0.431906 / 0.579283 (-0.147377) | 0.518863 / 0.434364 (0.084499) | 0.515366 / 0.540337 (-0.024971) | 0.640411 / 1.386936 (-0.746525) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007955 / 0.011353 (-0.003398) | 0.004813 / 0.011008 (-0.006196) | 0.076508 / 0.038508 (0.038000) | 0.028137 / 0.023109 (0.005028) | 0.349609 / 0.275898 (0.073711) | 0.403588 / 0.323480 (0.080109) | 0.005456 / 0.007986 (-0.002530) | 0.005677 / 0.004328 (0.001349) | 0.076882 / 0.004250 (0.072632) | 0.039832 / 0.037052 (0.002779) | 0.351930 / 0.258489 (0.093440) | 0.390492 / 0.293841 (0.096651) | 0.045199 / 0.128546 (-0.083347) | 0.023945 / 0.075646 (-0.051701) | 0.091140 / 0.419271 (-0.328132) | 0.057728 / 0.043533 (0.014195) | 0.370663 / 0.255139 (0.115524) | 0.380649 / 0.283200 (0.097449) | 0.097017 / 0.141683 (-0.044666) | 1.362248 / 1.452155 (-0.089907) | 1.445699 / 1.492716 (-0.047018) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204207 / 0.018006 (0.186201) | 0.474471 / 0.000490 (0.473981) | 0.012187 / 0.000200 (0.011987) | 0.000151 / 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023123 / 0.037411 (-0.014288) | 0.097547 / 0.014526 (0.083021) | 0.113877 / 0.176557 (-0.062679) | 0.158307 / 0.737135 (-0.578828) | 0.113876 / 0.296338 (-0.182462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.519920 / 0.215209 (0.304711) | 5.384371 / 2.077655 (3.306716) | 2.263276 / 1.504120 (0.759156) | 1.960604 / 1.541195 (0.419409) | 2.022864 / 1.468490 (0.554374) | 1.015430 / 4.584777 (-3.569347) | 4.774426 / 3.745712 (1.028714) | 4.549598 / 5.269862 (-0.720264) | 2.412638 / 4.565676 (-2.153039) | 0.117983 / 0.424275 (-0.306292) | 0.013340 / 0.007607 (0.005733) | 0.639826 / 0.226044 (0.413782) | 6.491622 / 2.268929 (4.222693) | 2.946892 / 55.444624 (-52.497732) | 2.376393 / 6.876477 (-4.500084) | 2.285592 / 2.142072 (0.143519) | 1.185049 / 4.805227 (-3.620178) | 0.204127 / 6.500664 (-6.296537) | 0.070285 / 0.075469 (-0.005184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.439736 / 1.841788 (-0.402052) | 14.852087 / 8.074308 (6.777779) | 15.675742 / 10.191392 (5.484350) | 0.206577 / 0.680424 (-0.473846) | 0.031688 / 0.534201 (-0.502513) | 0.471003 / 0.579283 (-0.108280) | 0.505449 / 0.434364 (0.071085) | 0.506114 / 0.540337 (-0.034224) | 0.583752 / 1.386936 (-0.803184) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d6fcff8a031db39cb31079bc1fa62ded6e35218c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012965 / 0.011353 (0.001612) | 0.006660 / 0.011008 (-0.004348) | 0.126060 / 0.038508 (0.087551) | 0.041154 / 0.023109 (0.018045) | 0.413428 / 0.275898 (0.137530) | 0.429035 / 0.323480 (0.105555) | 0.006680 / 0.007986 (-0.001305) | 0.005063 / 0.004328 (0.000734) | 0.092161 / 0.004250 (0.087911) | 0.056092 / 0.037052 (0.019039) | 0.421460 / 0.258489 (0.162971) | 0.450291 / 0.293841 (0.156450) | 0.050820 / 0.128546 (-0.077726) | 0.021392 / 0.075646 (-0.054255) | 0.426915 / 0.419271 (0.007643) | 0.064908 / 0.043533 (0.021375) | 0.406769 / 0.255139 (0.151630) | 0.434344 / 0.283200 (0.151144) | 0.127967 / 0.141683 (-0.013716) | 1.922414 / 1.452155 (0.470260) | 1.940717 / 1.492716 (0.448000) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288024 / 0.018006 (0.270017) | 0.615859 / 0.000490 (0.615369) | 0.007095 / 0.000200 (0.006895) | 0.000160 / 0.000054 (0.000106) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028182 / 0.037411 (-0.009230) | 0.126277 / 0.014526 (0.111752) | 0.131687 / 0.176557 (-0.044870) | 0.206191 / 0.737135 (-0.530944) | 0.141799 / 0.296338 (-0.154539) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.631580 / 0.215209 (0.416371) | 6.141942 / 2.077655 (4.064287) | 2.476721 / 1.504120 (0.972602) | 2.128850 / 1.541195 (0.587655) | 2.236468 / 1.468490 
(0.767978) | 1.188665 / 4.584777 (-3.396112) | 5.481179 / 3.745712 (1.735467) | 3.120333 / 5.269862 (-2.149529) | 2.365889 / 4.565676 (-2.199787) | 0.145081 / 0.424275 (-0.279194) | 0.015866 / 0.007607 (0.008259) | 0.795650 / 0.226044 (0.569605) | 7.595289 / 2.268929 (5.326361) | 3.174418 / 55.444624 (-52.270207) | 2.905207 / 6.876477 (-3.971270) | 2.428263 / 2.142072 (0.286191) | 1.408900 / 4.805227 (-3.396328) | 0.265485 / 6.500664 (-6.235179) | 0.083882 / 0.075469 (0.008413) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517025 / 1.841788 (-0.324762) | 18.110288 / 8.074308 (10.035980) | 20.810003 / 10.191392 (10.618611) | 0.210380 / 0.680424 (-0.470044) | 0.030180 / 0.534201 (-0.504021) | 0.523453 / 0.579283 (-0.055830) | 0.603896 / 0.434364 (0.169532) | 0.622554 / 0.540337 (0.082216) | 0.737973 / 1.386936 (-0.648963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009795 / 0.011353 (-0.001558) | 0.006269 / 0.011008 (-0.004739) | 0.099938 / 0.038508 (0.061430) | 0.035162 / 0.023109 (0.012052) | 0.506353 / 0.275898 (0.230455) | 0.527804 / 0.323480 (0.204324) | 0.007211 / 0.007986 (-0.000775) | 0.005498 / 0.004328 (0.001169) | 0.098325 / 0.004250 (0.094075) | 0.054513 / 0.037052 (0.017461) | 0.525764 / 0.258489 (0.267274) | 0.576699 / 0.293841 (0.282858) | 0.052800 / 0.128546 (-0.075747) | 0.021192 / 0.075646 (-0.054454) | 0.117676 / 0.419271 (-0.301596) | 0.055415 / 0.043533 (0.011882) | 0.516746 / 0.255139 (0.261607) | 0.528417 / 0.283200 (0.245217) | 0.116947 / 0.141683 (-0.024735) | 1.757864 / 1.452155 (0.305709) | 2.043632 / 1.492716 (0.550916) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284018 / 0.018006 (0.266011) | 0.595086 / 0.000490 (0.594596) | 0.001945 / 0.000200 (0.001745) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032255 / 0.037411 (-0.005157) | 0.128201 / 0.014526 (0.113676) | 0.139189 / 0.176557 (-0.037367) | 0.199750 / 0.737135 (-0.537385) | 0.149406 / 0.296338 (-0.146933) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.652184 / 0.215209 (0.436975) | 6.453319 / 2.077655 (4.375664) | 2.831566 / 1.504120 (1.327446) | 2.453064 / 1.541195 (0.911869) | 2.622056 / 1.468490 (1.153566) | 1.191279 / 4.584777 (-3.393498) | 5.504720 / 3.745712 (1.759007) | 5.916900 / 5.269862 (0.647038) | 2.974400 / 4.565676 (-1.591277) | 0.142851 / 0.424275 (-0.281424) | 0.015241 / 0.007607 (0.007634) | 0.917537 / 0.226044 (0.691493) | 8.277645 / 2.268929 (6.008717) | 3.700495 / 55.444624 (-51.744130) | 3.047127 / 6.876477 (-3.829350) | 3.093216 / 2.142072 (0.951143) | 1.413529 / 4.805227 (-3.391698) | 0.259395 / 6.500664 (-6.241270) | 0.083144 / 0.075469 (0.007675) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632240 / 1.841788 (-0.209548) | 18.687403 / 8.074308 (10.613095) | 20.134091 / 10.191392 (9.942699) | 0.238792 / 0.680424 (-0.441632) | 0.027645 / 0.534201 (-0.506556) | 0.518200 / 0.579283 (-0.061083) | 0.613535 / 0.434364 (0.179171) | 0.631414 / 0.540337 (0.091076) | 0.724658 / 1.386936 (-0.662278) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ac7caa5e195ad76c7e8ef98914813383f4f668cf \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006228 / 0.011353 (-0.005125) | 0.004517 / 0.011008 (-0.006492) | 0.097998 / 0.038508 (0.059490) | 0.027903 / 0.023109 (0.004793) | 0.309789 / 0.275898 (0.033891) | 0.332784 / 0.323480 (0.009304) | 0.004757 / 0.007986 (-0.003228) | 0.003348 / 0.004328 (-0.000981) | 0.075193 / 0.004250 (0.070942) | 0.037382 / 0.037052 (0.000330) | 0.306929 / 0.258489 (0.048440) | 0.347304 / 0.293841 (0.053463) | 0.030235 / 0.128546 (-0.098312) | 0.011516 / 0.075646 (-0.064131) | 0.322249 / 0.419271 (-0.097023) | 0.044125 / 0.043533 (0.000592) | 0.303874 / 0.255139 (0.048735) | 0.326808 / 0.283200 (0.043608) | 0.088137 / 0.141683 (-0.053546) | 1.521426 / 1.452155 (0.069272) | 1.573823 / 1.492716 (0.081107) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203204 / 0.018006 (0.185197) | 0.402247 / 0.000490 (0.401757) | 0.003146 / 0.000200 (0.002946) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022955 / 0.037411 (-0.014456) | 0.096059 / 0.014526 (0.081533) | 0.105552 / 0.176557 (-0.071004) | 0.167459 / 0.737135 (-0.569676) | 0.106723 / 0.296338 (-0.189615) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454626 / 0.215209 (0.239417) | 4.556346 / 2.077655 (2.478691) | 2.220349 / 1.504120 (0.716229) | 2.011820 / 1.541195 (0.470625) | 2.048149 / 1.468490 
(0.579659) | 0.697583 / 4.584777 (-3.887194) | 3.428394 / 3.745712 (-0.317318) | 1.863872 / 5.269862 (-3.405989) | 1.159691 / 4.565676 (-3.405985) | 0.082598 / 0.424275 (-0.341677) | 0.012202 / 0.007607 (0.004594) | 0.555617 / 0.226044 (0.329572) | 5.545481 / 2.268929 (3.276553) | 2.650850 / 55.444624 (-52.793775) | 2.305864 / 6.876477 (-4.570613) | 2.392252 / 2.142072 (0.250179) | 0.808512 / 4.805227 (-3.996716) | 0.152086 / 6.500664 (-6.348578) | 0.066440 / 0.075469 (-0.009029) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211789 / 1.841788 (-0.629999) | 13.515546 / 8.074308 (5.441238) | 13.859870 / 10.191392 (3.668478) | 0.150335 / 0.680424 (-0.530088) | 0.016578 / 0.534201 (-0.517623) | 0.379145 / 0.579283 (-0.200138) | 0.393735 / 0.434364 (-0.040628) | 0.460219 / 0.540337 (-0.080118) | 0.555896 / 1.386936 (-0.831040) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006402 / 0.011353 (-0.004950) | 0.004558 / 0.011008 (-0.006450) | 0.077332 / 0.038508 (0.038824) | 0.027955 / 0.023109 (0.004846) | 0.407877 / 0.275898 (0.131979) | 0.432552 / 0.323480 (0.109072) | 0.004850 / 0.007986 (-0.003135) | 0.003329 / 0.004328 (-0.000999) | 0.075767 / 0.004250 (0.071517) | 0.035940 / 0.037052 (-0.001112) | 0.419544 / 0.258489 (0.161055) | 0.454672 / 0.293841 (0.160831) | 0.030461 / 0.128546 (-0.098085) | 0.011536 / 0.075646 (-0.064111) | 0.085774 / 0.419271 (-0.333498) | 0.039408 / 0.043533 (-0.004125) | 0.389909 / 0.255139 (0.134770) | 0.403287 / 0.283200 (0.120088) | 0.088385 / 0.141683 (-0.053298) | 1.596840 / 1.452155 (0.144686) | 1.659296 / 1.492716 (0.166580) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216349 / 0.018006 (0.198342) | 0.394969 / 0.000490 (0.394479) | 0.000408 / 0.000200 (0.000208) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024346 / 0.037411 (-0.013066) | 0.099609 / 0.014526 (0.085084) | 0.106779 / 0.176557 (-0.069778) | 0.156889 / 0.737135 (-0.580247) | 0.110625 / 0.296338 (-0.185714) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443809 / 0.215209 (0.228600) | 4.450524 / 2.077655 (2.372870) | 2.151694 / 1.504120 (0.647574) | 1.952521 / 1.541195 (0.411326) | 1.963320 / 1.468490 (0.494830) | 0.709291 / 4.584777 (-3.875486) | 3.415708 / 3.745712 (-0.330005) | 1.850498 / 5.269862 (-3.419363) | 1.164355 / 4.565676 (-3.401321) | 0.084977 / 0.424275 (-0.339298) | 0.013284 / 0.007607 (0.005677) | 0.555103 / 0.226044 (0.329059) | 5.583587 / 2.268929 (3.314658) | 2.608754 / 55.444624 (-52.835870) | 2.264079 / 6.876477 (-4.612398) | 2.272455 / 2.142072 (0.130382) | 0.820849 / 4.805227 (-3.984379) | 0.155063 / 6.500664 (-6.345601) | 0.069709 / 0.075469 (-0.005760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293285 / 1.841788 (-0.548503) | 14.181867 / 8.074308 (6.107559) | 13.021280 / 10.191392 (2.829888) | 0.130101 / 0.680424 (-0.550323) | 0.016461 / 0.534201 (-0.517740) | 0.383651 / 0.579283 (-0.195632) | 0.387353 / 0.434364 (-0.047011) | 0.443351 / 0.540337 (-0.096986) | 0.529448 / 1.386936 (-0.857488) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#05145d50b5bb1b7b42b76516cd6492d4868c46ba \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007513 / 0.011353 (-0.003840) | 0.005328 / 0.011008 (-0.005680) | 0.096937 / 0.038508 (0.058429) | 0.036230 / 0.023109 (0.013121) | 0.325808 / 0.275898 (0.049910) | 0.363601 / 0.323480 (0.040121) | 0.006130 / 0.007986 (-0.001855) | 0.004352 / 0.004328 (0.000023) | 0.073543 / 0.004250 (0.069293) | 0.054114 / 0.037052 (0.017062) | 0.328952 / 0.258489 (0.070463) | 0.366943 / 0.293841 (0.073102) | 0.035768 / 0.128546 (-0.092778) | 0.012505 / 0.075646 (-0.063142) | 0.332260 / 0.419271 (-0.087012) | 0.066673 / 0.043533 (0.023140) | 0.323866 / 0.255139 (0.068727) | 0.341311 / 0.283200 (0.058112) | 0.129898 / 0.141683 (-0.011785) | 1.456890 / 1.452155 (0.004735) | 1.546933 / 1.492716 (0.054217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299236 / 0.018006 (0.281229) | 0.496134 / 0.000490 (0.495645) | 0.004233 / 0.000200 (0.004033) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028089 / 0.037411 (-0.009322) | 0.104723 / 0.014526 (0.090197) | 0.121032 / 0.176557 (-0.055525) | 0.179916 / 0.737135 (-0.557220) | 0.126628 / 0.296338 (-0.169711) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403497 / 0.215209 (0.188288) | 4.052481 / 2.077655 (1.974827) | 1.804419 / 1.504120 (0.300299) | 1.619833 / 1.541195 (0.078638) | 1.732438 / 1.468490 
(0.263948) | 0.702474 / 4.584777 (-3.882303) | 3.808973 / 3.745712 (0.063261) | 3.682764 / 5.269862 (-1.587098) | 1.919184 / 4.565676 (-2.646493) | 0.086638 / 0.424275 (-0.337637) | 0.012265 / 0.007607 (0.004658) | 0.501273 / 0.226044 (0.275229) | 5.010918 / 2.268929 (2.741989) | 2.278114 / 55.444624 (-53.166510) | 1.942266 / 6.876477 (-4.934211) | 2.101982 / 2.142072 (-0.040091) | 0.847622 / 4.805227 (-3.957606) | 0.172973 / 6.500664 (-6.327691) | 0.066884 / 0.075469 (-0.008586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187609 / 1.841788 (-0.654179) | 15.089485 / 8.074308 (7.015177) | 14.787398 / 10.191392 (4.596006) | 0.168254 / 0.680424 (-0.512170) | 0.018266 / 0.534201 (-0.515935) | 0.423204 / 0.579283 (-0.156079) | 0.435238 / 0.434364 (0.000874) | 0.512473 / 0.540337 (-0.027864) | 0.618091 / 1.386936 (-0.768845) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007249 / 0.011353 (-0.004104) | 0.005297 / 0.011008 (-0.005711) | 0.076428 / 0.038508 (0.037920) | 0.033565 / 0.023109 (0.010456) | 0.373756 / 0.275898 (0.097858) | 0.407405 / 0.323480 (0.083925) | 0.006100 / 0.007986 (-0.001886) | 0.006482 / 0.004328 (0.002153) | 0.075884 / 0.004250 (0.071633) | 0.055338 / 0.037052 (0.018286) | 0.378721 / 0.258489 (0.120232) | 0.427065 / 0.293841 (0.133224) | 0.036285 / 0.128546 (-0.092261) | 0.012460 / 0.075646 (-0.063186) | 0.087641 / 0.419271 (-0.331630) | 0.048199 / 0.043533 (0.004666) | 0.386785 / 0.255139 (0.131646) | 0.386702 / 0.283200 (0.103503) | 0.110087 / 0.141683 (-0.031596) | 1.511204 / 1.452155 (0.059050) | 1.585671 / 1.492716 (0.092954) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313558 / 0.018006 (0.295552) | 0.496991 / 0.000490 (0.496501) | 0.001492 / 0.000200 (0.001292) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031814 / 0.037411 (-0.005597) | 0.113486 / 0.014526 (0.098960) | 0.125208 / 0.176557 (-0.051348) | 0.174469 / 0.737135 (-0.562666) | 0.131095 / 0.296338 (-0.165244) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439282 / 0.215209 (0.224073) | 4.362286 / 2.077655 (2.284631) | 2.153271 / 1.504120 (0.649151) | 1.990482 / 1.541195 (0.449288) | 2.103322 / 1.468490 (0.634831) | 0.692522 / 4.584777 (-3.892254) | 3.861931 / 3.745712 (0.116219) | 3.686294 / 5.269862 (-1.583567) | 1.734525 / 4.565676 (-2.831152) | 0.085057 / 0.424275 (-0.339218) | 0.012116 / 0.007607 (0.004509) | 0.547996 / 0.226044 (0.321952) | 5.513835 / 2.268929 (3.244906) | 2.723829 / 55.444624 (-52.720795) | 2.404715 / 6.876477 (-4.471761) | 2.514768 / 2.142072 (0.372696) | 0.834972 / 4.805227 (-3.970255) | 0.168261 / 6.500664 (-6.332403) | 0.066464 / 0.075469 (-0.009005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259923 / 1.841788 (-0.581865) | 15.646277 / 8.074308 (7.571969) | 13.097598 / 10.191392 (2.906206) | 0.187991 / 0.680424 (-0.492433) | 0.017358 / 0.534201 (-0.516843) | 0.427979 / 0.579283 (-0.151304) | 0.425747 / 0.434364 (-0.008617) | 0.501907 / 0.540337 (-0.038431) | 0.595106 / 1.386936 (-0.791830) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db56f7f0d2f0b99af4da17d388c205152504c7d9 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009378 / 0.011353 (-0.001975) | 0.006434 / 0.011008 (-0.004574) | 0.120603 / 0.038508 (0.082095) | 0.042929 / 0.023109 (0.019820) | 0.366853 / 0.275898 (0.090955) | 0.436795 / 0.323480 (0.113315) | 0.007730 / 0.007986 (-0.000256) | 0.004842 / 0.004328 (0.000513) | 0.091058 / 0.004250 (0.086808) | 0.058256 / 0.037052 (0.021203) | 0.378692 / 0.258489 (0.120203) | 0.467384 / 0.293841 (0.173543) | 0.042948 / 0.128546 (-0.085598) | 0.015172 / 0.075646 (-0.060475) | 0.409225 / 0.419271 (-0.010046) | 0.083672 / 0.043533 (0.040140) | 0.390088 / 0.255139 (0.134949) | 0.406965 / 0.283200 (0.123765) | 0.142132 / 0.141683 (0.000449) | 1.765737 / 1.452155 (0.313582) | 1.895419 / 1.492716 (0.402703) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244052 / 0.018006 (0.226046) | 0.553383 / 0.000490 (0.552893) | 0.006798 / 0.000200 (0.006598) | 0.000227 / 0.000054 (0.000173) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032032 / 0.037411 (-0.005380) | 0.129990 / 0.014526 (0.115464) | 0.140338 / 0.176557 (-0.036219) | 0.212155 / 0.737135 (-0.524980) | 0.147395 / 0.296338 (-0.148943) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478760 / 0.215209 (0.263551) | 4.751335 / 2.077655 (2.673680) | 2.164755 / 1.504120 (0.660635) | 1.944288 / 1.541195 (0.403094) | 2.077657 / 1.468490 
(0.609167) | 0.818519 / 4.584777 (-3.766258) | 4.689013 / 3.745712 (0.943301) | 2.484079 / 5.269862 (-2.785782) | 1.788632 / 4.565676 (-2.777044) | 0.100484 / 0.424275 (-0.323791) | 0.013838 / 0.007607 (0.006231) | 0.589650 / 0.226044 (0.363605) | 5.859461 / 2.268929 (3.590533) | 2.670025 / 55.444624 (-52.774599) | 2.688709 / 6.876477 (-4.187768) | 2.408060 / 2.142072 (0.265988) | 0.972107 / 4.805227 (-3.833120) | 0.194425 / 6.500664 (-6.306239) | 0.076077 / 0.075469 (0.000608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430150 / 1.841788 (-0.411638) | 17.710507 / 8.074308 (9.636199) | 16.210789 / 10.191392 (6.019397) | 0.163940 / 0.680424 (-0.516484) | 0.020295 / 0.534201 (-0.513906) | 0.472596 / 0.579283 (-0.106687) | 0.483107 / 0.434364 (0.048743) | 0.585269 / 0.540337 (0.044931) | 0.705526 / 1.386936 (-0.681410) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008864 / 0.011353 (-0.002489) | 0.006095 / 0.011008 (-0.004913) | 0.088702 / 0.038508 (0.050194) | 0.041596 / 0.023109 (0.018486) | 0.453515 / 0.275898 (0.177617) | 0.476217 / 0.323480 (0.152737) | 0.007574 / 0.007986 (-0.000412) | 0.004727 / 0.004328 (0.000398) | 0.087271 / 0.004250 (0.083021) | 0.059631 / 0.037052 (0.022578) | 0.449379 / 0.258489 (0.190890) | 0.494436 / 0.293841 (0.200595) | 0.043448 / 0.128546 (-0.085098) | 0.014580 / 0.075646 (-0.061067) | 0.103836 / 0.419271 (-0.315435) | 0.057537 / 0.043533 (0.014004) | 0.449359 / 0.255139 (0.194220) | 0.447577 / 0.283200 (0.164377) | 0.123600 / 0.141683 (-0.018083) | 1.748448 / 1.452155 (0.296294) | 1.902116 / 1.492716 (0.409399) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237214 / 0.018006 (0.219207) | 0.497648 / 0.000490 (0.497158) | 0.003519 / 0.000200 (0.003319) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034477 / 0.037411 (-0.002934) | 0.132627 / 0.014526 (0.118101) | 0.139721 / 0.176557 (-0.036836) | 0.195705 / 0.737135 (-0.541430) | 0.150762 / 0.296338 (-0.145577) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.521306 / 0.215209 (0.306097) | 5.184982 / 2.077655 (3.107328) | 2.503979 / 1.504120 (0.999859) | 2.301054 / 1.541195 (0.759860) | 2.352713 / 1.468490 (0.884222) | 0.819804 / 4.584777 (-3.764973) | 4.584011 / 3.745712 (0.838299) | 2.497311 / 5.269862 (-2.772550) | 1.561262 / 4.565676 (-3.004414) | 0.101814 / 0.424275 (-0.322461) | 0.014078 / 0.007607 (0.006471) | 0.666564 / 0.226044 (0.440520) | 6.616379 / 2.268929 (4.347450) | 3.263892 / 55.444624 (-52.180732) | 2.891774 / 6.876477 (-3.984703) | 2.945260 / 2.142072 (0.803188) | 1.014379 / 4.805227 (-3.790848) | 0.201762 / 6.500664 (-6.298902) | 0.078012 / 0.075469 (0.002543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567808 / 1.841788 (-0.273980) | 19.096552 / 8.074308 (11.022244) | 15.522285 / 10.191392 (5.330893) | 0.226568 / 0.680424 (-0.453856) | 0.021078 / 0.534201 (-0.513123) | 0.501686 / 0.579283 (-0.077597) | 0.517575 / 0.434364 (0.083211) | 0.589685 / 0.540337 (0.049348) | 0.705053 / 1.386936 (-0.681883) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db56f7f0d2f0b99af4da17d388c205152504c7d9 \"CML watermark\")\n"
] | 2023-05-11T17:29:57 | 2023-05-15T07:39:13 | 2023-05-12T15:14:48 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5845",
"html_url": "https://github.com/huggingface/datasets/pull/5845",
"diff_url": "https://github.com/huggingface/datasets/pull/5845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5845.patch",
"merged_at": "2023-05-12T15:14:48"
} | Adds the `date_format` param introduced in Pandas 2.0 to the CSV reader and improves its type hints. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5845/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5844/comments | https://api.github.com/repos/huggingface/datasets/issues/5844/events | https://github.com/huggingface/datasets/issues/5844 | 1,705,907,812 | I_kwDODunzps5lrhZk | 5,844 | TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to ... | {
"login": "chen-coding",
"id": 54010030,
"node_id": "MDQ6VXNlcjU0MDEwMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/54010030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chen-coding",
"html_url": "https://github.com/chen-coding",
"followers_url": "https://api.github.com/users/chen-coding/followers",
"following_url": "https://api.github.com/users/chen-coding/following{/other_user}",
"gists_url": "https://api.github.com/users/chen-coding/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chen-coding/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chen-coding/subscriptions",
"organizations_url": "https://api.github.com/users/chen-coding/orgs",
"repos_url": "https://api.github.com/users/chen-coding/repos",
"events_url": "https://api.github.com/users/chen-coding/events{/privacy}",
"received_events_url": "https://api.github.com/users/chen-coding/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-05-11T14:15:01 | 2023-05-11T14:15:01 | null | NONE | null | null | null | ### Describe the bug
TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to {'answer': {'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
When I use `load_dataset()` I get the error:
```python
from datasets import load_dataset

datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'}
raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
```
Detailed error information is as follows:
```
Traceback (most recent call last):
File "C:/Users/CHENJIALEI/Desktop/NLPCC2023/NLPCC23_SciMRC-main/test2.py", line 9, in <module>
raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\load.py", line 1747, in load_dataset
builder_instance.download_and_prepare(
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 814, in download_and_prepare
self._download_and_prepare(
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 905, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 1521, in _prepare_split
writer.write_table(table)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\arrow_writer.py", line 540, in write_table
pa_table = table_cast(pa_table, self._schema)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2069, in table_cast
return cast_table_to_schema(table, schema)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature
casted_values = _c(array.values, feature[0])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature
casted_values = _c(array.values, feature[0])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1913, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
```
It is successful when I load the data separately
`raw_data = load_dataset("json", data_files="./data/train.json", cache_dir="./cache")`
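A hedged workaround sketch (my assumption, not from the report): this cast error typically means the JSON files lead to different inferred schemas, so pinning an explicit `Features` schema keeps type inference from diverging across splits. The field names below are taken from the error message and may need extending with the remaining columns of the real files:
```python
from datasets import Features, Sequence, Value, load_dataset

answer_struct = {
    "unanswerable": Value("bool"),
    "answerType": Value("string"),
    "free_form_answer": Value("string"),
    "evidence": Sequence(Value("string")),
    "evidenceAnnotate": Sequence(Value("string")),
    "highlighted_evidence": Sequence(Value("string")),
}
# Assumption: "answer" is the only column; add the other columns of the real files here.
features = Features({"answer": answer_struct})

datafiles = {"train": "./data/train.json", "validation": "./data/validation.json", "test": "./data/test.json"}
raw_data = load_dataset("json", data_files=datafiles, features=features, cache_dir="./cache")
```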
### Steps to reproduce the bug
1. `from datasets import load_dataset`
2. `datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'}`
3. `raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")`
### Expected behavior
Successfully load dataset
### Environment info
datasets == 2.6.1
pyarrow == 8.0.0
python == 3.8
platform:windows11 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5844/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5841/comments | https://api.github.com/repos/huggingface/datasets/issues/5841/events | https://github.com/huggingface/datasets/issues/5841 | 1,705,286,639 | I_kwDODunzps5lpJvv | 5,841 | Absurdly slow on iteration | {
"login": "fecet",
"id": 41792945,
"node_id": "MDQ6VXNlcjQxNzkyOTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/41792945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fecet",
"html_url": "https://github.com/fecet",
"followers_url": "https://api.github.com/users/fecet/followers",
"following_url": "https://api.github.com/users/fecet/following{/other_user}",
"gists_url": "https://api.github.com/users/fecet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fecet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fecet/subscriptions",
"organizations_url": "https://api.github.com/users/fecet/orgs",
"repos_url": "https://api.github.com/users/fecet/repos",
"events_url": "https://api.github.com/users/fecet/events{/privacy}",
"received_events_url": "https://api.github.com/users/fecet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! You can try to use the [Image](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Image) type which [decodes images on-the-fly](https://huggingface.co/docs/datasets/v2.12.0/en/about_dataset_features#image-feature) into pytorch tensors :)\r\n\r\n```python\r\nds = Dataset.from_dict({\"tensor\":a}).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 5.04 s, sys: 96.5 ms, total: 5.14 s\r\n# Wall time: 5.14 s\r\n# 10000\r\n```\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Image()})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 1.86 s, sys: 49 ms, total: 1.91 s\r\n# Wall time: 1.9 s\r\n# 10000\r\n```\r\n\r\n-> Speed x2.7\r\n\r\nAnd if you want to keep using arrays of integers, consider using the [Array2D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array2D) or [Array3D](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Array3D) types which are even faster (since it doesn't decode images):\r\n\r\n```python\r\nfeatures = Features({\"tensor\": Array2D(shape=(100, 224), dtype=\"float32\")})\r\nds = Dataset.from_dict({\"tensor\":a}, features=features).with_format(\"torch\")\r\n%time sum(1 for _ in ds)\r\n# CPU times: user 828 ms, sys: 68.4 ms, total: 896 ms\r\n# Wall time: 897 ms\r\n# 10000\r\n```\r\n\r\n-> Speed x5.7\r\n\r\nBatching also speeds up a lot\r\n\r\n```python\r\nfrom torch.utils.data import DataLoader\r\ndl = DataLoader(ds, batch_size=100)\r\n%time sum(1 for _ in dl)\r\n# CPU times: user 564 ms, sys: 83.5 ms, total: 648 ms\r\n# Wall time: 579 ms\r\n# 100\r\n```\r\n\r\n-> Speed x8.9\r\n\r\n```python\r\n%time sum(1 for _ in ds.iter(batch_size=100))\r\n# CPU times: user 119 ms, sys: 96.8 ms, total: 215 ms\r\n# Wall time: 117 ms\r\n# 100\r\n```\r\n\r\n-> Speed x46",
"Anyway, regarding the speed difference between numpy and pytorch, I think the issue is that we first convert numpy sub-arrays to pytorch and then consolidate into one tensor, while we should to the opposite. Indeed converting a numpy array to pytorch has a fix cost that seems to cause a slow down. The current pipeline is\r\n\r\n```\r\narrow -> nested numpy arrays -> lists of torch tensors -> one torch tensor\r\n```\r\n\r\nand we should do\r\n\r\n```\r\narrow -> nested numpy arrays -> one numpy array -> one torch tensor\r\n```",
"I have a similar issue: iterating over a dataset takes 5s without applying any transform, but takes ~30s after applying a transform.\r\nHere is the minimum code to reproduce the problem\r\n\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features\r\nfrom torch.utils.data import DataLoader\r\nfrom tqdm import tqdm\r\nimport torchvision \r\nfrom torchvision.transforms import ToTensor, Normalize\r\n\r\n\r\n#################################\r\n# Without transform\r\n#################################\r\n \r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data, no transform\"):\r\n pass\r\n\r\n\r\n#################################\r\n# With transform\r\n#################################\r\n\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(), \r\n Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),] \r\n)\r\n \r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\n\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"img\", \"fine_label\"])\r\n\r\n\r\ntrain_loader= DataLoader(\r\n train_dataset,\r\n batch_size=100,\r\n pin_memory=False,\r\n shuffle=True,\r\n num_workers=8,\r\n)\r\n\r\n\r\nfor batch in tqdm(train_loader, desc=\"Loading data after transform\"):\r\n pass \r\n```\r\n\r\nI have also tried converting the Image column to an Array3D\r\n```python\r\nimg_shape = train_dataset[0][\"img\"].shape\r\n\r\nfeatures = train_dataset.features.copy()\r\nfeatures[\"x\"] = Array3D(shape=img_shape, dtype=\"float32\")\r\n\r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"x\": np.array(x[\"img\"], dtype=np.uint8)},\r\n features=features,\r\n)\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\ntrain_dataset.set_format(type=\"numpy\", columns=[\"x\", \"fine_label\"])\r\n```\r\nbut to no avail. Any clue?",
"Thanks! I convert my dataset feature to Array3D and this speed became awesome!"
] | 2023-05-11T08:04:09 | 2023-05-15T15:38:13 | 2023-05-15T15:38:13 | NONE | null | null | null | ### Describe the bug
I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:
```python
import torch
from datasets import Dataset
from tqdm import tqdm

a = torch.randn(100, 224)
a = torch.stack([a] * 10000)
a.shape
# %%
ds = Dataset.from_dict({"tensor": a})
for i in tqdm(ds.with_format("numpy")):
    pass
for i in tqdm(ds.with_format("torch")):
    pass
```
I noticed that the dataset in numpy format performs significantly faster than the one in torch format. My hypothesis is that the dataset undergoes a transformation process of torch->python->numpy(torch) in the background, which might be causing the slowdown. Is there any way to expedite the process by bypassing such transformations?
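A minimal sketch of the workaround this hypothesis implies (an assumption on my part, not a confirmed fix): stay in numpy format and convert each example once with `torch.from_numpy`, which wraps the array without copying:
```python
import torch
from tqdm import tqdm

# `ds` is the dataset created in the snippet above
for i in tqdm(ds.with_format("numpy")):
    t = torch.from_numpy(i["tensor"])  # wraps the numpy array without copying
```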
Furthermore, if I increase the size of `a` to an image shape, like:
```python
a = torch.randn(3, 224, 224)
```
the iteration speed becomes absurdly slow, around 100 iterations per second, whereas the speed with numpy format is approximately 250 iterations per second. This level of speed would be unacceptable for large image datasets, as it could take several hours just to iterate through a single epoch.
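For image-sized tensors, batched access may amortize the per-example conversion cost; a minimal sketch, assuming a recent `datasets` release where `Dataset.iter` is available:
```python
# batch_size=100 is an arbitrary illustrative choice
for batch in ds.with_format("torch").iter(batch_size=100):
    pass  # each batch["tensor"] arrives as one consolidated tensor
```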
### Steps to reproduce the bug
```python
import torch
from datasets import Dataset
from tqdm import tqdm

a = torch.randn(100, 224)
a = torch.stack([a] * 10000)
a.shape
# %%
ds = Dataset.from_dict({"tensor": a})
for i in tqdm(ds.with_format("numpy")):
    pass
for i in tqdm(ds.with_format("torch")):
    pass
```
### Expected behavior
Iteration in the torch format should be about as fast as in the numpy format.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5841/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5841/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5840/comments | https://api.github.com/repos/huggingface/datasets/issues/5840/events | https://github.com/huggingface/datasets/issues/5840 | 1,705,212,085 | I_kwDODunzps5lo3i1 | 5,840 | load model error. | {
"login": "LanShanPi",
"id": 58167546,
"node_id": "MDQ6VXNlcjU4MTY3NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/58167546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LanShanPi",
"html_url": "https://github.com/LanShanPi",
"followers_url": "https://api.github.com/users/LanShanPi/followers",
"following_url": "https://api.github.com/users/LanShanPi/following{/other_user}",
"gists_url": "https://api.github.com/users/LanShanPi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LanShanPi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LanShanPi/subscriptions",
"organizations_url": "https://api.github.com/users/LanShanPi/orgs",
"repos_url": "https://api.github.com/users/LanShanPi/repos",
"events_url": "https://api.github.com/users/LanShanPi/events{/privacy}",
"received_events_url": "https://api.github.com/users/LanShanPi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Please report this in the `transformers` repo, as it's not related to `datasets`"
] | 2023-05-11T07:12:38 | 2023-05-12T13:44:07 | 2023-05-12T13:44:06 | NONE | null | null | null | ### Describe the bug
I trained a model with DeepSpeed; when I load the final model I get the following error:
```
OSError: Can't load tokenizer for '/XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/home/fm001/hzl/Project/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor' is the correct path to a directory containing all relevant files for a BloomTokenizerFast tokenizer.
```
My load command is: `python chat.py --path /XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor/`
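A hedged guess at the cause (an assumption, not verified): the DeepSpeed-Chat training step may save only the model weights into `actor`, so the tokenizer files have to be written there separately before `from_pretrained` can resolve the path. A minimal sketch:
```python
from transformers import AutoTokenizer

# "bigscience/bloom-1b1" is a hypothetical stand-in for whichever BLOOM
# checkpoint the actor was fine-tuned from.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b1")
tokenizer.save_pretrained(
    "/XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor"
)
```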
### Steps to reproduce the bug
...
### Expected behavior
...
### Environment info
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5840/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5842/comments | https://api.github.com/repos/huggingface/datasets/issues/5842/events | https://github.com/huggingface/datasets/issues/5842 | 1,705,510,602 | I_kwDODunzps5lqAbK | 5,842 | Remove columns in iterable dataset | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Transferring this issue as it's related to the 🤗 Datasets library ",
"Hi @surya-narayanan! Could you provide some code snippet?",
"This method has been recently added to the `IterableDataset`, so you need to update the `datasets`' installation (`pip install -U datasets`) to use it."
] | 2023-05-11T03:48:46 | 2023-06-21T16:36:42 | 2023-06-21T16:36:41 | NONE | null | null | null | ### Feature request
Right now, `remove_columns()` raises a `NotImplementedError` for iterable-style datasets (see the sketch below).
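As the comments point out, the method has since been implemented for `IterableDataset`; a minimal sketch assuming an up-to-date `datasets` installation (`rotten_tomatoes` is only an illustrative dataset):
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
ds = ds.remove_columns("label")  # works on IterableDataset in recent releases
```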
### Motivation
It would be great to have the same functionality irrespective of whether one is using an iterable or a map-style dataset.
### Your contribution
hope and courage. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5842/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5843/comments | https://api.github.com/repos/huggingface/datasets/issues/5843/events | https://github.com/huggingface/datasets/issues/5843 | 1,705,514,551 | I_kwDODunzps5lqBY3 | 5,843 | Can't add iterable datasets to a Dataset Dict. | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Transferring as this is relating to the 🤗 Datasets library",
"You need to use `IterableDatasetDict` instead of `DatasetDict` for iterable datasets."
] | 2023-05-11T02:09:29 | 2023-05-25T04:51:59 | 2023-05-25T04:51:59 | NONE | null | null | null | ### System Info
standard env
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Get the following error:
```
TypeError: Values in `DatasetDict` should be of type `Dataset` but got type '<class 'datasets.iterable_dataset.IterableDataset'>'
```
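A minimal sketch of the fix suggested in the comments, assuming streaming splits (the dataset name is purely illustrative):
```python
from datasets import IterableDatasetDict, load_dataset

train = load_dataset("c4", "en", split="train", streaming=True)
ds = IterableDatasetDict({"train": train})  # accepts IterableDataset values
```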
### Expected behavior
Should be able to add iterable datasets to a `DatasetDict`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5843/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5839/comments | https://api.github.com/repos/huggingface/datasets/issues/5839/events | https://github.com/huggingface/datasets/issues/5839 | 1,704,554,718 | I_kwDODunzps5lmXDe | 5,839 | Make models/functions optimized with `torch.compile` hashable | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-05-10T20:02:08 | 2023-05-10T20:02:08 | null | CONTRIBUTOR | null | null | null | As reported in https://github.com/huggingface/datasets/issues/5819, hashing functions/transforms that reference a model, or a function, optimized with `torch.compile` currently fails due to them not being picklable (the concrete error can be found in the linked issue).
The solutions to consider:
1. hashing/pickling the original, uncompiled version of a compiled model/function (attributes `_orig_mod`/`_torchdynamo_orig_callable`) (less precise than the 2nd option as it ignores the other params of `torch.compile`; see the sketch after this list)
2. wait for https://github.com/pytorch/pytorch/issues/101107 to be resolved
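
A minimal sketch of option 1, assuming the private attribute names stay as they are today:
```python
import torch

model = torch.compile(torch.nn.Linear(4, 4))
original_model = getattr(model, "_orig_mod", model)  # uncompiled, picklable module

@torch.compile
def add_one(x):
    return x + 1

original_fn = getattr(add_one, "_torchdynamo_orig_callable", add_one)
```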
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5839/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5838/comments | https://api.github.com/repos/huggingface/datasets/issues/5838/events | https://github.com/huggingface/datasets/issues/5838 | 1,703,210,848 | I_kwDODunzps5lhO9g | 5,838 | Streaming support for `load_from_disk` | {
"login": "Nilabhra",
"id": 5437792,
"node_id": "MDQ6VXNlcjU0Mzc3OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nilabhra",
"html_url": "https://github.com/Nilabhra",
"followers_url": "https://api.github.com/users/Nilabhra/followers",
"following_url": "https://api.github.com/users/Nilabhra/following{/other_user}",
"gists_url": "https://api.github.com/users/Nilabhra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nilabhra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nilabhra/subscriptions",
"organizations_url": "https://api.github.com/users/Nilabhra/orgs",
"repos_url": "https://api.github.com/users/Nilabhra/repos",
"events_url": "https://api.github.com/users/Nilabhra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nilabhra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"As the name says, `load_from_disk` load the data from your disk. If the data is hosted on S3, it is first downloaded locally and then loaded from your disk.\r\n\r\nThere is a discussion on streaming data from S3 here though: #5281 ",
"@lhoestq \r\nThanks for your comment. I have checked out the discussion before and attempted at replicating the mentioned changes in the main branch (#5580). What I found was that if a dataset is saved using `save_to_disk`, it cannot be read by `load_dataset`. The error message asks me to to use `load_from_disk` instead. What would be the correct way of saving the data in this scenario?",
"Using `push_to_hub` you can save the dataset on the HF Hub as parquet files, and reload it / stream it using `load_dataset` :)\r\n\r\nIf you want to save your dataset somewhere else you can use `.to_parquet` to get a parquet file. If your dataset is big it's usually recommended to shard it into multi parquet files (around 1GB each).",
"@lhoestq \r\nThanks for the explanation. Appreciate it. I'll try this out.",
"@lhoestq\r\nI tried the method you mentioned. This the current scenario I'm facing:\r\n\r\n- The parquet file can be read from disk and streaming can be enabled.\r\n- The parquet file can be read from `s3` (local MinIO).\r\n- When `streaming=True` is enabled for `s3`, I get the error mentioned below:\r\n\r\n```\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```\r\n\r\nDoes this mean there is a bug in the main branch?",
"Streaming from S3 is still experimental, there might be a few bugs unfortunately.\r\n\r\nCan you share the full stack trace ?",
"@lhoestq \r\nSure, here you go:\r\n\r\n```python\r\nTypeError Traceback (most recent call last)\r\nCell In[8], line 1\r\n----> 1 dataset = load_dataset(\"parquet\", data_files=[\"s3://<bucket name>/<data folder>/data-parquet\"], storage_options=fs.storage_options, streaming=True)\r\n\r\nFile ~/.../datasets/src/datasets/load.py:1790, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile ~/.../datasets/src/datasets/builder.py:1264, in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1257 dl_manager = StreamingDownloadManager(\r\n 1258 base_path=base_path or self.base_path,\r\n 1259 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1260 dataset_name=self.name,\r\n 1261 data_dir=self.config.data_dir,\r\n 1262 )\r\n 1263 self._check_manual_download(dl_manager)\r\n-> 1264 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1265 # By default, return all splits\r\n 1266 if split is None:\r\n\r\nFile ~/.../datasets/src/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager)\r\n 32 if not self.config.data_files:\r\n 33 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 34 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 35 if isinstance(data_files, (str, list, tuple)):\r\n 36 files = data_files\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1087, in StreamingDownloadManager.download_and_extract(self, url_or_urls)\r\n 1069 def download_and_extract(self, url_or_urls):\r\n 1070 \"\"\"Prepare given `url_or_urls` for streaming (add extraction protocol).\r\n 1071 \r\n 1072 This is the lazy version of `DownloadManager.download_and_extract` for streaming.\r\n (...)\r\n 1085 url(s): (`str` or `list` or `dict`), URL(s) to stream data from matching the given input `url_or_urls`.\r\n 1086 \"\"\"\r\n-> 1087 return self.extract(self.download(url_or_urls))\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1039, in StreamingDownloadManager.extract(self, url_or_urls)\r\n 1020 def extract(self, url_or_urls):\r\n 1021 \"\"\"Add extraction protocol for given url(s) for streaming.\r\n 1022 \r\n 1023 This is the lazy version of `DownloadManager.extract` for streaming.\r\n (...)\r\n 1037 ```\r\n 1038 \"\"\"\r\n-> 1039 urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)\r\n 1040 return urlpaths\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:443, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 443 mapped = [\r\n 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, 
disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:444, in <listcomp>(.0)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 443 mapped = [\r\n--> 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in _single_map_nested(args)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in <listcomp>(.0)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:346, in _single_map_nested(args)\r\n 344 # Singleton first to spare some computation\r\n 345 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 346 return function(data_struct)\r\n 348 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 349 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1044, in StreamingDownloadManager._extract(self, urlpath)\r\n 1042 def _extract(self, urlpath: str) -> str:\r\n 1043 urlpath = str(urlpath)\r\n-> 1044 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n 1045 # get inner file: zip://train-00000.json.gz::https://foo.bar/data.zip -> zip://train-00000.json.gz\r\n 1046 path = urlpath.split(\"::\")[0]\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:433, in _get_extraction_protocol(urlpath, use_auth_token)\r\n 431 else:\r\n 432 urlpath, kwargs = urlpath, {}\r\n--> 433 with fsspec.open(urlpath, **kwargs) as f:\r\n 434 return _get_extraction_protocol_with_magic_number(f)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/core.py:102, in OpenFile.__enter__(self)\r\n 99 def __enter__(self):\r\n 100 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 102 f = self.fs.open(self.path, mode=mode)\r\n 104 self.fobjects = [f]\r\n 106 if self.compression is not None:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1199, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1197 else:\r\n 1198 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1199 f = self._open(\r\n 1200 path,\r\n 1201 mode=mode,\r\n 1202 block_size=block_size,\r\n 1203 autocommit=ac,\r\n 1204 cache_options=cache_options,\r\n 1205 **kwargs,\r\n 1206 )\r\n 1207 if compression is not None:\r\n 1208 from fsspec.compression import compr\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:659, in S3FileSystem._open(self, path, mode, block_size, acl, version_id, fill_cache, cache_type, autocommit, requester_pays, cache_options, **kwargs)\r\n 656 if cache_type is None:\r\n 657 
cache_type = self.default_cache_type\r\n--> 659 return S3File(\r\n 660 self,\r\n 661 path,\r\n 662 mode,\r\n 663 block_size=block_size,\r\n 664 acl=acl,\r\n 665 version_id=version_id,\r\n 666 fill_cache=fill_cache,\r\n 667 s3_additional_kwargs=kw,\r\n 668 cache_type=cache_type,\r\n 669 autocommit=autocommit,\r\n 670 requester_pays=requester_pays,\r\n 671 cache_options=cache_options,\r\n 672 )\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:2043, in S3File.__init__(self, s3, path, mode, block_size, acl, version_id, fill_cache, s3_additional_kwargs, autocommit, cache_type, requester_pays, cache_options)\r\n 2041 self.details = s3.info(path)\r\n 2042 self.version_id = self.details.get(\"VersionId\")\r\n-> 2043 super().__init__(\r\n 2044 s3,\r\n 2045 path,\r\n 2046 mode,\r\n 2047 block_size,\r\n 2048 autocommit=autocommit,\r\n 2049 cache_type=cache_type,\r\n 2050 cache_options=cache_options,\r\n 2051 )\r\n 2052 self.s3 = self.fs # compatibility\r\n 2054 # when not using autocommit we want to have transactional state to manage\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1555, in AbstractBufferedFile.__init__(self, fs, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 1553 self.size = size\r\n 1554 else:\r\n-> 1555 self.size = self.details[\"size\"]\r\n 1556 self.cache = caches[cache_type](\r\n 1557 self.blocksize, self._fetch_range, self.size, **cache_options\r\n 1558 )\r\n 1559 else:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1568, in AbstractBufferedFile.details(self)\r\n 1565 @property\r\n 1566 def details(self):\r\n 1567 if self._details is None:\r\n-> 1568 self._details = self.fs.info(self.path)\r\n 1569 return self._details\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:115, in sync_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def wrapper(*args, **kwargs):\r\n 114 self = obj or args[0]\r\n--> 115 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:100, in sync(loop, func, timeout, *args, **kwargs)\r\n 98 raise FSTimeoutError from return_result\r\n 99 elif isinstance(return_result, BaseException):\r\n--> 100 raise return_result\r\n 101 else:\r\n 102 return return_result\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:55, in _runner(event, coro, result, timeout)\r\n 53 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 54 try:\r\n---> 55 result[0] = await coro\r\n 56 except Exception as ex:\r\n 57 result[0] = ex\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:1248, in S3FileSystem._info(self, path, bucket, key, refresh, version_id)\r\n 1246 if key:\r\n 1247 try:\r\n-> 1248 out = await self._call_s3(\r\n 1249 \"head_object\",\r\n 1250 self.kwargs,\r\n 1251 Bucket=bucket,\r\n 1252 Key=key,\r\n 1253 **version_id_kw(version_id),\r\n 1254 **self.req_kw,\r\n 1255 )\r\n 1256 return {\r\n 1257 \"ETag\": out.get(\"ETag\", \"\"),\r\n 1258 \"LastModified\": out[\"LastModified\"],\r\n (...)\r\n 1264 \"ContentType\": out.get(\"ContentType\"),\r\n 1265 }\r\n 1266 except FileNotFoundError:\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:341, in S3FileSystem._call_s3(self, method, *akwarglist, **kwargs)\r\n 340 async def _call_s3(self, method, *akwarglist, **kwargs):\r\n--> 341 await self.set_session()\r\n 342 s3 = await self.get_s3(kwargs.get(\"Bucket\"))\r\n 343 method = getattr(s3, method)\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, 
refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```",
"Is `\"data-parquet\"` a file ? In `data_files` you should pass the paths to the parquet files (not to a directory). Glob patterns are not supported yet for S3 URLs.\r\n\r\nThe bug seems to happen because your provided data file has no extension. Because of that it tries to infer it from the file content, but fails because `_get_extraction_protocol` doesn't support S3 URLs yet.\r\n\r\n",
"@lhoestq \r\nThank you for your answer. Saving the file with `.parquet` extension solved the issue! This is really great! Really appreciate all the help! \r\n\r\nLet me know if I should close the issue or feel free to close it if you want.",
"Cool ! I'm glad it worked out :)\r\n\r\nSure feel free to close the issue, since the original question about streaming with load_from_disk has been answered anyway"
] | 2023-05-10T06:25:22 | 2023-05-12T09:37:45 | 2023-05-12T09:37:45 | NONE | null | null | null | ### Feature request
Support for streaming datasets stored in object stores in `load_from_disk`.
### Motivation
The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. Datasets stored in object stores are often very large, so being able to stream the data directly from the buckets becomes essential.
### Your contribution
I'd be happy to contribute this feature if I could get some guidance on how to do so. (A usage sketch for streaming from S3 follows this record.) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5838/timeline | null | completed | false |
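For context on the feature request above: while `load_from_disk` could not stream from object stores at the time, the packaged builders could. A minimal sketch, assuming `s3fs` is installed, a `datasets` version whose `load_dataset` accepts `storage_options`, and hypothetical bucket and key names:

```python
from datasets import load_dataset

# Hypothetical credentials; the default AWS credential chain may also work.
storage_options = {"key": "<aws-access-key>", "secret": "<aws-secret-key>"}

ds = load_dataset(
    "parquet",                                               # packaged builder, no dataset script needed
    data_files=["s3://my-bucket/data/train-00000.parquet"],  # explicit paths: S3 glob patterns were unsupported
    streaming=True,                                          # iterate without downloading everything first
    storage_options=storage_options,
)

for example in ds["train"]:
    print(example)
    break
```

Passing explicit, fully-extensioned file paths sidesteps the two pitfalls raised in the thread: S3 glob patterns were unsupported, and extension-less keys broke the compression-protocol inference in `_get_extraction_protocol`.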
https://api.github.com/repos/huggingface/datasets/issues/5837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5837/comments | https://api.github.com/repos/huggingface/datasets/issues/5837/events | https://github.com/huggingface/datasets/issues/5837 | 1,703,019,816 | I_kwDODunzps5lggUo | 5,837 | Use DeepSpeed load myself " .csv " dataset. | {
"login": "LanShanPi",
"id": 58167546,
"node_id": "MDQ6VXNlcjU4MTY3NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/58167546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LanShanPi",
"html_url": "https://github.com/LanShanPi",
"followers_url": "https://api.github.com/users/LanShanPi/followers",
"following_url": "https://api.github.com/users/LanShanPi/following{/other_user}",
"gists_url": "https://api.github.com/users/LanShanPi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LanShanPi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LanShanPi/subscriptions",
"organizations_url": "https://api.github.com/users/LanShanPi/orgs",
"repos_url": "https://api.github.com/users/LanShanPi/repos",
"events_url": "https://api.github.com/users/LanShanPi/events{/privacy}",
"received_events_url": "https://api.github.com/users/LanShanPi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! Doing `load_dataset(\"path/to/data.csv\")` is not supported yet, but you can do\r\n\r\n```python\r\nds = load_dataset(\"csv\", data_files=[\"path/to/data.csv\"])\r\n```",
"@lhoestq thank you.",
"The other question: \r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1498, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1127, in dataset_module_factory\r\n return PackagedDatasetModuleFactory(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 708, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 796, in from_local_or_remote\r\n DataFilesList.from_local_or_remote(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 362, in resolve_patterns_locally_or_by_urls\r\n for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/data_files.py\", line 306, in _resolve_single_pattern_locally\r\n raise FileNotFoundError(error_msg)\r\nFileNotFoundError: Unable to find '/home/fm001/hzl/Data/qa/' at /\r\n>>> mydata = load_dataset(\"/home/fm001/hzl/Data/qa/\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1508, in load_dataset_builder\r\n builder_cls = import_main_class(dataset_module.module_path)\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 115, in import_main_class\r\n module = importlib.import_module(module_path)\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/home/fm001/.cache/huggingface/modules/datasets_modules/datasets/qa/b8b9f481eff9d17b769b4b50f30a51da32b47c94d1af4d2bdffb9fc2c589513a/qa.py\", line 2, in <module>\r\n mydata = load_dataset(\"/home/fm001/hzl/Data/qa/\")\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1767, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py\", line 1524, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\nTypeError: 'NoneType' object is not callable\r\n\r\nAnd I follow the setting with 
https://huggingface.co/docs/datasets/dataset_script"
] | 2023-05-10T02:39:28 | 2023-05-15T03:51:36 | null | NONE | null | null | null | ### Describe the bug
When I use DeepSpeed to train a model with my own "XXX.csv" dataset, I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1767, in load_dataset
builder_instance = load_dataset_builder(
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1498, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1217, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /home/fm001/hzl/Data/qa.csv/qa.csv.py or any data file in the same directory.
### Steps to reproduce the bug
My code is (a corrected loading sketch follows this record):
from datasets import load_dataset
mydata = load_dataset("/home/fm001/hzl/Data/qa.csv")
### Expected behavior
。。。
### Environment info
。。。 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5837/timeline | null | null | false |
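Putting the maintainer's answer above into runnable form — a sketch of loading local CSV files with the packaged `csv` builder; the directory layout and glob patterns below are illustrative:

```python
from datasets import load_dataset

# Single file: pass the path via data_files instead of as the first argument.
mydata = load_dataset("csv", data_files=["/home/fm001/hzl/Data/qa.csv"])

# Whole directory: local glob patterns are supported, optionally per split.
splits = load_dataset(
    "csv",
    data_files={
        "train": "/home/fm001/hzl/Data/qa/train_*.csv",  # hypothetical file names
        "test": "/home/fm001/hzl/Data/qa/test_*.csv",
    },
)
print(splits["train"][0])
```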
https://api.github.com/repos/huggingface/datasets/issues/5836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5836/comments | https://api.github.com/repos/huggingface/datasets/issues/5836/events | https://github.com/huggingface/datasets/pull/5836 | 1,702,773,316 | PR_kwDODunzps5QIgzu | 5,836 | [docs] Custom decoding transforms | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5836). All of your documentation changes will be reflected on that endpoint.",
"The error seems unrelated to the changes, so feel free to merge.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006562 / 0.011353 (-0.004791) | 0.004568 / 0.011008 (-0.006440) | 0.098151 / 0.038508 (0.059643) | 0.028117 / 0.023109 (0.005008) | 0.305442 / 0.275898 (0.029544) | 0.338288 / 0.323480 (0.014808) | 0.005012 / 0.007986 (-0.002973) | 0.003415 / 0.004328 (-0.000913) | 0.075022 / 0.004250 (0.070771) | 0.036869 / 0.037052 (-0.000183) | 0.301427 / 0.258489 (0.042937) | 0.348485 / 0.293841 (0.054644) | 0.030761 / 0.128546 (-0.097785) | 0.011461 / 0.075646 (-0.064185) | 0.321987 / 0.419271 (-0.097285) | 0.042885 / 0.043533 (-0.000648) | 0.300691 / 0.255139 (0.045552) | 0.333208 / 0.283200 (0.050008) | 0.090203 / 0.141683 (-0.051480) | 1.459744 / 1.452155 (0.007590) | 1.522960 / 1.492716 (0.030243) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213219 / 0.018006 (0.195213) | 0.408118 / 0.000490 (0.407629) | 0.003716 / 0.000200 (0.003516) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023060 / 0.037411 (-0.014351) | 0.097423 / 0.014526 (0.082897) | 0.103988 / 0.176557 (-0.072568) | 0.162793 / 0.737135 (-0.574343) | 0.108282 / 0.296338 (-0.188056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431628 / 0.215209 (0.216419) | 4.300881 / 2.077655 (2.223226) | 2.058853 / 1.504120 (0.554733) | 1.897910 / 1.541195 (0.356715) | 1.991723 / 1.468490 
(0.523233) | 0.699686 / 4.584777 (-3.885091) | 3.395004 / 3.745712 (-0.350708) | 1.841613 / 5.269862 (-3.428248) | 1.152347 / 4.565676 (-3.413330) | 0.082517 / 0.424275 (-0.341758) | 0.012323 / 0.007607 (0.004715) | 0.535812 / 0.226044 (0.309767) | 5.374103 / 2.268929 (3.105174) | 2.429662 / 55.444624 (-53.014962) | 2.097199 / 6.876477 (-4.779277) | 2.172625 / 2.142072 (0.030552) | 0.810156 / 4.805227 (-3.995071) | 0.151629 / 6.500664 (-6.349035) | 0.066528 / 0.075469 (-0.008941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220667 / 1.841788 (-0.621121) | 13.696976 / 8.074308 (5.622668) | 14.042916 / 10.191392 (3.851524) | 0.129626 / 0.680424 (-0.550798) | 0.016593 / 0.534201 (-0.517607) | 0.383747 / 0.579283 (-0.195536) | 0.386872 / 0.434364 (-0.047492) | 0.456524 / 0.540337 (-0.083813) | 0.545033 / 1.386936 (-0.841903) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006361 / 0.011353 (-0.004992) | 0.004516 / 0.011008 (-0.006493) | 0.077155 / 0.038508 (0.038647) | 0.027239 / 0.023109 (0.004130) | 0.359892 / 0.275898 (0.083994) | 0.391994 / 0.323480 (0.068514) | 0.004950 / 0.007986 (-0.003036) | 0.003379 / 0.004328 (-0.000949) | 0.077057 / 0.004250 (0.072806) | 0.039562 / 0.037052 (0.002509) | 0.364244 / 0.258489 (0.105755) | 0.416033 / 0.293841 (0.122192) | 0.031049 / 0.128546 (-0.097497) | 0.011479 / 0.075646 (-0.064167) | 0.086479 / 0.419271 (-0.332793) | 0.039381 / 0.043533 (-0.004151) | 0.372143 / 0.255139 (0.117004) | 0.388569 / 0.283200 (0.105369) | 0.090954 / 0.141683 (-0.050728) | 1.540957 / 1.452155 (0.088802) | 1.596841 / 1.492716 (0.104125) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221130 / 0.018006 (0.203123) | 0.403728 / 0.000490 (0.403238) | 0.003172 / 0.000200 (0.002972) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024963 / 0.037411 (-0.012449) | 0.101065 / 0.014526 (0.086539) | 0.110846 / 0.176557 (-0.065710) | 0.158578 / 0.737135 (-0.578557) | 0.112235 / 0.296338 (-0.184104) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457320 / 0.215209 (0.242111) | 4.548094 / 2.077655 (2.470439) | 2.175376 / 1.504120 (0.671256) | 1.964755 / 1.541195 (0.423561) | 2.008128 / 1.468490 (0.539638) | 0.702448 / 4.584777 (-3.882329) | 3.437595 / 3.745712 (-0.308117) | 3.009871 / 5.269862 (-2.259990) | 1.558181 / 4.565676 (-3.007496) | 0.082568 / 0.424275 (-0.341707) | 0.012371 / 0.007607 (0.004764) | 0.550688 / 0.226044 (0.324644) | 5.534210 / 2.268929 (3.265282) | 2.649605 / 55.444624 (-52.795020) | 2.317293 / 6.876477 (-4.559184) | 2.351525 / 2.142072 (0.209453) | 0.808971 / 4.805227 (-3.996256) | 0.152737 / 6.500664 (-6.347927) | 0.068416 / 0.075469 (-0.007053) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340219 / 1.841788 (-0.501569) | 13.903388 / 8.074308 (5.829080) | 13.063477 / 10.191392 (2.872085) | 0.130216 / 0.680424 (-0.550208) | 0.016522 / 0.534201 (-0.517679) | 0.398946 / 0.579283 (-0.180337) | 0.382450 / 0.434364 (-0.051914) | 0.491007 / 0.540337 (-0.049330) | 0.577747 / 1.386936 (-0.809189) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#15c37ed142e4fbcb8c00ae62d4c71c84ce41959a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007812 / 0.011353 (-0.003541) | 0.005563 / 0.011008 (-0.005446) | 0.099372 / 0.038508 (0.060864) | 0.035629 / 0.023109 (0.012520) | 0.301457 / 0.275898 (0.025559) | 0.339136 / 0.323480 (0.015656) | 0.006152 / 0.007986 (-0.001834) | 0.005843 / 0.004328 (0.001515) | 0.075280 / 0.004250 (0.071030) | 0.052789 / 0.037052 (0.015736) | 0.301805 / 0.258489 (0.043316) | 0.347918 / 0.293841 (0.054078) | 0.036182 / 0.128546 (-0.092364) | 0.012655 / 0.075646 (-0.062991) | 0.334428 / 0.419271 (-0.084844) | 0.062746 / 0.043533 (0.019213) | 0.296932 / 0.255139 (0.041793) | 0.314115 / 0.283200 (0.030916) | 0.121291 / 0.141683 (-0.020392) | 1.453252 / 1.452155 (0.001097) | 1.564714 / 1.492716 (0.071997) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243810 / 0.018006 (0.225804) | 0.547129 / 0.000490 (0.546640) | 0.004666 / 0.000200 (0.004466) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028214 / 0.037411 (-0.009197) | 0.108878 / 0.014526 (0.094352) | 0.122313 / 0.176557 (-0.054243) | 0.182412 / 0.737135 (-0.554723) | 0.127014 / 0.296338 (-0.169324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423946 / 0.215209 (0.208737) | 4.207112 / 2.077655 (2.129457) | 2.048658 / 1.504120 (0.544538) | 1.843593 / 1.541195 (0.302398) | 1.952426 / 1.468490 
(0.483936) | 0.712098 / 4.584777 (-3.872679) | 3.824971 / 3.745712 (0.079258) | 3.507141 / 5.269862 (-1.762721) | 1.868866 / 4.565676 (-2.696810) | 0.087895 / 0.424275 (-0.336380) | 0.012783 / 0.007607 (0.005176) | 0.524087 / 0.226044 (0.298042) | 5.246498 / 2.268929 (2.977570) | 2.495944 / 55.444624 (-52.948680) | 2.126779 / 6.876477 (-4.749698) | 2.315545 / 2.142072 (0.173472) | 0.859546 / 4.805227 (-3.945681) | 0.173457 / 6.500664 (-6.327208) | 0.067483 / 0.075469 (-0.007986) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.173851 / 1.841788 (-0.667937) | 15.091913 / 8.074308 (7.017605) | 14.640035 / 10.191392 (4.448643) | 0.168498 / 0.680424 (-0.511926) | 0.017513 / 0.534201 (-0.516688) | 0.425770 / 0.579283 (-0.153513) | 0.434248 / 0.434364 (-0.000116) | 0.504204 / 0.540337 (-0.036134) | 0.616885 / 1.386936 (-0.770051) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007775 / 0.011353 (-0.003578) | 0.005153 / 0.011008 (-0.005855) | 0.075461 / 0.038508 (0.036953) | 0.034994 / 0.023109 (0.011885) | 0.372389 / 0.275898 (0.096491) | 0.397911 / 0.323480 (0.074431) | 0.006572 / 0.007986 (-0.001413) | 0.005549 / 0.004328 (0.001220) | 0.075101 / 0.004250 (0.070851) | 0.054014 / 0.037052 (0.016962) | 0.368964 / 0.258489 (0.110475) | 0.425353 / 0.293841 (0.131512) | 0.035546 / 0.128546 (-0.093001) | 0.012707 / 0.075646 (-0.062939) | 0.087418 / 0.419271 (-0.331853) | 0.046425 / 0.043533 (0.002893) | 0.363982 / 0.255139 (0.108843) | 0.376421 / 0.283200 (0.093221) | 0.105369 / 0.141683 (-0.036314) | 1.494408 / 1.452155 (0.042253) | 1.596783 / 1.492716 (0.104067) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.258780 / 0.018006 (0.240773) | 0.533373 / 0.000490 (0.532883) | 0.000432 / 0.000200 (0.000232) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030687 / 0.037411 (-0.006725) | 0.110231 / 0.014526 (0.095705) | 0.123738 / 0.176557 (-0.052819) | 0.171999 / 0.737135 (-0.565137) | 0.127673 / 0.296338 (-0.168665) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448058 / 0.215209 (0.232849) | 4.459381 / 2.077655 (2.381726) | 2.234020 / 1.504120 (0.729900) | 2.038616 / 1.541195 (0.497421) | 2.123795 / 1.468490 (0.655305) | 0.702664 / 4.584777 (-3.882113) | 3.837133 / 3.745712 (0.091420) | 2.138574 / 5.269862 (-3.131287) | 1.375955 / 4.565676 (-3.189722) | 0.086996 / 0.424275 (-0.337280) | 0.012461 / 0.007607 (0.004854) | 0.557978 / 0.226044 (0.331934) | 5.648613 / 2.268929 (3.379685) | 2.777829 / 55.444624 (-52.666796) | 2.392424 / 6.876477 (-4.484052) | 2.482823 / 2.142072 (0.340750) | 0.851891 / 4.805227 (-3.953336) | 0.171335 / 6.500664 (-6.329329) | 0.065041 / 0.075469 (-0.010428) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.319697 / 1.841788 (-0.522091) | 15.748688 / 8.074308 (7.674380) | 13.397042 / 10.191392 (3.205650) | 0.166424 / 0.680424 (-0.514000) | 0.017755 / 0.534201 (-0.516446) | 0.424989 / 0.579283 (-0.154294) | 0.424705 / 0.434364 (-0.009659) | 0.494190 / 0.540337 (-0.046147) | 0.588315 / 1.386936 (-0.798622) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#15c37ed142e4fbcb8c00ae62d4c71c84ce41959a \"CML watermark\")\n"
] | 2023-05-09T21:21:41 | 2023-05-15T07:36:12 | 2023-05-10T20:23:03 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5836",
"html_url": "https://github.com/huggingface/datasets/pull/5836",
"diff_url": "https://github.com/huggingface/datasets/pull/5836.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5836.patch",
"merged_at": "2023-05-10T20:23:03"
} | Adds a custom decoding transform solution to the docs to fix #5782. (An illustrative sketch of such a transform follows this record.) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5836/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5836/timeline | null | null | true |
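To make the documentation change above concrete, here is a sketch of a custom decoding transform; the dataset name and the grayscale conversion are illustrative, not the PR's exact docs content:

```python
import io

from datasets import Image, load_dataset
from PIL import Image as PILImage

ds = load_dataset("beans", split="train")          # any image dataset works; "beans" is just an example
ds = ds.cast_column("image", Image(decode=False))  # keep raw {"bytes", "path"} dicts, skip automatic decoding

def decode_grayscale(batch):
    # Custom decoding: open the raw bytes (or fall back to the file path) and convert to grayscale.
    batch["image"] = [
        PILImage.open(io.BytesIO(img["bytes"]) if img["bytes"] is not None else img["path"]).convert("L")
        for img in batch["image"]
    ]
    return batch

ds.set_transform(decode_grayscale)  # applied lazily on access
print(ds[0]["image"].mode)          # "L"
```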
https://api.github.com/repos/huggingface/datasets/issues/5835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5835/comments | https://api.github.com/repos/huggingface/datasets/issues/5835/events | https://github.com/huggingface/datasets/pull/5835 | 1,702,522,620 | PR_kwDODunzps5QHquR | 5,835 | Always set nullable fields in the writer | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006640 / 0.011353 (-0.004713) | 0.004606 / 0.011008 (-0.006402) | 0.098870 / 0.038508 (0.060362) | 0.028201 / 0.023109 (0.005092) | 0.304396 / 0.275898 (0.028498) | 0.339804 / 0.323480 (0.016324) | 0.005011 / 0.007986 (-0.002974) | 0.003530 / 0.004328 (-0.000799) | 0.075223 / 0.004250 (0.070973) | 0.037922 / 0.037052 (0.000870) | 0.310273 / 0.258489 (0.051784) | 0.348324 / 0.293841 (0.054483) | 0.030181 / 0.128546 (-0.098365) | 0.011584 / 0.075646 (-0.064062) | 0.322637 / 0.419271 (-0.096635) | 0.043119 / 0.043533 (-0.000414) | 0.314514 / 0.255139 (0.059375) | 0.334384 / 0.283200 (0.051185) | 0.092551 / 0.141683 (-0.049132) | 1.496694 / 1.452155 (0.044539) | 1.555426 / 1.492716 (0.062710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205078 / 0.018006 (0.187072) | 0.399200 / 0.000490 (0.398710) | 0.004881 / 0.000200 (0.004681) | 0.000200 / 0.000054 (0.000146) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025042 / 0.037411 (-0.012369) | 0.101501 / 0.014526 (0.086975) | 0.107430 / 0.176557 (-0.069127) | 0.170107 / 0.737135 (-0.567028) | 0.111253 / 0.296338 (-0.185086) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460358 / 0.215209 (0.245149) | 4.592037 / 2.077655 (2.514383) | 2.222612 / 1.504120 (0.718493) | 2.022804 / 1.541195 (0.481610) | 2.040824 / 1.468490 
(0.572334) | 0.700485 / 4.584777 (-3.884292) | 3.427847 / 3.745712 (-0.317866) | 2.836916 / 5.269862 (-2.432946) | 1.505055 / 4.565676 (-3.060621) | 0.083206 / 0.424275 (-0.341069) | 0.046492 / 0.007607 (0.038885) | 0.555562 / 0.226044 (0.329518) | 5.563574 / 2.268929 (3.294645) | 2.635273 / 55.444624 (-52.809351) | 2.299377 / 6.876477 (-4.577100) | 2.394512 / 2.142072 (0.252440) | 0.809541 / 4.805227 (-3.995686) | 0.151814 / 6.500664 (-6.348850) | 0.067241 / 0.075469 (-0.008228) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188396 / 1.841788 (-0.653392) | 13.714596 / 8.074308 (5.640288) | 14.076906 / 10.191392 (3.885514) | 0.143447 / 0.680424 (-0.536977) | 0.016514 / 0.534201 (-0.517687) | 0.383075 / 0.579283 (-0.196209) | 0.386997 / 0.434364 (-0.047367) | 0.441941 / 0.540337 (-0.098396) | 0.522145 / 1.386936 (-0.864791) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006266 / 0.011353 (-0.005086) | 0.004562 / 0.011008 (-0.006446) | 0.077472 / 0.038508 (0.038964) | 0.027596 / 0.023109 (0.004486) | 0.400498 / 0.275898 (0.124600) | 0.406728 / 0.323480 (0.083248) | 0.004745 / 0.007986 (-0.003241) | 0.003375 / 0.004328 (-0.000954) | 0.076645 / 0.004250 (0.072394) | 0.037756 / 0.037052 (0.000703) | 0.415183 / 0.258489 (0.156694) | 0.413758 / 0.293841 (0.119917) | 0.030624 / 0.128546 (-0.097922) | 0.011525 / 0.075646 (-0.064121) | 0.086033 / 0.419271 (-0.333238) | 0.039307 / 0.043533 (-0.004226) | 0.418192 / 0.255139 (0.163053) | 0.403152 / 0.283200 (0.119952) | 0.094141 / 0.141683 (-0.047542) | 1.459012 / 1.452155 (0.006857) | 1.546493 / 1.492716 (0.053777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239494 / 0.018006 (0.221488) | 0.420918 / 0.000490 (0.420428) | 0.000411 / 0.000200 (0.000211) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024525 / 0.037411 (-0.012886) | 0.099793 / 0.014526 (0.085267) | 0.105888 / 0.176557 (-0.070669) | 0.155912 / 0.737135 (-0.581223) | 0.109937 / 0.296338 (-0.186401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470108 / 0.215209 (0.254899) | 4.696390 / 2.077655 (2.618735) | 2.467841 / 1.504120 (0.963721) | 2.275012 / 1.541195 (0.733818) | 2.430736 / 1.468490 (0.962245) | 0.700442 / 4.584777 (-3.884335) | 3.458451 / 3.745712 (-0.287261) | 1.921120 / 5.269862 (-3.348742) | 1.183292 / 4.565676 (-3.382384) | 0.083985 / 0.424275 (-0.340290) | 0.012510 / 0.007607 (0.004903) | 0.589066 / 0.226044 (0.363022) | 5.896070 / 2.268929 (3.627141) | 2.935379 / 55.444624 (-52.509245) | 2.599524 / 6.876477 (-4.276953) | 2.663426 / 2.142072 (0.521354) | 0.812096 / 4.805227 (-3.993131) | 0.152559 / 6.500664 (-6.348105) | 0.066906 / 0.075469 (-0.008563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333341 / 1.841788 (-0.508446) | 14.441667 / 8.074308 (6.367359) | 14.754069 / 10.191392 (4.562677) | 0.155707 / 0.680424 (-0.524716) | 0.016983 / 0.534201 (-0.517218) | 0.389386 / 0.579283 (-0.189897) | 0.394106 / 0.434364 (-0.040258) | 0.447355 / 0.540337 (-0.092982) | 0.533142 / 1.386936 (-0.853794) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#99ee4467ce77f8f718159a535e237dd8790b5bed \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007801 / 0.011353 (-0.003552) | 0.004884 / 0.011008 (-0.006124) | 0.114754 / 0.038508 (0.076245) | 0.040427 / 0.023109 (0.017318) | 0.402064 / 0.275898 (0.126166) | 0.428830 / 0.323480 (0.105350) | 0.006429 / 0.007986 (-0.001556) | 0.004394 / 0.004328 (0.000066) | 0.087681 / 0.004250 (0.083431) | 0.053684 / 0.037052 (0.016632) | 0.399967 / 0.258489 (0.141478) | 0.445298 / 0.293841 (0.151457) | 0.033194 / 0.128546 (-0.095352) | 0.010288 / 0.075646 (-0.065359) | 0.390719 / 0.419271 (-0.028552) | 0.059311 / 0.043533 (0.015778) | 0.393651 / 0.255139 (0.138512) | 0.418395 / 0.283200 (0.135196) | 0.121494 / 0.141683 (-0.020189) | 1.735470 / 1.452155 (0.283315) | 1.820485 / 1.492716 (0.327769) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012887 / 0.018006 (-0.005119) | 0.491652 / 0.000490 (0.491162) | 0.005481 / 0.000200 (0.005281) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030931 / 0.037411 (-0.006480) | 0.125212 / 0.014526 (0.110686) | 0.136004 / 0.176557 (-0.040552) | 0.201686 / 0.737135 (-0.535449) | 0.140181 / 0.296338 (-0.156157) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475003 / 0.215209 (0.259794) | 4.743918 / 2.077655 (2.666263) | 2.149422 / 1.504120 (0.645302) | 1.925016 / 1.541195 (0.383821) | 2.061441 / 1.468490 
(0.592951) | 0.619845 / 4.584777 (-3.964932) | 4.534691 / 3.745712 (0.788979) | 2.248198 / 5.269862 (-3.021664) | 1.409868 / 4.565676 (-3.155808) | 0.080265 / 0.424275 (-0.344010) | 0.014455 / 0.007607 (0.006848) | 0.597810 / 0.226044 (0.371765) | 5.845492 / 2.268929 (3.576564) | 2.729139 / 55.444624 (-52.715486) | 2.313879 / 6.876477 (-4.562598) | 2.418763 / 2.142072 (0.276690) | 0.748687 / 4.805227 (-4.056540) | 0.165278 / 6.500664 (-6.335387) | 0.076848 / 0.075469 (0.001379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.416349 / 1.841788 (-0.425439) | 17.440903 / 8.074308 (9.366595) | 17.025733 / 10.191392 (6.834341) | 0.167428 / 0.680424 (-0.512995) | 0.020484 / 0.534201 (-0.513717) | 0.470273 / 0.579283 (-0.109010) | 0.494380 / 0.434364 (0.060016) | 0.566131 / 0.540337 (0.025794) | 0.690444 / 1.386936 (-0.696492) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007695 / 0.011353 (-0.003657) | 0.005551 / 0.011008 (-0.005457) | 0.087812 / 0.038508 (0.049304) | 0.039107 / 0.023109 (0.015998) | 0.436461 / 0.275898 (0.160563) | 0.465116 / 0.323480 (0.141636) | 0.006590 / 0.007986 (-0.001396) | 0.004672 / 0.004328 (0.000343) | 0.087109 / 0.004250 (0.082858) | 0.054227 / 0.037052 (0.017175) | 0.442660 / 0.258489 (0.184171) | 0.484296 / 0.293841 (0.190455) | 0.033308 / 0.128546 (-0.095238) | 0.010780 / 0.075646 (-0.064866) | 0.095255 / 0.419271 (-0.324016) | 0.054399 / 0.043533 (0.010866) | 0.431734 / 0.255139 (0.176595) | 0.453583 / 0.283200 (0.170383) | 0.116067 / 0.141683 (-0.025616) | 1.780701 / 1.452155 (0.328546) | 1.851077 / 1.492716 (0.358360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228000 / 0.018006 (0.209994) | 0.485733 / 0.000490 (0.485243) | 0.003955 / 0.000200 (0.003755) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033974 / 0.037411 (-0.003437) | 0.134504 / 0.014526 (0.119978) | 0.144421 / 0.176557 (-0.032135) | 0.202171 / 0.737135 (-0.534964) | 0.152015 / 0.296338 (-0.144323) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.520462 / 0.215209 (0.305253) | 5.233339 / 2.077655 (3.155684) | 2.575013 / 1.504120 (1.070893) | 2.384119 / 1.541195 (0.842924) | 2.403856 / 1.468490 (0.935366) | 0.618656 / 4.584777 (-3.966121) | 4.663582 / 3.745712 (0.917870) | 3.738594 / 5.269862 (-1.531268) | 1.794903 / 4.565676 (-2.770773) | 0.077903 / 0.424275 (-0.346372) | 0.014681 / 0.007607 (0.007074) | 0.648615 / 0.226044 (0.422570) | 6.503721 / 2.268929 (4.234792) | 3.326239 / 55.444624 (-52.118386) | 2.989791 / 6.876477 (-3.886685) | 2.995479 / 2.142072 (0.853407) | 0.765483 / 4.805227 (-4.039744) | 0.169783 / 6.500664 (-6.330882) | 0.077533 / 0.075469 (0.002064) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.518736 / 1.841788 (-0.323051) | 17.989119 / 8.074308 (9.914811) | 15.484365 / 10.191392 (5.292973) | 0.168507 / 0.680424 (-0.511917) | 0.020289 / 0.534201 (-0.513912) | 0.467491 / 0.579283 (-0.111793) | 0.501714 / 0.434364 (0.067350) | 0.553418 / 0.540337 (0.013081) | 0.662199 / 1.386936 (-0.724737) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5ebda17e4362bd5f6123543a14fa526a3b54481a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007044 / 0.011353 (-0.004309) | 0.004750 / 0.011008 (-0.006258) | 0.096694 / 0.038508 (0.058186) | 0.035682 / 0.023109 (0.012573) | 0.300613 / 0.275898 (0.024715) | 0.334831 / 0.323480 (0.011351) | 0.006428 / 0.007986 (-0.001558) | 0.004456 / 0.004328 (0.000128) | 0.075060 / 0.004250 (0.070810) | 0.053166 / 0.037052 (0.016114) | 0.299601 / 0.258489 (0.041112) | 0.359521 / 0.293841 (0.065680) | 0.028072 / 0.128546 (-0.100474) | 0.009216 / 0.075646 (-0.066430) | 0.328895 / 0.419271 (-0.090377) | 0.050881 / 0.043533 (0.007349) | 0.298265 / 0.255139 (0.043126) | 0.318095 / 0.283200 (0.034896) | 0.116046 / 0.141683 (-0.025637) | 1.491312 / 1.452155 (0.039157) | 1.556053 / 1.492716 (0.063337) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014248 / 0.018006 (-0.003758) | 0.551455 / 0.000490 (0.550965) | 0.006096 / 0.000200 (0.005897) | 0.000145 / 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030598 / 0.037411 (-0.006813) | 0.109549 / 0.014526 (0.095023) | 0.123207 / 0.176557 (-0.053350) | 0.181940 / 0.737135 (-0.555195) | 0.128965 / 0.296338 (-0.167374) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404552 / 0.215209 (0.189343) | 4.030674 / 2.077655 (1.953020) | 1.841819 / 1.504120 (0.337699) | 1.650055 / 1.541195 (0.108860) | 1.763208 / 1.468490 
(0.294718) | 0.532715 / 4.584777 (-4.052062) | 3.774810 / 3.745712 (0.029098) | 3.221927 / 5.269862 (-2.047934) | 1.607974 / 4.565676 (-2.957702) | 0.067160 / 0.424275 (-0.357116) | 0.012479 / 0.007607 (0.004872) | 0.498801 / 0.226044 (0.272757) | 4.980567 / 2.268929 (2.711638) | 2.356017 / 55.444624 (-53.088608) | 2.018975 / 6.876477 (-4.857502) | 2.218343 / 2.142072 (0.076270) | 0.645714 / 4.805227 (-4.159514) | 0.145470 / 6.500664 (-6.355195) | 0.065666 / 0.075469 (-0.009803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205756 / 1.841788 (-0.636031) | 15.682779 / 8.074308 (7.608470) | 14.748987 / 10.191392 (4.557595) | 0.167105 / 0.680424 (-0.513319) | 0.017554 / 0.534201 (-0.516647) | 0.393924 / 0.579283 (-0.185359) | 0.432659 / 0.434364 (-0.001705) | 0.502033 / 0.540337 (-0.038304) | 0.602244 / 1.386936 (-0.784692) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007077 / 0.011353 (-0.004276) | 0.004911 / 0.011008 (-0.006097) | 0.075120 / 0.038508 (0.036612) | 0.035460 / 0.023109 (0.012351) | 0.362569 / 0.275898 (0.086671) | 0.398995 / 0.323480 (0.075515) | 0.006587 / 0.007986 (-0.001398) | 0.004571 / 0.004328 (0.000242) | 0.074647 / 0.004250 (0.070397) | 0.057331 / 0.037052 (0.020279) | 0.365123 / 0.258489 (0.106634) | 0.408617 / 0.293841 (0.114776) | 0.028911 / 0.128546 (-0.099635) | 0.009533 / 0.075646 (-0.066113) | 0.081566 / 0.419271 (-0.337705) | 0.048841 / 0.043533 (0.005308) | 0.367245 / 0.255139 (0.112106) | 0.375975 / 0.283200 (0.092776) | 0.123211 / 0.141683 (-0.018472) | 1.471588 / 1.452155 (0.019433) | 1.569342 / 1.492716 (0.076625) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.328443 / 0.018006 (0.310436) | 0.541402 / 0.000490 (0.540912) | 0.000440 / 0.000200 (0.000240) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030772 / 0.037411 (-0.006639) | 0.115833 / 0.014526 (0.101307) | 0.127837 / 0.176557 (-0.048719) | 0.180897 / 0.737135 (-0.556238) | 0.132458 / 0.296338 (-0.163881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445979 / 0.215209 (0.230770) | 4.453101 / 2.077655 (2.375447) | 2.276625 / 1.504120 (0.772505) | 2.102167 / 1.541195 (0.560972) | 2.181583 / 1.468490 (0.713093) | 0.525069 / 4.584777 (-4.059708) | 3.803446 / 3.745712 (0.057734) | 1.954173 / 5.269862 (-3.315688) | 1.088734 / 4.565676 (-3.476942) | 0.066020 / 0.424275 (-0.358255) | 0.012158 / 0.007607 (0.004551) | 0.546828 / 0.226044 (0.320783) | 5.454060 / 2.268929 (3.185132) | 2.756154 / 55.444624 (-52.688470) | 2.476501 / 6.876477 (-4.399976) | 2.525875 / 2.142072 (0.383803) | 0.647515 / 4.805227 (-4.157712) | 0.144511 / 6.500664 (-6.356153) | 0.067060 / 0.075469 (-0.008409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306456 / 1.841788 (-0.535332) | 15.822623 / 8.074308 (7.748315) | 14.929114 / 10.191392 (4.737721) | 0.168650 / 0.680424 (-0.511773) | 0.018043 / 0.534201 (-0.516158) | 0.396712 / 0.579283 (-0.182572) | 0.425800 / 0.434364 (-0.008564) | 0.466452 / 0.540337 (-0.073885) | 0.564370 / 1.386936 (-0.822566) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5ebda17e4362bd5f6123543a14fa526a3b54481a \"CML watermark\")\n"
] | 2023-05-09T18:16:59 | 2023-05-23T16:10:29 | 2023-05-19T13:04:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5835",
"html_url": "https://github.com/huggingface/datasets/pull/5835",
"diff_url": "https://github.com/huggingface/datasets/pull/5835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5835.patch",
"merged_at": "2023-05-19T13:04:30"
} | This fixes loading of e.g. Parquet data with non-nullable fields.
Indeed, `datasets.Features` doesn't support non-nullable fields, which can lead to data that cannot be concatenated due to an Arrow schema mismatch. (A minimal reproduction sketch follows this record.) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5835/timeline | null | null | true |
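To make the schema issue in the PR description above concrete, a minimal PyArrow sketch of how a nullability mismatch alone can block concatenation (illustrative, not the PR's actual test code):

```python
import pyarrow as pa

# Two tables that differ only in the nullability flag of their "id" field.
t1 = pa.table({"id": [1, 2]}, schema=pa.schema([pa.field("id", pa.int64(), nullable=False)]))
t2 = pa.table({"id": [3, 4]}, schema=pa.schema([pa.field("id", pa.int64(), nullable=True)]))

try:
    pa.concat_tables([t1, t2])  # schemas compare unequal, so concatenation fails
except pa.ArrowInvalid as e:
    print(e)  # e.g. "Schema at index 1 was different: ..."

# Normalizing both fields to nullable=True (what the writer now does) makes the tables concatenable.
t1_nullable = t1.cast(pa.schema([pa.field("id", pa.int64(), nullable=True)]))
combined = pa.concat_tables([t1_nullable, t2])
print(combined.num_rows)  # 4
```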